Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods that is more accessible than a rigorous proof based on Euclid's Elements, one in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
On analyticity of linear waves scattered by a layered medium
NASA Astrophysics Data System (ADS)
Nicholls, David P.
2017-10-01
The scattering of linear waves by periodic structures is a crucial phenomenon in many branches of applied physics and engineering. In this paper we establish rigorous analytic results necessary for the proper numerical analysis of a class of High-Order Perturbation of Surfaces methods for simulating such waves. More specifically, we prove a theorem on existence and uniqueness of solutions to a system of partial differential equations which model the interaction of linear waves with a multiply layered periodic structure in three dimensions. This result provides hypotheses under which a rigorous numerical analysis could be conducted for recent generalizations to the methods of Operator Expansions, Field Expansions, and Transformed Field Expansions.
ERIC Educational Resources Information Center
Micceri, Theodore; Brigman, Leellen; Spatig, Robert
2009-01-01
An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…
Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.
Bohley, Christian; Heuer, Jana; Stannarius, Ralf
2005-12-01
We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that directly discretizes the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method has no practical alternative.
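The abstract describes FDTD as a scheme that marches electric and magnetic fields forward in time on a spatial grid. The sketch below is a minimal one-dimensional Yee-type update loop in Python; the grid, material profile, and Gaussian source are arbitrary illustrative choices, not the anisotropic liquid-crystal configuration treated in the paper.

```python
import numpy as np

# Minimal 1D FDTD (Yee-type) update loop: E and H live on staggered grids and
# are advanced in leapfrog time steps, which is the "space-grid-time-domain"
# idea described above. All parameter values are illustrative placeholders.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
nz, nt = 400, 800
dz = 1e-6                      # 1 micron cells
dt = 0.5 * dz / 3e8            # Courant-stable time step
eps_r = np.ones(nz)            # relative permittivity profile (vacuum here)

Ex = np.zeros(nz)
Hy = np.zeros(nz)

for n in range(nt):
    Hy[:-1] += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])              # update H from curl E
    Ex[1:] += dt / (eps0 * eps_r[1:] * dz) * (Hy[1:] - Hy[:-1])  # update E from curl H
    Ex[50] += np.exp(-((n - 60) / 20.0) ** 2)                    # soft Gaussian source
```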
Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udey, Ruth Norma
Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.
A Critical Review of Methods to Evaluate the Impact of FDA Regulatory Actions
Briesacher, Becky A.; Soumerai, Stephen B.; Zhang, Fang; Toh, Sengwee; Andrade, Susan E.; Wagner, Joann L.; Shoaibi, Azadeh; Gurwitz, Jerry H.
2013-01-01
Purpose: To conduct a synthesis of the literature on methods to evaluate the impacts of FDA regulatory actions, and identify best practices for future evaluations. Methods: We searched MEDLINE for manuscripts published between January 1948 and August 2011 that included terms related to FDA, regulatory actions, and empirical evaluation; the review additionally included FDA-identified literature. We used a modified Delphi method to identify preferred methodologies. We included studies with explicit methods to address threats to validity, and identified designs and analytic methods with strong internal validity that have been applied to other policy evaluations. Results: We included 18 studies out of 243 abstracts and papers screened. Overall, analytic rigor in prior evaluations of FDA regulatory actions varied considerably; less than a quarter of studies (22%) included control groups. Only 56% assessed changes in the use of substitute products/services, and 11% examined patient health outcomes. Among studies meeting minimal criteria of rigor, 50% found no impact or weak/modest impacts of FDA actions and 33% detected unintended consequences. Among those studies finding significant intended effects of FDA actions, all cited the importance of intensive communication efforts. There are preferred methods with strong internal validity that have yet to be applied to evaluations of FDA regulatory actions. Conclusions: Rigorous evaluations of the impact of FDA regulatory actions have been limited and infrequent. Several methods with strong internal validity are available to improve trustworthiness of future evaluations of FDA policies. PMID:23847020
The Importance of Method Selection in Determining Product Integrity for Nutrition Research
Mudge, Elizabeth M; Brown, Paula N
2016-01-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823
The Importance of Method Selection in Determining Product Integrity for Nutrition Research.
Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N
2016-03-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prowell, Stacy J; Symons, Christopher T
2015-01-01
Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.
An analytical method to identify and quantify trace levels of C5 to C12 perfluorocarboxylic acids (PFCAs) in articles of commerce (AOC) is developed and rigorously validated. Solid samples were extracted in methanol, and liquid samples were diluted with a solvent consisting of 60...
NASA Astrophysics Data System (ADS)
Antsiferov, SV; Sammal, AS; Deev, PV
2018-03-01
To determine the stress-strain state of multilayer support of vertical shafts, including cross-sectional deformation of the tubing rings relative to the design shape, the authors propose an analytical method based on the provisions of the mechanics of underground structures, in which the support and surrounding rock mass are treated as elements of an integrated deformable system. The method involves a rigorous solution of the corresponding elasticity problem, obtained using the mathematical apparatus of the theory of analytic functions of a complex variable. The design method is implemented as a software program allowing multivariate applied computation. Examples of the calculation are given.
Socioeconomic Status and Child Development: A Meta-Analysis
ERIC Educational Resources Information Center
Letourneau, Nicole Lyn; Duffett-Leger, Linda; Levac, Leah; Watson, Barry; Young-Morris, Catherine
2013-01-01
Lower socioeconomic status (SES) is widely accepted to have deleterious effects on the well-being and development of children and adolescents. However, rigorous meta-analytic methods have not been applied to determine the degree to which SES supports or limits children's and adolescents' behavioural, cognitive and language development. While…
ERIC Educational Resources Information Center
Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati
2007-01-01
Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…
A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses
Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert
2011-01-01
Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of the Drude weight is directly related to the Kubo formula for conductivity. However, the difficulty in evaluating this expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via the rigorous renormalization group. As a result, in the past years several universality results have been proven for this quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
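The gap between Gaussian-approximation and Chernoff-bound analyses described here can be illustrated with generic concentration inequalities. The sketch below uses hypothetical pulse numbers and probabilities (not the paper's protocol parameters) to compare a heuristic Gaussian deviation term with a distribution-free Hoeffding-style bound for a binomial count at a given failure probability; the paper's actual bounds are tighter and protocol-specific.

```python
import numpy as np
from scipy.stats import norm

# Generic finite-size deviation terms for an observed count with mean mu out
# of n pulses at failure probability eps. Parameter values are hypothetical.
n, p, eps = 1e10, 1e-3, 1e-10
mu = n * p

gaussian_dev = norm.ppf(1 - eps) * np.sqrt(mu)        # heuristic: ~ z_eps * sqrt(np), p small
hoeffding_dev = np.sqrt(0.5 * n * np.log(1.0 / eps))  # from P(X - mu >= t) <= exp(-2 t^2 / n)

print(f"Gaussian deviation term : {gaussian_dev:.3e}")
print(f"Hoeffding deviation term: {hoeffding_dev:.3e}")
```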
Family Meals and Child Academic and Behavioral Outcomes
ERIC Educational Resources Information Center
Miller, Daniel P.; Waldfogel, Jane; Han, Wen-Jui
2012-01-01
This study investigates the link between the frequency of family breakfasts and dinners and child academic and behavioral outcomes in a panel sample of 21,400 children aged 5-15. It complements previous work by examining younger and older children separately and by using information on a large number of controls and rigorous analytic methods to…
High-Contrast Gratings based Spoof Surface Plasmons
NASA Astrophysics Data System (ADS)
Li, Zhuo; Liu, Liangliang; Xu, Bingzheng; Ning, Pingping; Chen, Chen; Xu, Jia; Chen, Xinlei; Gu, Changqing; Qing, Quan
2016-02-01
In this work, we explore the existence of spoof surface plasmons (SSPs) supported by deep-subwavelength high-contrast gratings (HCGs) on a perfect electric conductor plane. The dispersion relation of the HCGs-based SSPs is derived analytically by combining multimode network theory with a rigorous mode matching method; it has nearly the same form as, and can be degenerated into, that of the SSPs arising from deep-subwavelength metallic gratings (MGs). Numerical simulations validate the analytical dispersion relation, and an effective medium approximation is also presented that yields the same analytical dispersion formula. This work sets up a unified theoretical framework for SSPs and opens up new vistas in surface plasmon optics.
Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere
NASA Technical Reports Server (NTRS)
Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.
1975-01-01
The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prentice, H. J.; Proud, W. G.
2006-07-28
A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to Mild Steel sheets of various thicknesses.
Analytic theory of orbit contraction
NASA Technical Reports Server (NTRS)
Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.
1977-01-01
The motion of a satellite in orbit subject to atmospheric force and the motion of a reentry vehicle are both governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincaré's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)’s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete (“grab”) samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples. PMID:28759607
Quantifying construction and demolition waste: an analytical review.
Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen
2014-09-01
Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.
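As a toy illustration of one of the categories identified above, the waste generation rate method scales an activity-specific per-area rate by project floor area. The sketch below uses hypothetical rates and project sizes, not figures from the review.

```python
# Toy use of the waste generation rate method: total waste is the floor area
# multiplied by an activity-specific per-area rate. All values are hypothetical.
generation_rate = {"construction": 0.05, "demolition": 1.1}   # tonnes per m^2
projects = [("new office block", "construction", 12_000),     # (name, activity, m^2)
            ("housing teardown", "demolition", 3_500)]

for name, activity, floor_area in projects:
    waste = generation_rate[activity] * floor_area
    print(f"{name}: ~{waste:,.0f} t of C&D waste")
```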
[Problems of food authenticity].
Czerwiecki, Ludwik
2004-01-01
In this review, data concerning food authenticity are presented. Typical examples of food adulteration are described; the best known are adulteration of vegetable and fruit products, wine, honey, olive oil, etc. Modern analytical techniques for the detection of food adulteration are discussed. Among physicochemical methods, isotopic techniques (SCIRA, IRMS, SNIF-NMR) are cited. The main spectral methods are ICP-AES, PyMS, FTIR, and NIR. Chromatographic techniques (GC, HPLC, HPAEC, HPTLC) with several kinds of detectors are described, and the ELISA and PCR techniques are mentioned as well. The role of chemometrics as a means of processing diverse analytical data is highlighted. The necessity of more rigorous food control is pointed out, to support all activities aimed at combating fraud in the food industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.
We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.; ...
2016-02-01
We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
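To make the optimization idea concrete, the sketch below sets up a toy single-period, knapsack-style program selection in PuLP: binary choices maximizing a capability score under a budget cap. CPAT's actual model is a multiphase mixed-integer linear program with far richer structure; the program names, scores, and costs here are invented purely for illustration.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Toy knapsack-style portfolio selection: pick programs to maximise a
# capability score under a budget cap. All names and numbers are hypothetical.
programs = {"vehicle_upgrade_A": (7, 4.0),   # name: (capability score, cost in $B)
            "vehicle_upgrade_B": (5, 2.5),
            "new_platform_C":    (9, 6.0)}
budget = 8.0

prob = LpProblem("toy_portfolio", LpMaximize)
x = {p: LpVariable(p, cat=LpBinary) for p in programs}
prob += lpSum(programs[p][0] * x[p] for p in programs)            # objective
prob += lpSum(programs[p][1] * x[p] for p in programs) <= budget  # budget cap
prob.solve()
print({p: int(x[p].value()) for p in programs})
```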
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flinn, D.G.; Hall, S.; Morris, J.
This volume describes the background research, the application of the proposed loss evaluation techniques, and the results. The research identified present loss calculation methods as appropriate, provided care was taken to represent the various system elements in sufficient detail. The literature search of past methods and typical data revealed that extreme caution in using typical values (load factor, etc.) should be taken to ensure that all factors were referred to the same time base (daily, weekly, etc.). The performance of the method (and computer program) proposed in this project was determined by comparison of results with a rigorous evaluation of losses on the Salt River Project system. This rigorous evaluation used statistical modeling of the entire system as well as explicit enumeration of all substation and distribution transformers. Further tests were conducted at Public Service Electric and Gas of New Jersey to check the appropriateness of the methods in a northern environment. Finally, sensitivity tests indicated which data elements' inaccuracies would most affect the determination of losses using the method developed in this project.
Graphical Descriptives: A Way to Improve Data Transparency and Methodological Rigor in Psychology.
Tay, Louis; Parrigon, Scott; Huang, Qiming; LeBreton, James M
2016-09-01
Several calls have recently been issued to the social sciences for enhanced transparency of research processes and enhanced rigor in the methodological treatment of data and data analytics. We propose the use of graphical descriptives (GDs) as one mechanism for responding to both of these calls. GDs provide a way to visually examine data. They serve as quick and efficient tools for checking data distributions, variable relations, and the potential appropriateness of different statistical analyses (e.g., do data meet the minimum assumptions for a particular analytic method). Consequently, we believe that GDs can promote increased transparency in the journal review process, encourage best practices for data analysis, and promote a more inductive approach to understanding psychological data. We illustrate the value of potentially including GDs as a step in the peer-review process and provide a user-friendly online resource (www.graphicaldescriptives.org) for researchers interested in including data visualizations in their research. We conclude with suggestions on how GDs can be expanded and developed to enhance transparency. © The Author(s) 2016.
Computer modeling of lung cancer diagnosis-to-treatment process
Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick
2015-01-01
We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their applications in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure for deriving closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
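As a sketch of the Markov chain style of model mentioned above, the Python fragment below builds a toy absorbing chain for a diagnosis-to-treatment pathway and uses the standard fundamental matrix to obtain the expected number of steps before treatment. The states and transition probabilities are hypothetical, not estimates from the paper.

```python
import numpy as np

# Toy absorbing Markov chain for a diagnosis-to-treatment pathway. Transient
# states: referral, imaging, biopsy/staging; the absorbing state is treatment.
# Transition probabilities are made up to illustrate the calculation.
Q = np.array([[0.2, 0.7, 0.0],    # referral -> (referral, imaging, staging)
              [0.0, 0.3, 0.6],    # imaging  -> (referral, imaging, staging)
              [0.0, 0.0, 0.4]])   # staging  -> (referral, imaging, staging)
N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix: expected visits to each state
expected_steps = N @ np.ones(3)   # expected number of steps until absorption (treatment)
print(expected_steps)             # from referral, imaging, and staging respectively
```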
Free-Energy Fluctuations and Chaos in the Sherrington-Kirkpatrick Model
NASA Astrophysics Data System (ADS)
Aspelmeier, T.
2008-03-01
The sample-to-sample fluctuations ΔF_N of the free energy in the Sherrington-Kirkpatrick model are shown rigorously to be related to bond chaos. Via this connection, the fluctuations become analytically accessible by replica methods. The replica calculation for bond chaos shows that the exponent μ governing the growth of the fluctuations with system size N, ΔF_N ∼ N^μ, is bounded by μ ≤ 1/4.
Selection and authentication of botanical materials for the development of analytical methods.
Applequist, Wendy L; Miller, James S
2013-05-01
Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.
Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System
NASA Astrophysics Data System (ADS)
Goluskin, David
2018-04-01
We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
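The core mechanism behind such bounds can be stated schematically; the display below is the generic auxiliary-function inequality that underlies sum-of-squares bounding of time averages, not the paper's exact formulation.

```latex
% Schematic auxiliary-function bound for \dot{x} = f(x): if a polynomial V
% satisfies the pointwise inequality on the left, then every bounded
% trajectory obeys the time-average bound on the right. Enforcing the
% inequality by requiring U - \phi - f\cdot\nabla V to be a sum of squares
% turns the search for V into a semidefinite program.
\[
  f(x)\cdot\nabla V(x) + \phi(x) \;\le\; U \quad \text{for all } x
  \qquad\Longrightarrow\qquad
  \limsup_{T\to\infty} \frac{1}{T}\int_0^T \phi\bigl(x(t)\bigr)\,dt \;\le\; U .
\]
```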
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, Jonathan M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫_0^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
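The numeric-then-analytic workflow described here can be sketched with a high-precision quadrature check. The example below verifies a classical Bessel moment identity (the integral of K_0 over the half line equals π/2) with mpmath; it is chosen only to illustrate the approach and is not one of the paper's unproven identities.

```python
from mpmath import mp, besselk, quad, pi, mpf

# High-precision check of a Bessel moment integral using a known classical
# identity: the integral of K_0(t) over [0, inf) equals pi/2.
mp.dps = 50                                       # work with 50 significant digits
numeric = quad(lambda t: besselk(0, t), [0, mp.inf])
print(numeric)
print(pi / 2)
print(abs(numeric - pi / 2) < mpf(10) ** -40)     # True: agreement to ~40 digits
```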
ERIC Educational Resources Information Center
Follette, William C.; Bonow, Jordan T.
2009-01-01
Whether explicitly acknowledged or not, behavior-analytic principles are at the heart of most, if not all, empirically supported therapies. However, the change process in psychotherapy is only now being rigorously studied. Functional analytic psychotherapy (FAP; Kohlenberg & Tsai, 1991; Tsai et al., 2009) explicitly identifies behavioral-change…
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
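A minimal sketch of the kernel idea described here, in Python with randomly generated genotype-like data (not any real cohort): build a Gaussian (RBF) similarity matrix over subjects and confirm that it is positive semidefinite, the requirement highlighted in the review.

```python
import numpy as np

# Build an RBF kernel similarity matrix over genotype-like feature vectors
# and verify positive semidefiniteness. Data are random placeholders.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(20, 100)).astype(float)   # 20 subjects x 100 SNP dosages

sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / X.shape[1])                      # RBF kernel, bandwidth = #features

eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)   # True: the kernel matrix is positive semidefinite
```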
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical image for subwavelength slanted grating. The diffraction characteristic of the slanted grating under Littrow configuration is revealed by the SMM as an equivalent rectangular grating, which is in good agreement with rigorous coupled-wave analysis. Based on the equivalence, we obtained an effective analytic solution for simplifying the design and optimization of a slanted grating. It offers a new approach for design of the slanted grating, e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.
Theory of ground state factorization in quantum cooperative systems.
Giampaolo, Salvatore M; Adesso, Gerardo; Illuminati, Fabrizio
2008-05-16
We introduce a general analytic approach to the study of factorization points and factorized ground states in quantum cooperative systems. The method allows us to determine rigorously the existence, location, and exact form of separable ground states in a large variety of, generally nonexactly solvable, spin models belonging to different universality classes. The theory applies to translationally invariant systems, irrespective of spatial dimensionality, and for spin-spin interactions of arbitrary range.
Monitoring low density avian populations: An example using Mountain Plovers
Dreitz, V.J.; Lukacs, P.M.; Knopf, F.L.
2006-01-01
Declines in avian populations highlight a need for rigorous, broad-scale monitoring programs to document trends in avian populations that occur in low densities across expansive landscapes. Accounting for the spatial variation and variation in detection probability inherent to monitoring programs is thought to be effort-intensive and time-consuming. We determined the feasibility of the analytical method developed by Royle and Nichols (2003), which uses presence-absence (detection-non-detection) field data, to estimate abundance of Mountain Plovers (Charadrius montanus) per sampling unit in agricultural fields, grassland, and prairie dog habitat in eastern Colorado. Field methods were easy to implement and results suggest that the analytical method provides valuable insight into population patterning among habitats. Mountain Plover abundance was highest in prairie dog habitat, slightly lower in agricultural fields, and substantially lower in grassland. These results provided valuable insight to focus future research into Mountain Plover ecology and conservation. © The Cooper Ornithological Society 2006.
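A sketch of a Royle-Nichols style model in Python with simulated data (not the plover surveys): site abundance is Poisson(λ), each individual is detected with probability r per visit, and the marginal likelihood of the detection/non-detection counts is maximized over (λ, r).

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

# Royle-Nichols style model: abundance N ~ Poisson(lambda); the chance of
# detecting the species on one visit to a site with N individuals is
# 1 - (1 - r)^N. Data below are simulated for illustration only.
rng = np.random.default_rng(1)
true_lam, true_r, n_sites, n_visits = 2.0, 0.3, 120, 4
N_true = rng.poisson(true_lam, n_sites)
y = rng.binomial(n_visits, 1.0 - (1.0 - true_r) ** N_true)   # detections per site

def neg_log_lik(params, n_max=60):
    lam = np.exp(params[0])                  # abundance mean, kept positive
    r = 1.0 / (1.0 + np.exp(-params[1]))     # detection probability in (0, 1)
    n = np.arange(n_max + 1)
    p_det = 1.0 - (1.0 - r) ** n             # per-visit detection prob given N = n
    prior = poisson.pmf(n, lam)
    # marginal likelihood of each site's detection count, summing over latent N
    site_lik = [np.sum(binom.pmf(y_i, n_visits, p_det) * prior) for y_i in y]
    return -np.sum(np.log(site_lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))     # estimated lambda and r
```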
2016-04-01
Disclaimer: The views expressed in this academic research paper are those of the author and do not reflect the official policy or position of the US...
...establishing unit level certified Masters of Analytic Tradecraft (MAT) analysts to be trained and entrusted to evaluate and rate the standards and
Research strategies that result in optimal data collection from the patient medical record
Gregory, Katherine E.; Radovinsky, Lucy
2010-01-01
Data obtained from the patient medical record are often a component of clinical research led by nurse investigators. The rigor of the data collection methods correlates to the reliability of the data and, ultimately, the analytical outcome of the study. Research strategies for reliable data collection from the patient medical record include the development of a precise data collection tool, the use of a coding manual, and ongoing communication with research staff. PMID:20974093
The Analytic Hierarchy Process and Participatory Decisionmaking
Daniel L. Schmoldt; Daniel L. Peterson; Robert L. Smith
1995-01-01
Managing natural resource lands requires social, as well as biophysical, considerations. Unfortunately, it is extremely difficult to accurately assess and quantify changing social preferences, and to aggregate conflicting opinions held by diverse social groups. The Analytic Hierarchy Process (AHP) provides a systematic, explicit, rigorous, and robust mechanism for...
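A minimal sketch of the AHP weight calculation in Python, with a hypothetical 3-criterion judgment matrix rather than one elicited in the cited work: priorities come from the principal eigenvector of a reciprocal pairwise comparison matrix, and the consistency ratio flags incoherent judgments.

```python
import numpy as np

# AHP priority weights from the principal eigenvector of a reciprocal
# pairwise comparison matrix; the 3x3 judgments below are hypothetical.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights, summing to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # Saaty's random index for n = 3 is ~0.58
print(w, cr)                              # CR < 0.10 is conventionally acceptable
```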
Chain representations of Open Quantum Systems and Lieb-Robinson like bounds for the dynamics
NASA Astrophysics Data System (ADS)
Woods, Mischa
2013-03-01
This talk is concerned with the mapping of the Hamiltonian of open quantum systems onto chain representations, which forms the basis for a rigorous theory of the interaction of a system with its environment. This mapping progresses as an interaction which gives rise to a sequence of residual spectral densities of the system. The rigorous mathematical properties of this mapping have been unknown so far. Here we develop the theory of secondary measures to derive an analytic expression for the sequence solely in terms of the initial measure and its associated orthogonal polynomials of the first and second kind. These mappings can be thought of as taking a highly nonlocal Hamiltonian to a local Hamiltonian. In the latter, a Lieb-Robinson like bound for the dynamics of the open quantum system makes sense. We develop analytical bounds on the error in observables of the system as a function of time when the semi-infinite chain is truncated at some finite length. The fact that this is possible shows that there is a finite "speed of sound" in these chain representations. This has many implications for the simulatability of open quantum systems of this type and demonstrates that a truncated chain can faithfully reproduce the dynamics at shorter times. These results make a significant and mathematically rigorous contribution to the understanding of the theory of open quantum systems and pave the way towards the efficient simulation of these systems, which, within standard methods, is often an intractable problem. EPSRC CDT in Controlled Quantum Dynamics, EU STREP project and Alexander von Humboldt Foundation
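A numerical counterpart to the star-to-chain mapping described here can be sketched with Lanczos tridiagonalization (Python below). The talk derives the chain coefficients analytically via secondary measures and orthogonal polynomials; this sketch, with an arbitrary Ohmic-like discretized bath, only illustrates how a nonlocal star coupling becomes a nearest-neighbour chain.

```python
import numpy as np

# Star-to-chain mapping: Lanczos tridiagonalisation of diag(omega), started
# from the normalised coupling vector, yields chain site energies (alpha) and
# nearest-neighbour hoppings (beta); the norm of the coupling vector is the
# system-chain coupling. The discretised bath here is an arbitrary example.
n_modes, n_chain = 400, 20
omega = np.linspace(1e-3, 1.0, n_modes)        # bath mode frequencies
g = np.sqrt(omega * (omega[1] - omega[0]))     # couplings ~ sqrt(J(w) dw), Ohmic-like J

H = np.diag(omega)
v_prev = np.zeros(n_modes)
v = g / np.linalg.norm(g)
alpha, beta = [], [np.linalg.norm(g)]          # beta[0] is the system-chain coupling
for _ in range(n_chain):
    w = H @ v - beta[-1] * v_prev              # v_prev is zero on the first pass
    a = v @ w
    w -= a * v
    b = np.linalg.norm(w)
    alpha.append(a)
    beta.append(b)
    v_prev, v = v, w / b
print(np.round(alpha, 4))                      # chain site energies
print(np.round(beta[1:], 4))                   # nearest-neighbour hoppings
```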
Demodulation of messages received with low signal to noise ratio
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Quignon, T.; Romann, B.
The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computation savings as compared to conventional realizations. Nominal operation has been verified down to an energy signal-to-noise ratio of -3 dB on a QPSK demodulator.
Conflict: Operational Realism versus Analytical Rigor in Defense Modeling and Simulation
2012-06-14
...In order for an experiment to be considered rigorous, and the results valid, the experiment should be designed using established... In addition to the interview, the pilots were administered a written survey, designed to capture their reactions regarding the level of realism present
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.
Though widely used in modelling nano- and micro-structures, Eringen’s differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen’s two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation under consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further shows the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
Transmission and reflection of terahertz plasmons in two-dimensional plasmonic devices
Sydoruk, Oleksiy; Choonee, Kaushal; Dyer, Gregory Conrad
2015-03-10
We found that plasmons in two-dimensional semiconductor devices will be reflected by discontinuities, notably, junctions between gated and non-gated electron channels. The transmitted and reflected plasmons can form spatially- and frequency-varying signals, and their understanding is important for the design of terahertz detectors, oscillators, and plasmonic crystals. Using mode decomposition, we studied terahertz plasmons incident on a junction between a gated and a nongated channel. The plasmon reflection and transmission coefficients were found numerically and analytically and studied between 0.3 and 1 THz for a range of electron densities. At higher frequencies, we could describe the plasmons by a simplified model of channels in homogeneous dielectrics, for which the analytical approximations were accurate. At low frequencies, however, the full geometry and mode spectrum had to be taken into account. Moreover, the results agreed with simulations by the finite-element method. Mode decomposition thus proved to be a powerful method for plasmonic devices, combining the rigor of complete solutions of Maxwell's equations with the convenience of analytical expressions.
Numerical Modeling of Sub-Wavelength Anti-Reflective Structures for Solar Module Applications
Han, Katherine; Chang, Chih-Hung
2014-01-01
This paper reviews the current progress in mathematical modeling of anti-reflective subwavelength structures. Methods covered include effective medium theory (EMT), finite-difference time-domain (FDTD), transfer matrix method (TMM), the Fourier modal method (FMM)/rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). Time-based solutions to Maxwell’s equations, such as FDTD, have the benefits of calculating reflectance for multiple wavelengths of light per simulation, but are computationally intensive. Space-discretized methods such as FDTD and FEM output field strength results over the whole geometry and are capable of modeling arbitrary shapes. Frequency-based solutions such as RCWA/FMM and FEM model one wavelength per simulation and are thus able to handle dispersion for regular geometries. Analytical approaches such as TMM are appropriate for very simple thin films. Initial disadvantages such as neglect of dispersion (FDTD), inaccuracy in TM polarization (RCWA), inability to model aperiodic gratings (RCWA), and inaccuracy with metallic materials (FDTD) have been overcome by most modern software. All rigorous numerical methods have accurately predicted the broadband reflection of ideal, graded-index anti-reflective subwavelength structures; ideal structures are tapered nanostructures with periods smaller than the wavelengths of light of interest and lengths that are at least a large portion of the wavelengths considered. PMID:28348287
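Of the methods listed, the transfer matrix method is simple enough to sketch directly. The Python fragment below computes the normal-incidence reflectance of a single quarter-wave antireflective layer using the standard characteristic-matrix formulation; the refractive indices and design wavelength are illustrative values, not data from the review.

```python
import numpy as np

# Minimal TMM sketch: normal-incidence reflectance of a quarter-wave
# antireflective layer on a substrate via the characteristic matrix.
n_air, n_film, n_sub = 1.0, 1.23, 1.52     # n_film ~ sqrt(n_air * n_sub) minimises R
d = 550e-9 / (4 * n_film)                  # quarter-wave thickness at 550 nm

for lam in np.linspace(400e-9, 800e-9, 5):
    delta = 2 * np.pi * n_film * d / lam                        # phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n_film],
                  [1j * n_film * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])                           # stack admittance terms
    r = (n_air * B - C) / (n_air * B + C)
    print(f"{lam * 1e9:.0f} nm: R = {abs(r) ** 2:.4f}")
```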
Kevin T. Smith; Jean Christophe Balouet; Gil Oudijk
2008-01-01
Environmental forensics seeks to determine the responsible parties for contamination from leaks or spills of petroleum or other toxic products. Dendrochemistry contributes to environmental forensics at the intersection of analytical chemistry, tree biology, and environmental responsibility. To be useful, dendrochemistry requires the rigorous application of analytical...
Ennett, S T; Tobler, N S; Ringwalt, C L; Flewelling, R L
1994-01-01
OBJECTIVES. Project DARE (Drug Abuse Resistance Education) is the most widely used school-based drug use prevention program in the United States, but the findings of rigorous evaluations of its effectiveness have not been considered collectively. METHODS. We used meta-analytic techniques to review eight methodologically rigorous DARE evaluations. Weighted effect size means for several short-term outcomes also were compared with means reported for other drug use prevention programs. RESULTS. The DARE effect size for drug use behavior ranged from .00 to .11 across the eight studies; the weighted mean for drug use across studies was .06. For all outcomes considered, the DARE effect size means were substantially smaller than those of programs emphasizing social and general competencies and using interactive teaching strategies. CONCLUSIONS. DARE's short-term effectiveness for reducing or preventing drug use behavior is small and is less than for interactive prevention programs. PMID:8092361
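The meta-analytic pooling used in such reviews can be sketched generically: the Python fragment below computes a fixed-effect (inverse-variance weighted) mean effect size and its confidence interval from hypothetical study-level values, not from the eight DARE evaluations themselves.

```python
import numpy as np

# Fixed-effect meta-analytic pooling: each study's effect size is weighted by
# the inverse of its sampling variance. All numbers below are hypothetical.
d = np.array([0.00, 0.04, 0.06, 0.11, 0.05])        # study effect sizes
var = np.array([0.010, 0.020, 0.015, 0.030, 0.020]) # their sampling variances

w = 1.0 / var
d_bar = np.sum(w * d) / np.sum(w)                   # weighted mean effect size
se = np.sqrt(1.0 / np.sum(w))                       # its standard error
print(d_bar, (d_bar - 1.96 * se, d_bar + 1.96 * se))
```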
A method for calculating strut and splitter plate noise in exit ducts: Theory and verification
NASA Technical Reports Server (NTRS)
Fink, M. R.
1978-01-01
Portions of a four-year analytical and experimental investigation relative to noise radiation from engine internal components in turbulent flow are summarized. Spectra measured for such airfoils over a range of chord, thickness ratio, flow velocity, and turbulence level were compared with predictions made by an available rigorous thin-airfoil analytical method. This analysis included the effects of flow compressibility and source noncompactness. Generally good agreement was obtained. This noise calculation method for isolated airfoils in turbulent flow was combined with a method for calculating transmission of sound through a subsonic exit duct and with an empirical far-field directivity shape. These three elements were checked separately and were individually shown to give close agreement with data. This combination provides a method for predicting engine internally generated aft-radiated noise from radial struts and stators, and annular splitter rings. Calculated sound power spectra, directivity, and acoustic pressure spectra were compared with the best available data. These data were for noise caused by a fan exit duct annular splitter ring, larger-chord stator blades, and turbine exit struts.
Tan, Ming T; Liu, Jian-ping; Lao, Lixing
2012-08-01
Recently, proper use of statistical methods in traditional Chinese medicine (TCM) randomized controlled trials (RCTs) has received increased attention. Statistical inference based on hypothesis testing is the foundation of clinical trials and evidence-based medicine. In this article, the authors describe the methodological differences between literature published in Chinese and Western journals in the design and analysis of acupuncture RCTs and the application of basic statistical principles. In China, qualitative analysis methods have been widely used in acupuncture and TCM clinical trials, while between-group quantitative analyses of clinical symptom scores are commonly used in the West. The evidence for and against these analytical differences was discussed based on data from RCTs assessing acupuncture for pain relief. The authors concluded that although both methods have their unique advantages, quantitative analysis should be used as the primary analysis while qualitative analysis can be a secondary criterion for analysis. The purpose of this paper is to inspire further discussion of such special issues in clinical research design and thus contribute to the increased scientific rigor of TCM research.
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
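The prune-and-split logic can be sketched in a few lines. The Python fragment below runs a toy one-dimensional branch-and-bound with a crude interval lower bound on an arbitrary example function; the Taylor-model (differential-algebra) underestimators described above are far sharper and work in many dimensions, so this only illustrates the rigorous pruning idea.

```python
import numpy as np

# Toy rigorous-style branch and bound in one variable: a crude interval lower
# bound over each subinterval prunes regions that provably cannot contain the
# global minimum, while midpoint evaluations improve the incumbent upper bound.
# The objective is an arbitrary example, not an optics merit function.
def f(x):
    return np.sin(3 * x) + 0.5 * x ** 2

def lower_bound(a, b):
    # sin term is at least -1; the quadratic term is bounded below on [a, b]
    lo_quad = 0.0 if a <= 0.0 <= b else 0.5 * min(a * a, b * b)
    return -1.0 + lo_quad

boxes = [(-4.0, 4.0)]
best = min(f(-4.0), f(0.0), f(4.0))          # any sampled value is a valid upper bound
while boxes:
    new_boxes = []
    for a, b in boxes:
        if lower_bound(a, b) > best:         # rigorously excluded: prune this box
            continue
        m = 0.5 * (a + b)
        best = min(best, f(m))
        if b - a > 1e-4:                     # keep splitting until boxes are tiny
            new_boxes += [(a, m), (m, b)]
    boxes = new_boxes
print(best)                                  # estimate of the global minimum
```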
NASA Astrophysics Data System (ADS)
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
Dynamic Characteristics of Micro-Beams Considering the Effect of Flexible Supports
Zhong, Zuo-Yang; Zhang, Wen-Ming; Meng, Guang
2013-01-01
Normally, the boundaries of MEMS beams with flexible supports are assumed to allow small deflections and moments. Such non-ideal boundary conditions have a significant effect on the qualitative dynamical behavior. In this paper, by employing the principle of energy equivalence, rigorous theoretical solutions for the tangential and rotational equivalent stiffness are derived based on the Boussinesq and Cerruti displacement equations. The non-dimensional partial differential equation of motion, together with the coupled boundary conditions, is solved analytically using the method of multiple time scales. The closed-form solution provides direct insight into the relationship between the boundary conditions and the vibration characteristics of the dynamic system, in which resonance frequencies increase with the nonlinear mechanical spring effect but decrease with the effect of flexible supports. The obtained frequencies and mode shapes are compared with the case of ideal boundary conditions, and the differences between them are contrasted on frequency response curves. The influence of the support material properties on the equivalent stiffness and the resonance frequency shift is also discussed. It is demonstrated that the flexible-support boundary conditions have a significant effect on the rigorous quantitative dynamical analysis of MEMS beams. Moreover, the proposed analytical solutions are in good agreement with those obtained from finite element analyses.
Quantifying construction and demolition waste: An analytical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin
2014-09-15
Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In the literature, various methods have been employed to quantify C and D waste generation at both regional and project levels. However, an integrated review that systematically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria: waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including the site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested.
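As a minimal illustration of one of the categories named above, the waste generation rate method typically multiplies an activity measure (e.g. gross floor area) by an empirical per-unit waste rate. The rates below are invented placeholders, not values from the review.

```python
# Minimal sketch of the waste generation rate method (per-unit-area rates).
# The rates below are illustrative placeholders, not values from the review.

RATES_KG_PER_M2 = {"new_construction": 30.0, "demolition": 1000.0}  # assumed rates

def estimate_waste(activity: str, gross_floor_area_m2: float) -> float:
    """Return an estimated waste mass in tonnes for a given activity and area."""
    return RATES_KG_PER_M2[activity] * gross_floor_area_m2 / 1000.0

print(estimate_waste("demolition", 5000.0))  # -> 5000 tonnes under the assumed rate
```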
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leblond, Herve; Kremer, David; Mihalache, Dumitru
2010-03-15
By using a reductive perturbation method, we derive from the Maxwell-Bloch equations a cubic generalized Kadomtsev-Petviashvili equation for ultrashort spatiotemporal optical pulse propagation in cubic (Kerr-like) media without the use of the slowly varying envelope approximation. We calculate the collapse threshold for the propagation of few-cycle spatiotemporal pulses described by the generic cubic generalized Kadomtsev-Petviashvili equation using a direct numerical method and compare it to analytic results based on a rigorous virial theorem. In addition, the typical evolution of the spectrum (integrated over the transverse spatial coordinate) is given, and a strongly asymmetric spectral broadening of ultrashort spatiotemporal pulses during collapse is evidenced.
ERIC Educational Resources Information Center
OECD Publishing, 2017
2017-01-01
What is important for citizens to know and be able to do? The OECD Programme for International Student Assessment (PISA) seeks to answer that question through the most comprehensive and rigorous international assessment of student knowledge and skills. The PISA 2015 Assessment and Analytical Framework presents the conceptual foundations of the…
Analytic theory of orbit contraction and ballistic entry into planetary atmospheres
NASA Technical Reports Server (NTRS)
Longuski, J. M.; Vinh, N. X.
1980-01-01
A space object traveling through an atmosphere is governed by two forces: aerodynamic and gravitational. On this premise, equations of motion are derived to provide a set of universal entry equations applicable to all regimes of atmospheric flight, from orbital motion under the dissipative force of drag, through the dynamic phase of reentry, and finally to the point of contact with the planetary surface. Rigorous mathematical techniques such as averaging, Poincare's method of small parameters, and Lagrange's expansion are applied to obtain a highly accurate, purely analytic theory for orbit contraction and ballistic entry into planetary atmospheres. The theory has a wide range of applications to modern problems, including orbit decay of artificial satellites, atmospheric capture of planetary probes, atmospheric grazing, and ballistic reentry of manned and unmanned space vehicles.
Seven perspectives on GPCR H/D-exchange proteomics methods
Zhang, Xi
2017-01-01
Recent research shows surging interest to visualize human G protein-coupled receptor (GPCR) dynamic structures using the bottom-up H/D-exchange (HDX) proteomics technology. This opinion article clarifies critical technical nuances and logical thinking behind the GPCR HDX proteomics method, to help scientists overcome cross-discipline pitfalls, and understand and reproduce the protocol at high quality. The 2010 89% HDX structural coverage of GPCR was achieved with both structural and analytical rigor. This article emphasizes systematically considering membrane protein structure stability and compatibility with chromatography and mass spectrometry (MS) throughout the pipeline, including the effects of metal ions, zero-detergent shock, and freeze-thaws on HDX result rigor. This article proposes to view bottom-up HDX as two steps to guide choices of detergent buffers and chromatography settings: (I) protein HDX labeling in native buffers, and (II) peptide-centric analysis of HDX labels, which applies (a) bottom-up MS/MS to construct peptide matrix and (b) HDX MS to locate and quantify H/D labels. The detergent-low-TCEP digestion method demystified the challenge of HDX-grade GPCR digestion. GPCR HDX proteomics is a structural approach, thus its choice of experimental conditions should let structure lead and digestion follow, not the opposite. PMID:28529698
Agut, C; Caron, A; Giordano, C; Hoffman, D; Ségalini, A
2011-09-10
In 2001, a multidisciplinary team of analytical scientists and statisticians at Sanofi-Aventis published a methodology which has since governed the transfer of release monographs from R&D sites to manufacturing sites. This article provides an overview of the recent adaptations brought to this original methodology, taking advantage of our experience and the new regulatory framework, in particular the risk management perspective introduced by ICH Q9. Although some alternative strategies have been introduced in our practices, the comparative testing strategy, based on equivalence testing as the statistical approach, remains the standard for assays bearing on very critical quality attributes. This is conducted with the aim of controlling the most important consumer's risks involved at two levels in analytical decisions within transfer studies: the risk, for the receiving laboratory, of making poor release decisions with the analytical method, and the risk, for the sending laboratory, of accrediting a receiving laboratory despite insufficient performance with the method. Among the enhancements to the comparative studies, the manuscript presents the process settled within our company for a better integration of the transfer study into the method life-cycle, as well as proposals of generic acceptance criteria and designs for assay and related-substances methods. While maintaining the rigor and selectivity of the original approach, these improvements tend towards increased efficiency in transfer operations. Copyright © 2011 Elsevier B.V. All rights reserved.
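The statistical core of such comparative transfer studies is an equivalence test between sending and receiving laboratories, commonly implemented as two one-sided tests (TOST) on the difference of means. The sketch below is a generic illustration with made-up data and acceptance limits; it is not the acceptance criteria from the paper.

```python
# Generic two one-sided tests (TOST) sketch for a method-transfer comparison.
# Acceptance limits and data are illustrative, not the criteria from the paper.

from scipy import stats
import numpy as np

def tost_equivalence(sending, receiving, lower=-2.0, upper=2.0, alpha=0.05):
    """Declare equivalence if the (1-2*alpha) CI of the mean difference lies in [lower, upper]."""
    diff = np.mean(receiving) - np.mean(sending)
    se = np.sqrt(np.var(sending, ddof=1) / len(sending) +
                 np.var(receiving, ddof=1) / len(receiving))
    dof = len(sending) + len(receiving) - 2
    t_crit = stats.t.ppf(1.0 - alpha, dof)
    ci = (diff - t_crit * se, diff + t_crit * se)
    return ci, (lower < ci[0]) and (ci[1] < upper)

rng = np.random.default_rng(1)
send = rng.normal(100.0, 1.5, 12)   # results from the sending laboratory
recv = rng.normal(100.5, 1.5, 12)   # results from the receiving laboratory
print(tost_equivalence(send, recv))
```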
2015-11-01
[Report fragment on analytical methods for composite media: analytical calculations of the spectral measure underlying the effective diffusivity tensor D* have been obtained only for special cases. The surviving mathematics establishes an identity of the form ⟨[(−Δ)⁻¹u_j]χ_k⟩ = −⟨∇Δ⁻¹u_j · ∇χ_k⟩ = ⟨(−Δ)⁻¹u_j, χ_k⟩₁ (equation C.5 of the source), which is rigorously justified in its Theorem C.1 and substituted into the formula for the spectral measure.]
The transition of oncologic imaging from its "industrial era" to its "information era" demands analytical methods that 1) extract information from this data that is clinically and biologically relevant; 2) integrate imaging, clinical, and genomic data via rigorous statistical and computational methodologies in order to derive models valuable for understanding cancer mechanisms, diagnosis, prognostic assessment, response evaluation, and personalized treatment management; and 3) are available to the biomedical community for easy use and application, with the aim of understanding, diagnosing, and …
ERIC Educational Resources Information Center
Arbaugh, J. B.; Hwang, Alvin
2013-01-01
Seeking to assess the analytical rigor of empirical research in management education, this article reviews the use of multivariate statistical techniques in 85 studies of online and blended management education over the past decade and compares them with prescriptions offered by both the organization studies and educational research communities.…
ERIC Educational Resources Information Center
Martin, Florence; Ndoye, Abdou; Wilkins, Patricia
2016-01-01
Quality Matters is recognized as a rigorous set of standards that guide the designer or instructor to design quality online courses. We explore how Quality Matters standards guide the identification and analysis of learning analytics data to monitor and improve online learning. Descriptive data were collected for frequency of use, time spent, and…
Russell, Shane R; Claridge, Shelley A
2016-04-01
Because noncovalent interface functionalization is frequently required in graphene-based devices, biomolecular self-assembly has begun to emerge as a route for controlling substrate electronic structure or binding specificity for soluble analytes. The remarkable diversity of structures that arise in biological self-assembly hints at the possibility of equally diverse and well-controlled surface chemistry at graphene interfaces. However, predicting and analyzing adsorbed monolayer structures at such interfaces raises substantial experimental and theoretical challenges. In contrast with the relatively well-developed monolayer chemistry and characterization methods applied at coinage metal surfaces, monolayers on graphene are both less robust and more structurally complex, levying more stringent requirements on characterization techniques. Theory presents opportunities to understand early binding events that lay the groundwork for full monolayer structure. However, predicting interactions between complex biomolecules, solvent, and substrate is necessitating a suite of new force fields and algorithms to assess likely binding configurations, solvent effects, and modulations to substrate electronic properties. This article briefly discusses emerging analytical and theoretical methods used to develop a rigorous chemical understanding of the self-assembly of peptide-graphene interfaces and prospects for future advances in the field.
Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs
NASA Astrophysics Data System (ADS)
Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu
2017-10-01
In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), the fluid flow around fracture tips behaves non-linearly owing to the high-density network of artificial hydraulic fractures. Moreover, the production behaviors of the different artificial hydraulic fractures also differ. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining the source function with the boundary element method. The model is first validated against both an analytical model and a simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate decline characteristics of MFHWs; parameters related to the MFHWs, such as fracture conductivity and length, also affect the rate characteristics. One novelty of this model is the consideration of the elliptical flow around artificial hydraulic fracture tips, so the model can predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is the ability to model the different production behavior at different fracture stages. Compared to numerical and analytic methods, this model not only reduces extensive computation but also shows high accuracy.
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Zhu, X. W.; Dai, H. H.
2016-08-01
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation under consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, in particular for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it appears that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
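For reference, the two-phase local/nonlocal constitutive relation referred to above is commonly written in the form below. The notation here (E Young's modulus, κ the nonlocal length parameter, ξ₁ + ξ₂ = 1, exponential kernel K) is an assumption for illustration; the symbols used in the paper may differ.

```latex
% Two-phase local/nonlocal constitutive law (standard form; notation assumed)
\sigma(x) = \xi_1 E\,\varepsilon(x)
          + \xi_2 E \int_0^L K(|x-x'|,\kappa)\,\varepsilon(x')\,\mathrm{d}x',
\qquad
K(|x|,\kappa) = \frac{1}{2\kappa}\, e^{-|x|/\kappa},
\qquad \xi_1 + \xi_2 = 1 .
```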
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M
2016-05-01
Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than one adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles, and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.
Spin zero Hawking radiation for non-zero-angular momentum mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngampitipan, Tritos; Bonserm, Petarpa; Visser, Matt
2015-05-15
Black hole greybody factors carry some quantum black hole information. Studying greybody factors may lead to understanding the quantum nature of black holes. However, solving for exact greybody factors in many black hole systems is impossible. One way to deal with this problem is to place rigorous analytic bounds on the greybody factors. In this paper, we calculate rigorous bounds on the greybody factors for spin zero Hawking radiation in the non-zero-angular-momentum mode from Kerr-Newman black holes.
On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness
NASA Astrophysics Data System (ADS)
Vashkov'yak, M. A.
2018-01-01
The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.
Scott, David J.; Winzor, Donald J.
2009-01-01
Abstract We have examined in detail analytical solutions of expressions for sedimentation equilibrium in the analytical ultracentrifuge to describe self-association under nonideal conditions. We find that those containing the radial dependence of total solute concentration that incorporate the Adams-Fujita assumption for composition-dependence of activity coefficients reveal potential shortcomings for characterizing such systems. Similar deficiencies are shown in the use of the NONLIN software incorporating the same assumption about the interrelationship between activity coefficients for monomer and polymer species. These difficulties can be overcome by iterative analyses incorporating expressions for the composition-dependence of activity coefficients predicted by excluded volume considerations. A recommendation is therefore made for the replacement of current software packages by programs that incorporate rigorous statistical-mechanical allowance for thermodynamic nonideality in sedimentation equilibrium distributions reflecting solute self-association. PMID:19651047
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jégou, C.; Maroutian, T.; Pillard, V.
We describe a vector network analyzer-based method to study the electromagnetic properties of nanoscale dielectrics at microwave frequencies (1 MHz–40 GHz). The complex permittivity spectrum of a given dielectric can be determined by placing it in a capacitor accessed on both of its electrodes by coplanar waveguides. However, inherent propagation delays along the signal paths, together with the frequency-dependent effective surface of the capacitor at microwave frequencies, can lead to significant distortion in the measured permittivity, which in turn can give rise to artificial frequency variations of the complex permittivity. We detail a fully analytical rigorous correction sequence with no recourse to extrinsic loss mechanisms or to arbitrary parasitic signal paths. We illustrate our method on three emblematic dielectrics: ferroelectric morphotropic lead zirconate titanate, its paraelectric pyrochlore counterpart, and strontium titanate. Permittivity spectra taken at various points along the hysteresis loop help shed light on the nature of the different dielectric energy loss mechanisms. Thanks to the analytical character of our method, we can discuss routes to extend it to higher frequencies and we can identify unambiguously the sources of potential artifacts.
Wielogorska, Ewa; Chevallier, Olivier; Black, Connor; Galvin-King, Pamela; Delêtre, Marc; Kelleher, Colin T; Haughey, Simon A; Elliott, Christopher T
2018-01-15
Due to the increasing number of food fraud incidents, there is an inherent need for the development and implementation of analytical platforms enabling the detection and quantitation of adulteration. In this study, a set of unique biomarkers of commonly found oregano adulterants became the targets in the development of an LC-MS/MS method which underwent a rigorous in-house validation. The method presented very high selectivity and specificity, excellent linearity (R² > 0.988), low decision limits and detection capabilities (<2%), acceptable accuracy (intra-assay 92-113%, inter-assay 69-138%) and precision (CV < 20%). The method was compared with an established FTIR screening assay and revealed a good correlation of qualitative and quantitative results (R² > 0.81). An assessment of 54 suspected adulterated oregano samples revealed that almost 90% of them contained at least one bulking agent, with a median level of adulteration of 50%. Such innovative methodologies need to be established as routine testing procedures to detect and ultimately deter food fraud. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analyzing qualitative data with computer software.
Weitzman, E A
1999-01-01
OBJECTIVE: To provide health services researchers with an overview of the qualitative data analysis process and the role of software within it; to provide a principled approach to choosing among software packages to support qualitative data analysis; to alert researchers to the potential benefits and limitations of such software; and to provide an overview of the developments to be expected in the field in the near future. DATA SOURCES, STUDY DESIGN, METHODS: This article does not include reports of empirical research. CONCLUSIONS: Software for qualitative data analysis can benefit the researcher in terms of speed, consistency, rigor, and access to analytic methods not available by hand. Software, however, is not a replacement for methodological training. PMID:10591282
Steady-state and dynamic models for particle engulfment during solidification
NASA Astrophysics Data System (ADS)
Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.
2016-06-01
Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible by approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition of an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior on engulfment events for many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.
A Rigorous Investigation on the Ground State of the Penson-Kolb Model
NASA Astrophysics Data System (ADS)
Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi
2003-05-01
By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has been previously studied by several groups. Some physicists argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model. However, others did not agree. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we also show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models. The project was partially supported by the Special Funds for Major State Basic Research Projects (G20000365) and the National Natural Science Foundation of China under Grant No. 10174002.
NASA Astrophysics Data System (ADS)
Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun
2017-11-01
Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and material research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modelings, on the other hand, are mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation which is based on realistic experimental parameters and signal extraction procedures. By directly comparing to the experiments as well as other simulation efforts, our methods offer a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.
Three-port beam splitter of a binary fused-silica grating.
Feng, Jijun; Zhou, Changhe; Wang, Bo; Zheng, Jiangjun; Jia, Wei; Cao, Hongchao; Lv, Peng
2008-12-10
A deep-etched polarization-independent binary fused-silica phase grating as a three-port beam splitter is designed and manufactured. The grating profile is optimized by use of the rigorous coupled-wave analysis around the 785 nm wavelength. The physical explanation of the grating is illustrated by the modal method. Simple analytical expressions of the diffraction efficiencies and modal guidelines for the three-port beam splitter grating design are given. Holographic recording technology and inductively coupled plasma etching are used to manufacture the fused-silica grating. Experimental results are in good agreement with the theoretical values.
Gentles, Stephen J; Charles, Cathy; Nicholas, David B; Ploeg, Jenny; McKibbon, K Ann
2016-10-11
Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research. The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type. We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to the study of the evolution of high-apogee twelve-hour orbits of artificial Earth's satellites. We describe parameters of the motion model used for the artificial Earth's satellite such that the principal gravitational perturbations of the Moon and Sun, nonsphericity of the Earth, and perturbations from the light pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of an artificial satellite, we use both numeric and analytic methods. To select initial parameters of the twelve-hour orbit, we assume that the path of the satellite along the surface of the Earth is stable. Results obtained by the analytic method and by the numerical integration of the evolving system are compared. For intervals of several years, we obtain estimates of oscillation periods and amplitudes for orbital elements. To verify the results and estimate the precision of the method, we use the numerical integration of rigorous (not averaged) equations of motion of the artificial satellite: they take into account forces acting on the satellite substantially more completely and precisely. The described method can be applied not only to the investigation of orbit evolutions of artificial satellites of the Earth; it can be applied to the investigation of the orbit evolution for other planets of the Solar system provided that the corresponding research problem will arise in the future and the considered special class of resonance orbits of satellites will be used for that purpose.
A consortium-driven framework to guide the implementation of ICH M7 Option 4 control strategies.
Barber, Chris; Antonucci, Vincent; Baumann, Jens-Christoph; Brown, Roland; Covey-Crump, Elizabeth; Elder, David; Elliott, Eric; Fennell, Jared W; Gallou, Fabrice; Ide, Nathan D; Jordine, Guido; Kallemeyn, Jeffrey M; Lauwers, Dirk; Looker, Adam R; Lovelle, Lucie E; McLaughlin, Mark; Molzahn, Robert; Ott, Martin; Schils, Didier; Oestrich, Rolf Schulte; Stevenson, Neil; Talavera, Pere; Teasdale, Andrew; Urquhart, Michael W; Varie, David L; Welch, Dennie
2017-11-01
The ICH M7 Option 4 control of (potentially) mutagenic impurities is based on the use of scientific principles in lieu of routine analytical testing. This approach can reduce the burden of analytical testing without compromising patient safety, provided a scientifically rigorous approach is taken which is backed up by sufficient theoretical and/or analytical data. This paper introduces a consortium-led initiative and offers a proposal on the supporting evidence that could be presented in regulatory submissions. Copyright © 2017 Elsevier Inc. All rights reserved.
Phyllis C. Adams; Glenn A. Christensen
2012-01-01
A rigorous quality assurance (QA) process assures that the data and information provided by the Forest Inventory and Analysis (FIA) program meet the highest possible standards of precision, completeness, representativeness, comparability, and accuracy. FIA relies on its analysts to check the final data quality prior to release of a State's data to the national FIA...
Facilities Stewardship: Measuring the Return on Physical Assets.
ERIC Educational Resources Information Center
Kadamus, David A.
2001-01-01
Asserts that colleges and universities should apply the same analytical rigor to physical assets as they do financial assets. Presents a management tool, the Return on Physical Assets model, to help guide physical asset allocation decisions. (EV)
Rigorous analysis of thick microstrip antennas and wire antennas embedded in a substrate
NASA Astrophysics Data System (ADS)
Smolders, A. B.
1992-07-01
An efficient and rigorous method for the analysis of electrically thick rectangular microstrip antennas and wire antennas with a dielectric cover is presented. The method of moments is used in combination with the exact spectral-domain Green's function in order to find the unknown currents on the antenna. The microstrip antenna is fed by a coaxial cable, and a proper model of the feeding coaxial structure is used. In addition, a special attachment mode is applied to ensure continuity of current at the patch-coax transition. The efficiency of the method of moments is improved by using the so-called source term extraction technique, in which a large part of the infinite integrals involved in the method of moments formulation is calculated analytically. Computation time can be saved by selecting a set of basis functions that describes the current distribution on the patch and probe accurately using only a few terms. Thick microstrip antennas have broadband characteristics; however, a proper match to 50 Ohms is often difficult. This matching problem can be avoided by using a slightly different excitation structure in which the patch is electromagnetically coupled to the feeding probe. A bandwidth of more than 40% can easily be obtained for this type of microstrip antenna. The price to be paid is a degradation of the radiation characteristics.
NASA Astrophysics Data System (ADS)
Frizyuk, Kristina; Hasan, Mehedi; Krasnok, Alex; Alú, Andrea; Petrov, Mihail
2018-02-01
Resonantly enhanced Raman scattering in dielectric nanostructures has been recently proven to be an efficient tool for nanothermometry and for the experimental determination of their mode composition. In this paper we develop a rigorous analytical theory based on the Green's function approach to calculate the Raman emission from crystalline high-index dielectric nanoparticles. As an example, we consider silicon nanoparticles which have a strong Raman response due to active optical phonon modes. We relate enhancement of Raman signal emission to the Purcell effect due to the excitation of Mie modes inside the nanoparticles. We also employ our numerical approach to calculate inelastic Raman emission in more sophisticated geometries, which do not allow a straightforward analytical form of the Green's function. The Raman response from a silicon nanodisk has been analyzed with the proposed method, and the contribution of various Mie modes has been revealed.
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.
Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.
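The random-matching control mentioned above amounts to building a null distribution of the coordination score from place cell replay events paired with grid cell epochs of equal duration taken from a different animal or session, and asking whether the observed score exceeds it. The sketch below is schematic: score() is a placeholder for whatever coordination metric a study actually uses, and is not the metric from this report.

```python
# Schematic shuffle control: compare an observed replay-coordination score with a
# null distribution built from randomly matched epochs of equal duration.
# score() is a placeholder, not the metric used in the report.

import random

def score(place_event, grid_epoch):
    # Placeholder metric; a real analysis would correlate decoded trajectories with
    # grid cell firing and enforce minimum spike-count requirements.
    return random.random()

def shuffle_p_value(place_event, grid_epoch, grid_epoch_pool, n_shuffles=1000):
    observed = score(place_event, grid_epoch)
    null = [score(place_event, random.choice(grid_epoch_pool))
            for _ in range(n_shuffles)]
    return sum(s >= observed for s in null) / n_shuffles

random.seed(0)
pool = ["epoch_%d" % i for i in range(50)]     # epochs from a different rat/session
print(shuffle_p_value("replay_1", "epoch_0", pool))
```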
Bifurcation and chaos analysis of a nonlinear electromechanical coupling relative rotation system
NASA Astrophysics Data System (ADS)
Liu, Shuang; Zhao, Shuang-Shuang; Sun, Bao-Ping; Zhang, Wen-Ming
2014-09-01
Hopf bifurcation and chaos of a nonlinear electromechanical coupling relative rotation system are studied in this paper. Considering the energy in air-gap field of AC motor, the dynamical equation of nonlinear electromechanical coupling relative rotation system is deduced by using the dissipation Lagrange equation. Choosing the electromagnetic stiffness as a bifurcation parameter, the necessary and sufficient conditions of Hopf bifurcation are given, and the bifurcation characteristics are studied. The mechanism and conditions of system parameters for chaotic motions are investigated rigorously based on the Silnikov method, and the homoclinic orbit is found by using the undetermined coefficient method. Therefore, Smale horseshoe chaos occurs when electromagnetic stiffness changes. Numerical simulations are also given, which confirm the analytical results.
Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques
NASA Astrophysics Data System (ADS)
Somov, Yevgeny
2014-12-01
Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: the directed respinup of a spacecraft by weak, restricted internal control forces. The paper presents some results on this problem, which is highly relevant to the energy supply of information mini-satellites (for communication, geodesy, and radio- and opto-electronic Earth observation, among others) equipped with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the stability of the required spacecraft rotation by the Lyapunov function method. These results were verified by computer simulation of the strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.
Analytical transition-matrix treatment of electric multipole polarizabilities of hydrogen-like atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharchenko, V.F., E-mail: vkharchenko@bitp.kiev.ua
2015-04-15
The direct transition-matrix approach to the description of the electric polarization of a quantum bound system of particles is used to determine the electric multipole polarizabilities of hydrogen-like atoms. It is shown that, in the case of a bound system formed by the Coulomb interaction, the corresponding inhomogeneous integral equation determining an off-shell scattering function, which consistently describes virtual multiple scattering, can be solved exactly and analytically for all electric multipole polarizabilities. Our method allows us to reproduce the known Dalgarno–Lewis formula for the electric multipole polarizabilities of the hydrogen atom in the ground state and can also be applied to determine the polarizability of the atom in excited bound states. - Highlights: • A new description of the electric polarization of hydrogen-like atoms. • Expression for multipole polarizabilities in terms of off-shell scattering functions. • Derivation of the integral equation determining the off-shell scattering function. • Rigorous analytic solution of the integral equations for both ground and excited states. • Study of the contributions of virtual multiple scattering to electric polarizabilities.
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that, despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
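A minimal sketch of the EPE-minimization idea follows, using L2-regularized (ridge) regression with cross-validation to choose model complexity. scikit-learn and the simulated item pool are assumptions for illustration; they are not named in the abstract.

```python
# Minimal sketch: choose model complexity by cross-validated prediction error,
# not by within-sample fit. scikit-learn is assumed purely for illustration.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                 # stand-in for a large item pool
beta = np.zeros(50); beta[:5] = 1.0            # only a few items truly predictive
y = X @ beta + rng.normal(size=200)

best_alpha, best_score = None, -np.inf
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:    # candidate penalty strengths
    score = cross_val_score(Ridge(alpha=alpha), X, y, cv=5).mean()
    if score > best_score:
        best_alpha, best_score = alpha, score

print(best_alpha, round(best_score, 3))        # penalty chosen by out-of-sample error
```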
Exploring Student Perceptions of Rigor Online: Toward a Definition of Rigorous Learning
ERIC Educational Resources Information Center
Duncan, Heather E.; Range, Bret; Hvidston, David
2013-01-01
Technological advances in the last decade have impacted delivery methods of university courses. More and more courses are offered in a variety of formats. While academic rigor is a term often used, its definition is less clear. This mixed-methods study explored graduate student conceptions of rigor in the online learning environment embedded…
Silvestre, Dolores; Fraga, Miriam; Gormaz, María; Torres, Ester; Vento, Máximo
2014-07-01
The variability of human milk (HM) composition renders analysis of its components essential for the optimal nutrition of preterm infants fed either donor milk or their own mother's milk. To fulfil this requirement, various analytical instruments have been subjected to scientific and clinical evaluation. The objective of this study was to evaluate the suitability of a rapid method for the analysis of macronutrients in HM compared with the analytical methods applied by the cow's milk industry. Mature milk from 39 donors was analysed using an infrared human milk analyser (HMA) and compared with biochemical reference laboratory methods. The statistical analysis was based on the use of paired data tests. The use of an infrared HMA for the analysis of lipids, proteins and lactose in HM proved satisfactory as regards rapidity, simplicity and the required sample volume. The instrument afforded good linearity and precision for all three nutrients. However, accuracy was not acceptable when compared with the reference methods, with overestimation of the lipid content and underestimation of the protein and lactose contents. The use of mid-infrared HMA might become the standard for rapid analysis of HM once standardisation and rigorous, systematic calibration are provided. © 2012 John Wiley & Sons Ltd.
Analysis of Well-Clear Boundary Models for the Integration of UAS in the NAS
NASA Technical Reports Server (NTRS)
Upchurch, Jason M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Chamberlain, James P.; Consiglio, Maria C.
2014-01-01
The FAA-sponsored Sense and Avoid Workshop for Unmanned Aircraft Systems (UAS) defines the concept of sense and avoid for remote pilots as "the capability of a UAS to remain well clear from and avoid collisions with other airborne traffic." Hence, a rigorous definition of well clear is fundamental to any separation assurance concept for the integration of UAS into civil airspace. This paper presents a family of well-clear boundary models based on the TCAS II Resolution Advisory logic. Analytical techniques are used to study the properties and relationships satisfied by the models. Some of these properties are numerically quantified using statistical methods.
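As a rough illustration of what a distance- and time-based well-clear predicate looks like, the sketch below checks a horizontal separation threshold together with a time-to-threshold criterion. The thresholds and the exact form of the predicate are placeholders, not the TCAS II or paper definitions.

```python
# Rough sketch of a distance/time well-clear predicate for two aircraft.
# Thresholds and the predicate form are placeholders, not the TCAS II or paper values.

import math

def horizontal_well_clear(rel_pos, rel_vel, dthr=2000.0, tthr=35.0):
    """rel_pos, rel_vel: 2-D relative position (m) and velocity (m/s)."""
    r = math.hypot(*rel_pos)
    r_dot = (rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / max(r, 1e-9)
    if r <= dthr:
        return False                      # already inside the distance threshold
    if r_dot >= 0:
        return True                       # diverging: range is increasing
    tau = -(r - dthr) / r_dot             # time to reach the distance threshold
    return tau > tthr

print(horizontal_well_clear((5000.0, 0.0), (-150.0, 0.0)))  # closing fast -> False
```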
Hyperfine state entanglement of spinor BEC and scattering atom
NASA Astrophysics Data System (ADS)
Li, Zhibing; Bao, Chengguang; Zheng, Wei
2018-05-01
Condensate of spin-1 atoms frozen in a unique spatial mode may possess large internal degrees of freedom. The scattering amplitudes of polarized cold atoms scattered by the condensate are obtained with the method of fractional parentage coefficients that treats the spin degrees of freedom rigorously. Channels with scattering cross sections enhanced by the square of the atom number of the condensate are found. Entanglement between the condensate and the propagating atom can be established by scattering. Entanglement entropy is analytically obtained for arbitrary initial states. Our results also give a hint for the establishment of quantum thermal ensembles in the hyperfine space of spin states.
Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method
NASA Astrophysics Data System (ADS)
De Waal, Sybrand A.
1996-07-01
A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
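The conserved-component idea can be illustrated by a small two-component mixing calculation: if a component occurs only in phase B, its concentration in the mixture fixes the mass fraction of B, from which the composition of phase A can be back-calculated. All numbers and names below are invented for illustration and are not data from the paper.

```python
# Toy conserved-component unmixing: component "c" occurs only in phase B, so its
# concentration in the mixture fixes the mass fraction of B. Numbers are invented.

def unmix(mixture, phase_b, conserved="c"):
    f_b = mixture[conserved] / phase_b[conserved]      # mass fraction of phase B
    phase_a = {}
    for comp, x_mix in mixture.items():
        if comp == conserved:
            continue
        phase_a[comp] = (x_mix - f_b * phase_b[comp]) / (1.0 - f_b)
    return f_b, phase_a

mixture = {"SiO2": 55.0, "MgO": 8.0, "c": 2.0}
phase_b = {"SiO2": 40.0, "MgO": 20.0, "c": 10.0}
print(unmix(mixture, phase_b))   # recovers the composition of phase A
```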
Computation of Quasiperiodic Normally Hyperbolic Invariant Tori: Rigorous Results
NASA Astrophysics Data System (ADS)
Canadell, Marta; Haro, Àlex
2017-12-01
The development of efficient methods for detecting quasiperiodic oscillations and computing the corresponding invariant tori is a subject of great importance in dynamical systems and their applications in science and engineering. In this paper, we prove the convergence of a new Newton-like method for computing quasiperiodic normally hyperbolic invariant tori carrying quasiperiodic motion in smooth families of real-analytic dynamical systems. The main result is stated as an a posteriori KAM-like theorem that allows controlling the inner dynamics on the torus with appropriate detuning parameters, in order to obtain a prescribed quasiperiodic motion. The Newton-like method leads to several fast and efficient computational algorithms, which are discussed and tested in a companion paper (Canadell and Haro in J Nonlinear Sci, 2017. doi: 10.1007/s00332-017-9388-z), in which new mechanisms of breakdown are presented.
Zhang, M.; Takahashi, M.; Morin, R.H.; Esaki, T.
1998-01-01
A theoretical analysis is presented that compares the response characteristics of the constant head and the constant flowrate (flow pump) laboratory techniques for quantifying the hydraulic properties of geologic materials having permeabilities less than 10-10 m/s. Rigorous analytical solutions that describe the transient distributions of hydraulic gradient within a specimen are developed, and equations are derived for each method. Expressions simulating the inflow and outflow rates across the specimen boundaries during a constant-head permeability test are also presented. These solutions illustrate the advantages and disadvantages of each method, including insights into measurement accuracy and the validity of using Darcy's law under certain conditions. The resulting observations offer practical considerations in the selection of an appropriate laboratory test method for the reliable measurement of permeability in low-permeability geologic materials.
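For orientation, the one-dimensional transient flow through the specimen in both test types is governed by a diffusion equation in hydraulic head, with Darcy's law giving the flux. The notation below (K hydraulic conductivity, S_s specific storage, L specimen length) is assumed for illustration and may differ from the paper's.

```latex
% One-dimensional transient flow through the specimen (assumed notation)
\frac{\partial h}{\partial t} = \frac{K}{S_s}\,\frac{\partial^2 h}{\partial z^2},
\qquad 0 \le z \le L,
\qquad q = -K\,\frac{\partial h}{\partial z}.
```

In this notation the constant-head test prescribes h at the specimen boundaries, while the flow pump test prescribes the flux q at one boundary.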
Transportation and the economy national and state perspectives
DOT National Transportation Integrated Search
1998-05-01
In the past months, many years of research and data collection have begun paying off in a rich series of analytical studies paving the way for a strong, rigorous and quantitative explanation of transportation's role in the economy and the power of tr...
NASA Astrophysics Data System (ADS)
Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe
2017-08-01
A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry; the matrix-fill stage of the Method of Moments is then performed efficiently because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
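The point-classification step mentioned above is the classic ray casting parity test: a point is interior if a ray from it crosses the closed boundary an odd number of times. A 2-D polygon version is sketched below as a stand-in for the surface-based 3-D test used in the paper.

```python
# Classic ray-casting parity test (2-D polygon stand-in for the 3-D surface test).
# A point is inside if a horizontal ray from it crosses the boundary an odd number of times.

def inside(point, polygon):
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                          # edge straddles the ray height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                               # crossing lies to the right
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(inside((1.0, 1.0), square), inside((3.0, 1.0), square))  # True False
```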
A Renormalisation Group Method. V. A Single Renormalisation Group Step
NASA Astrophysics Data System (ADS)
Brydges, David C.; Slade, Gordon
2015-05-01
This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|⁴ model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.
Studies of ectomycorrhizal community structure have used a variety of analytical regimens including sole or partial reliance on gross morphological characterization of colonized root tips. Depending on the rigor of the classification protocol, this technique can incorrectly assig...
Anticipatory Understanding of Adversary Intent: A Signature-Based Knowledge System
2009-06-01
concept of logical positivism has been applied more recently to all human knowledge and reflected in current data fusion research, information mining...this work has been successfully translated into useful analytical tools that can provide a rigorous and quantitative basis for predictive analysis
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.
Numerical proof of stability of roll waves in the small-amplitude limit for inclined thin film flow
NASA Astrophysics Data System (ADS)
Barker, Blake
2014-10-01
We present a rigorous numerical proof based on interval arithmetic computations categorizing the linearized and nonlinear stability of periodic viscous roll waves of the KdV-KS equation modeling weakly unstable flow of a thin fluid film on an incline in the small-amplitude KdV limit. The argument proceeds by verification of a stability condition derived by Bar-Nepomnyashchy and Johnson-Noble-Rodrigues-Zumbrun involving inner products of various elliptic functions arising through the KdV equation. One key point in the analysis is a bootstrap argument balancing the extremely poor sup norm bounds for these functions against the extremely good convergence properties for analytic interpolation in order to obtain a feasible computation time. Another is the way of handling analytic interpolation in several variables by a two-step process carving up the parameter space into manageable pieces for rigorous evaluation. These and other general aspects of the analysis should serve as blueprints for more general analyses of spectral stability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhardwaj, Shubhendu; Sensale-Rodriguez, Berardi; Xing, Huili Grace
A rigorous theoretical and computational model is developed for plasma-wave propagation in high electron mobility transistor structures with electron injection from a resonant tunneling diode at the gate. We discuss the conditions under which low-loss and sustainable plasmon modes can be supported in such structures. The developed analytical model is used to derive the dispersion relation for these plasmon modes. A non-linear full-wave-hydrodynamic numerical solver is also developed using a finite difference time domain algorithm. The developed analytical solutions are validated via the numerical solution. We also verify previous observations that were based on a simplified transmission line model. It is shown that at high levels of negative differential conductance, plasmon amplification is indeed possible. The proposed rigorous models can enable accurate design and optimization of practical resonant tunnel diode-based plasma-wave devices for terahertz sources, mixers, and detectors, by allowing a precise representation of their coupling when integrated with other electromagnetic structures.
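For readers unfamiliar with the finite difference time domain (FDTD) machinery mentioned above, a generic leapfrog update looks as follows. This is a minimal 1D vacuum sketch, not the paper's nonlinear full-wave hydrodynamic solver; the grid sizes, Courant number, and source are illustrative choices.

# Minimal 1D FDTD (Yee leapfrog) sketch in normalized units, illustrating the
# finite-difference time-domain update pattern; a generic vacuum example, not
# the hydrodynamic plasma-wave solver described in the abstract.
import numpy as np

nx, nt = 400, 1000
ez = np.zeros(nx)          # electric field samples
hy = np.zeros(nx - 1)      # magnetic field samples, staggered half a cell
courant = 0.5              # Courant number dt*c/dx

for n in range(nt):
    hy += courant * np.diff(ez)                     # H update (half leapfrog step)
    ez[1:-1] += courant * np.diff(hy)               # E update
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())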
NASA Technical Reports Server (NTRS)
Fantano, Louis
2015-01-01
Thermal and Fluids Analysis Workshop, Silver Spring, MD, NCTS 21070-15. The Landsat 8 Data Continuity Mission, which is part of the United States Geological Survey (USGS), launched February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine proposed thermal vacuum test conditions, and the issues encountered in attempting to satisfy this requirement.
Bending of an Infinite beam on a base with two parameters in the absence of a part of the base
NASA Astrophysics Data System (ADS)
Aleksandrovskiy, Maxim; Zaharova, Lidiya
2018-03-01
Currently, in connection with the rapid development of high-rise construction and the refinement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods have become topical. The rigor of analytical methods permits a more detailed and accurate characterization of structural behavior, which affects the reliability of objects and can lead to a reduction in their cost. In the article, a model with two parameters is used as the computational model of the base; it can effectively take into account the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter base with a voided section. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are evaluated analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded by a concentrated force applied at the origin, with a fixed value of the length of the voided section. The paper analyses the obtained results for various values of the coefficient that takes the cohesion of the ground into account.
Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).
Suzutani, T; Ishibashi, H; Takatori, T
1978-11-01
The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
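The core idea above, choosing model complexity by minimizing a cross-validated estimate of expected prediction error rather than the within-sample fit, can be sketched with ridge regression on synthetic data. The data-generating process, penalty grid, and fold count below are illustrative assumptions; the paper's own analyses use a real personality item pool and additional algorithms.

# Sketch of EPE minimization via k-fold cross-validation for a ridge penalty,
# on synthetic "item pool" data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 50                        # respondents and items (synthetic)
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 0.5    # only a few items truly predictive
y = X @ beta + rng.normal(size=n)

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_epe(X, y, lam, k=5):
    """k-fold cross-validated mean squared prediction error."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[test] - X[test] @ b) ** 2))
    return np.mean(errs)

scores = {lam: cv_epe(X, y, lam) for lam in (0.01, 0.1, 1.0, 10.0, 100.0)}
print("CV error by penalty:", scores)
print("penalty minimizing estimated EPE:", min(scores, key=scores.get))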
Accessing Social Capital through the Academic Mentoring Process
ERIC Educational Resources Information Center
Smith, Buffy
2007-01-01
This article explores how mentors and mentees create and maintain social capital during the mentoring process. I employ a sociological conceptual framework and rigorous qualitative analytical techniques to examine how students of color and first-generation college students access social capital through mentoring relationships. The findings…
Inferring subunit stoichiometry from single molecule photobleaching
2013-01-01
Single molecule photobleaching is a powerful tool for determining the stoichiometry of protein complexes. By attaching fluorophores to proteins of interest, the number of associated subunits in a complex can be deduced by imaging single molecules and counting fluorophore photobleaching steps. Because some bleaching steps might be unobserved, the ensemble of steps will be binomially distributed. In this work, it is shown that inferring the true composition of a complex from such data is nontrivial because binomially distributed observations present an ill-posed inference problem. That is, a unique and optimal estimate of the relevant parameters cannot be extracted from the observations. Because of this, a method has not been firmly established to quantify confidence when using this technique. This paper presents a general inference model for interpreting such data and provides methods for accurately estimating parameter confidence. The formalization and methods presented here provide a rigorous analytical basis for this pervasive experimental tool. PMID:23712552
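A minimal sketch of the inference problem described above: treating observed bleaching-step counts as binomial in the unknown subunit number n and detection probability p, and profiling the likelihood over p for each candidate n. The counts and grids below are made up for illustration and are not the paper's data or its full inference model; the flat likelihood across several n values illustrates why the problem is ill-posed.

# Binomial likelihood surface for photobleaching step counts (illustrative data).
from math import comb, log

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# hypothetical counts: number of molecules showing k = 0..4 bleaching steps
observed = {0: 4, 1: 30, 2: 80, 3: 70, 4: 16}

def log_likelihood(n, p):
    ll = 0.0
    for k, count in observed.items():
        if k > n:
            return float("-inf")
        ll += count * log(binom_pmf(k, n, p))
    return ll

for n in (4, 5, 6, 7):
    # profile over the detection probability p for each candidate subunit number
    best_p, best_ll = max(((p / 1000, log_likelihood(n, p / 1000))
                           for p in range(1, 1000)), key=lambda t: t[1])
    print(f"n={n}: best p={best_p:.2f}, log-likelihood={best_ll:.1f}")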
NASA Occupant Protection Standards Development
NASA Technical Reports Server (NTRS)
Somers, Jeffrey; Gernhardt, Michael; Lawrence, Charles
2012-01-01
Historically, spacecraft landing systems have been tested with human volunteers, because analytical methods for estimating injury risk were insufficient. These tests were conducted with flight-like suits and seats to verify the safety of the landing systems. Currently, NASA uses the Brinkley Dynamic Response Index to estimate injury risk, although applying it to the NASA environment has drawbacks: (1) it does not indicate the severity or anatomical location of injury, and (2) it is unclear whether the model applies to NASA applications. Because of these limitations, a new validated analytical approach was desired. Leveraging the current state of the art in automotive safety and racing, a new approach was developed with several aspects: (1) define the acceptable level of injury risk by injury severity; (2) determine the appropriate human surrogate for testing and modeling; (3) mine existing human injury data to determine appropriate Injury Assessment Reference Values (IARVs); (4) rigorously validate the IARVs with sub-injurious human testing; and (5) use the validated IARVs to update standards and vehicle requirements.
Enantiospecific Detection of Chiral Nanosamples Using Photoinduced Force
NASA Astrophysics Data System (ADS)
Kamandi, Mohammad; Albooyeh, Mohammad; Guclu, Caner; Veysi, Mehdi; Zeng, Jinwei; Wickramasinghe, Kumar; Capolino, Filippo
2017-12-01
We propose a high-resolution microscopy technique for enantiospecific detection of chiral samples down to sub-100-nm size based on force measurement. We delve into the differential photoinduced optical force Δ F exerted on an achiral probe in the vicinity of a chiral sample when left and right circularly polarized beams separately excite the sample-probe interactive system. We analytically prove that Δ F is entangled with the enantiomer type of the sample enabling enantiospecific detection of chiral inclusions. Moreover, we demonstrate that Δ F is linearly dependent on both the chiral response of the sample and the electric response of the tip and is inversely related to the quartic power of probe-sample distance. We provide physical insight into the transfer of optical activity from the chiral sample to the achiral tip based on a rigorous analytical approach. We support our theoretical achievements by several numerical examples highlighting the potential application of the derived analytic properties. Lastly, we demonstrate the sensitivity of our method to enantiospecify nanoscale chiral samples with chirality parameter on the order of 0.01 and discuss how the sensitivity of our proposed technique can be further improved.
López-Guerra, Enrique A
2017-01-01
We explore the contact problem of a flat-end indenter penetrating intermittently a generalized viscoelastic surface, containing multiple characteristic times. This problem is especially relevant for nanoprobing of viscoelastic surfaces with the highly popular tapping-mode AFM imaging technique. By focusing on the material perspective and employing a rigorous rheological approach, we deliver analytical closed-form solutions that provide physical insight into the viscoelastic sources of repulsive forces, tip–sample dissipation and virial of the interaction. We also offer a systematic comparison to the well-established standard harmonic excitation, which is the case relevant for dynamic mechanical analysis (DMA) and for AFM techniques where tip–sample sinusoidal interaction is permanent. This comparison highlights the substantial complexity added by the intermittent-contact nature of the interaction, which precludes the derivation of straightforward equations as is the case for the well-known harmonic excitations. The derivations offered have been thoroughly validated through numerical simulations. Despite the complexities inherent to the intermittent-contact nature of the technique, the analytical findings highlight the potential feasibility of extracting meaningful viscoelastic properties with this imaging method. PMID:29114450
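A "generalized viscoelastic surface, containing multiple characteristic times" is commonly represented by a Prony (generalized Maxwell) series; the sketch below evaluates such a model's relaxation modulus and its storage and loss moduli under the harmonic (DMA-like) excitation used as the point of comparison above. The parameter values are illustrative assumptions, not quantities from the paper.

# Prony-series (generalized Maxwell) sketch of a multi-characteristic-time
# viscoelastic material; parameter values are illustrative.
import numpy as np

G_inf = 1.0e6                        # equilibrium modulus, Pa
G_i   = np.array([5.0e6, 2.0e6])     # arm moduli, Pa
tau_i = np.array([1.0e-4, 1.0e-2])   # characteristic relaxation times, s

def relaxation_modulus(t):
    """G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.atleast_1d(t)[:, None]
    return G_inf + np.sum(G_i * np.exp(-t / tau_i), axis=1)

def harmonic_moduli(omega):
    """Storage G'(w) and loss G''(w) under steady sinusoidal excitation (DMA case)."""
    wt = np.atleast_1d(omega)[:, None] * tau_i
    storage = G_inf + np.sum(G_i * wt**2 / (1 + wt**2), axis=1)
    loss = np.sum(G_i * wt / (1 + wt**2), axis=1)
    return storage, loss

print(relaxation_modulus([0.0, 1e-3, 1e-1]))
print(harmonic_moduli(2 * np.pi * np.array([1.0, 1e3, 1e5])))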
Direct computation of orbital sunrise or sunset event parameters
NASA Technical Reports Server (NTRS)
Buglia, J. J.
1986-01-01
An analytical method is developed for determining the geometrical parameters which are needed to describe the viewing angles of the Sun relative to an orbiting spacecraft when the Sun rises or sets with respect to the spacecraft. These equations are rigorous and are frequently used for parametric studies relative to mission planning and for determining instrument parameters. The text is wholly self-contained in that no external reference to ephemerides or other astronomical tables is needed. Equations are presented which allow the computation of Greenwich sidereal time and right ascension and declination of the Sun generally to within a few seconds of arc, or a few tenths of a second in time.
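A self-contained computation in the spirit of the report can be sketched with standard low-precision almanac-style formulas for Greenwich mean sidereal time and the Sun's right ascension and declination. These are generic approximations, not the report's own equations, and the constants below come from widely used low-precision solar position formulas rather than from the cited document.

# Low-precision Greenwich sidereal time and solar RA/dec from a Julian date.
import math

def sun_and_gmst(jd):
    d = jd - 2451545.0                                     # days from J2000.0
    gmst = (280.46061837 + 360.98564736629 * d) % 360.0    # degrees

    L = (280.460 + 0.9856474 * d) % 360.0                  # mean longitude, deg
    g = math.radians((357.528 + 0.9856003 * d) % 360.0)    # mean anomaly
    lam = math.radians(L + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g))
    eps = math.radians(23.439 - 0.0000004 * d)             # obliquity of ecliptic

    ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))) % 360.0
    dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
    return gmst, ra, dec

print(sun_and_gmst(2451545.0))   # values at the J2000.0 epoch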
Reproducible analyses of microbial food for advanced life support systems
NASA Technical Reports Server (NTRS)
Petersen, Gene R.
1988-01-01
The use of yeasts in controlled ecological life support systems (CELSS) for microbial food regeneration in space required the accurate and reproducible analysis of intracellular carbohydrate and protein levels. The reproducible analysis of glycogen was a key element in estimating overall content of edibles in candidate yeast strains. Typical analytical methods for estimating glycogen in Saccharomyces were not found to be entirely applicable to other candidate strains. Rigorous cell lysis coupled with acid/base fractionation followed by specific enzymatic glycogen analyses were required to obtain accurate results in two strains of Candida. A profile of edible fractions of these strains was then determined. The suitability of yeasts as food sources in CELSS food production processes is discussed.
75 FR 71131 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... impacts. To complete this task with scientific rigor, it will be necessary to collect high quality survey... instruments, methodologies, procedures, and analytical techniques for this task. Moreover, they have been pilot tested in 11 States. The tools and techniques were submitted for review, and were approved, by...
The Temporal Organization of Syllabic Structure
ERIC Educational Resources Information Center
Shaw, Jason A.
2010-01-01
This dissertation develops analytical tools which enable rigorous evaluation of competing syllabic parses on the basis of temporal patterns in speech production data. The data come from the articulographic tracking of fleshpoints on target speech organs, e.g., tongue, lips, jaw, in experiments with native speakers of American English and Moroccan…
Toddi A. Steelman; Branda Nowell; Deena Bayoumi; Sarah McCaffrey
2014-01-01
We leverage economic theory, network theory, and social network analytical techniques to bring greater conceptual and methodological rigor to understanding how information is exchanged during disasters. We ask, "How can information relationships be evaluated more systematically during a disaster response?" "Infocentric analysis," a term and...
Cash on Demand: A Framework for Managing a Cash Liquidity Position.
ERIC Educational Resources Information Center
Augustine, John H.
1995-01-01
A well-run college or university will seek to accumulate and maintain an appropriate cash reserve or liquidity position. A rigorous analytic process for estimating the size and cost of a liquidity position, based on judgments about the institution's operating risks and opportunities, is outlined. (MSE)
NASA Astrophysics Data System (ADS)
Ganapathy, Vinay; Ramachandran, Ramesh
2017-10-01
The response of a quadrupolar nucleus (nuclear spin with I > 1/2) to an oscillating radio-frequency pulse/field is delicately dependent on the ratio of the quadrupolar coupling constant to the amplitude of the pulse in addition to its duration and oscillating frequency. Consequently, analytic description of the excitation process in the density operator formalism has remained less transparent within existing theoretical frameworks. As an alternative, the utility of the "concept of effective Floquet Hamiltonians" is explored in the present study to explicate the nuances of the excitation process in multilevel systems. Employing spin I = 3/2 as a case study, a unified theoretical framework for describing the excitation of multiple-quantum transitions in static isotropic and anisotropic solids is proposed within the framework of perturbation theory. The challenges resulting from the anisotropic nature of the quadrupolar interactions are addressed within the effective Hamiltonian framework. The possible role of the various interaction frames on the convergence of the perturbation corrections is discussed along with a proposal for a "hybrid method" for describing the excitation process in anisotropic solids. Employing suitable model systems, the validity of the proposed hybrid method is substantiated through a rigorous comparison between simulations emerging from exact numerical and analytic methods.
NASA Astrophysics Data System (ADS)
Mazilu, Irina; Gonzalez, Joshua
2008-03-01
From the point of view of a physicist, a bio-molecular motor represents an interesting non-equilibrium system, and it is directly amenable to analysis using standard methods of non-equilibrium statistical physics. We conduct a rigorous Monte Carlo study of three different driven lattice gas models that retain the basic behavior of three types of cytoskeletal molecular motors. Our models incorporate novel features such as realistic dynamics rules and complex motor-motor interactions. We are interested in a deeper understanding of how various parameters influence the macroscopic behavior of these systems, what the density profile is, and whether the system undergoes a phase transition. On the analytical front, we computed the steady-state probability distributions exactly for one of the models using the matrix method established in 1993 by B. Derrida et al. We also explored the possibilities offered by the "Bethe ansatz" method by mapping some well-studied spin models onto asymmetric simple exclusion models (already analyzed using computer simulations), so as to use the results obtained for the spin models to find an exact solution for our problem. We have performed exhaustive computational studies of the kinesin and dynein molecular motor models that prove very useful in checking our analytical work.
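The simplest member of the family of driven lattice gas models described above is the open-boundary totally asymmetric simple exclusion process (TASEP); a bare-bones Monte Carlo sketch follows. The lattice size, entry and exit rates, and sampling schedule are illustrative, and none of the motor-specific dynamics rules or motor-motor interactions are included.

# Minimal open-boundary TASEP Monte Carlo with random-sequential updates,
# accumulating a steady-state density profile (illustrative parameters).
import random

L, alpha, beta = 200, 0.3, 0.7        # lattice size, entry rate, exit rate
steps, burn_in = 400000, 100000
site = [0] * L
density = [0.0] * L
samples = 0

random.seed(1)
for t in range(steps):
    i = random.randrange(-1, L)
    if i == -1:                        # particle injection at the left boundary
        if site[0] == 0 and random.random() < alpha:
            site[0] = 1
    elif i == L - 1:                   # particle extraction at the right boundary
        if site[i] == 1 and random.random() < beta:
            site[i] = 0
    else:                              # hop to the right if the next site is empty
        if site[i] == 1 and site[i + 1] == 0:
            site[i], site[i + 1] = 0, 1
    if t >= burn_in and t % 200 == 0:  # sample the occupation profile occasionally
        density = [d + s for d, s in zip(density, site)]
        samples += 1

profile = [d / samples for d in density]
print("bulk density near the middle:", sum(profile[90:110]) / 20)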
The use of analytical sedimentation velocity to extract thermodynamic linkage.
Cole, James L; Correia, John J; Stafford, Walter F
2011-11-01
For 25 years, the Gibbs Conference on Biothermodynamics has focused on the use of thermodynamics to extract information about the mechanism and regulation of biological processes. This includes the determination of equilibrium constants for macromolecular interactions by high precision physical measurements. These approaches further reveal thermodynamic linkages to ligand binding events. Analytical ultracentrifugation has been a fundamental technique in the determination of macromolecular reaction stoichiometry and energetics for 85 years. This approach is highly amenable to the extraction of thermodynamic couplings to small molecule binding in the overall reaction pathway. In the 1980s this approach was extended to the use of sedimentation velocity techniques, primarily by the analysis of tubulin-drug interactions by Na and Timasheff. This transport method necessarily incorporates the complexity of both hydrodynamic and thermodynamic nonideality. The advent of modern computational methods in the last 20 years has subsequently made the analysis of sedimentation velocity data for interacting systems more robust and rigorous. Here we review three examples where sedimentation velocity has been useful at extracting thermodynamic information about reaction stoichiometry and energetics. Approaches to extract linkage to small molecule binding and the influence of hydrodynamic nonideality are emphasized. These methods are shown to also apply to the collection of fluorescence data with the new Aviv FDS. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Brown, James L.; Naughton, Jonathan W.
1999-01-01
A thin film of oil on a surface responds primarily to the wall shear stress generated on that surface by a three-dimensional flow. The oil film is also subject to wall pressure gradients, surface tension effects and gravity. The partial differential equation governing the oil film flow is shown to be related to Burgers' equation. Analytical and numerical methods for solving the thin oil film equation are presented. A direct numerical solver is developed where the wall shear stress variation on the surface is known and which solves for the oil film thickness spatial and time variation on the surface. An inverse numerical solver is also developed where the oil film thickness spatial variation over the surface at two discrete times is known and which solves for the wall shear stress variation over the test surface. A One-Time-Level inverse solver is also demonstrated. The inverse numerical solver provides a mathematically rigorous basis for an improved form of a wall shear stress instrument suitable for application to complex three-dimensional flows. To demonstrate the complexity of flows for which these oil film methods are now suitable, extensive examination is accomplished for these analytical and numerical methods as applied to a thin oil film in the vicinity of a three-dimensional saddle of separation.
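The leading-order balance behind thin-oil-film methods can be sketched by keeping only the wall-shear term of the governing equation, dh/dt + d/dx(tau_w h^2 / (2 mu)) = 0, and advancing it with a first-order upwind scheme. This is a generic illustration under that simplification (pressure gradient, surface tension, and gravity neglected), not the paper's direct or inverse solvers; all numerical values are placeholders.

# Shear-driven thin-oil-film equation in 1D with a first-order upwind scheme.
import numpy as np

nx, dx, dt, nt = 200, 1.0e-3, 1.0e-3, 2000     # grid spacing (m), time step (s)
mu = 0.05                                      # oil viscosity, Pa*s
x = np.arange(nx) * dx
tau = 1.0 + 0.5 * np.sin(2 * np.pi * x / x[-1])  # prescribed wall shear, Pa (>0)
h = np.full(nx, 5.0e-6)                        # initial film thickness, m

for n in range(nt):
    flux = tau * h**2 / (2.0 * mu)             # shear-driven volume flux per unit span
    h[1:] -= dt / dx * (flux[1:] - flux[:-1])  # upwind update (flow in +x direction)
    h[0] = h[1]                                # simple zero-gradient inlet condition

print("film thickness range after %d steps: %.3e .. %.3e m" % (nt, h.min(), h.max()))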
Protein Multiplexed Immunoassay Analysis with R.
Breen, Edmond J
2017-01-01
Plasma samples from 177 control and type 2 diabetes patients collected at three Australian hospitals are screened for 14 analytes using six custom-made multiplex kits across 60 96-well plates. In total 354 samples were collected from the patients, representing one baseline and one end point sample from each patient. R methods and source code for analyzing the analyte fluorescence response obtained from these samples by Luminex Bio-Plex ® xMap multiplexed immunoassay technology are disclosed. Techniques and R procedures for reading Bio-Plex ® result files for statistical analysis and data visualization are also presented. The need for technical replicates and the number of technical replicates are addressed as well as plate layout design strategies. Multinomial regression is used to determine plate to sample covariate balance. Methods for matching clinical covariate information to Bio-Plex ® results and vice versa are given. As well as methods for measuring and inspecting the quality of the fluorescence responses are presented. Both fixed and mixed-effect approaches for immunoassay statistical differential analysis are presented and discussed. A random effect approach to outlier analysis and detection is also shown. The bioinformatics R methodology present here provides a foundation for rigorous and reproducible analysis of the fluorescence response obtained from multiplexed immunoassays.
Physical-geometric optics method for large size faceted particles.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong
2017-10-02
A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields the relevant analytical formulas effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods including a numerically rigorous method.
Statistical Approaches to Assess Biosimilarity from Analytical Data.
Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette
2017-01-01
Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.
Napiorkowski, Maciej; Urbanczyk, Waclaw
2018-04-30
We show that in twisted microstructured optical fibers (MOFs) the coupling between the core and cladding modes can be obtained for helix pitch much greater than previously considered. We provide an analytical model describing scaling properties of the twisted MOFs, which relates coupling conditions to dimensionless ratios between the wavelength, the lattice pitch and the helix pitch of the twisted fiber. Furthermore, we verify our model using a rigorous numerical method based on the transformation optics formalism and study its limitations. The obtained results show that for appropriately designed twisted MOFs, distinct, high loss resonance peaks can be obtained in a broad wavelength range already for the fiber with 9 mm helix pitch, thus allowing for fabrication of coupling based devices using a less demanding method involving preform spinning.
Stress concentration in a cylindrical shell containing a circular hole.
NASA Technical Reports Server (NTRS)
Adams, N. J. I.
1971-01-01
The state of stress in a cylindrical shell containing a circular cutout was determined for axial tension, torsion, and internal pressure loading. The solution was obtained for the shallow shell equations by a variational method. The results were expressed in terms of a nondimensional curvature parameter which was a function of shell radius, shell thickness, and hole radius. The function chosen for the solution was such that when the radius of the cylindrical shell approaches infinity, the flat-plate solution was obtained. The results are compared with solutions obtained by more rigorous analytical methods, and with some experimental results. For small values of the curvature parameter, the agreement is good. For higher values of the curvature parameter, the present solutions indicate a limiting value of stress concentration, which is in contrast to previous results.
Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David
2009-01-01
This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.
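Of the analytic methods listed above, the difference-in-difference idea is easy to sketch on synthetic data: it removes time-invariant group differences by contrasting pre-post changes rather than follow-up levels. The numbers below are synthetic and illustrative only, with no relation to the NSCAW data or the study's estimates.

# Difference-in-difference sketch on synthetic pre/post outcome scores.
import numpy as np

rng = np.random.default_rng(42)
n = 500
placed = rng.random(n) < 0.3                       # selection into placement
baseline = 50 + 5 * placed + rng.normal(0, 10, n)  # placed children differ at baseline
true_effect = 2.0
followup = baseline + 1.0 + true_effect * placed + rng.normal(0, 10, n)

# Naive cross-sectional contrast at follow-up confounds selection with the effect.
naive = followup[placed].mean() - followup[~placed].mean()

# Difference-in-difference removes time-invariant group differences.
did = ((followup[placed] - baseline[placed]).mean()
       - (followup[~placed] - baseline[~placed]).mean())

print(f"naive follow-up contrast: {naive:.2f}")
print(f"difference-in-difference estimate: {did:.2f} (true effect {true_effect})")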
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of an equivalence acceptance criterion and a quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on the Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of the original biological drug product. A key component of biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such a demonstration relies on the application of statistical methods to establish a similarity margin and an appropriate test for equivalence between the two products. This paper discusses statistical issues with the demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
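To make the Tier 1 and Tier 2 procedures above concrete, here is a minimal Python sketch assuming independent, normally distributed lots, which is precisely the assumption the paper questions. The lot values, the significance level, and the constant K = 3 are illustrative placeholders rather than values from the paper or from regulatory guidance.

# Tier 1 equivalence test (TOST with margin 1.5*S_R) and Tier 2 quality range
# (mean_R +/- K*S_R) under an i.i.d. lot assumption; data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ref = rng.normal(100.0, 4.0, size=10)     # reference product lots
test = rng.normal(101.0, 4.0, size=8)     # proposed (test) product lots

s_r = ref.std(ddof=1)
margin = 1.5 * s_r                        # Tier 1 equivalence acceptance criterion

# Two one-sided t-tests (TOST) on the difference in means, pooled variance.
n1, n2 = len(test), len(ref)
sp2 = ((n1 - 1) * test.var(ddof=1) + (n2 - 1) * ref.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
diff = test.mean() - ref.mean()
df = n1 + n2 - 2
p_lower = stats.t.sf((diff + margin) / se, df)     # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)    # H0: diff >= +margin
tier1_pass = max(p_lower, p_upper) < 0.05

# Tier 2: require a large fraction of test lots inside mean_R +/- K*S_R.
K = 3.0
in_range = np.mean((test > ref.mean() - K * s_r) & (test < ref.mean() + K * s_r))
print(f"Tier 1 TOST p-values: {p_lower:.3f}, {p_upper:.3f}, pass={tier1_pass}")
print(f"Tier 2: fraction of test lots in quality range = {in_range:.2f}")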
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells
Trimper, John B.; Trettel, Sean G.; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal “place cells,” neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These “replay” events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay. PMID:28824388
NASA Astrophysics Data System (ADS)
Wu, Haiqing; Bai, Bing; Li, Xiaochun
2018-02-01
Existing analytical or approximate solutions that are appropriate for describing the migration mechanics of CO2 and the evolution of fluid pressure in reservoirs do not consider the high compressibility of CO2, which reduces their calculation accuracy and application value. Therefore, this work first derives a new governing equation that represents the movement of complex fluids in reservoirs, based on the equation of continuity and the generalized Darcy's law. A more rigorous definition of the coefficient of compressibility of fluid is then presented, and a power function model (PFM) that characterizes the relationship between the physical properties of CO2 and the pressure is derived. Meanwhile, to avoid the difficulty of determining the saturation of fluids, a method that directly assumes the average relative permeability of each fluid phase in different fluid domains is proposed, based on the theory of gradual change. An advanced analytical solution is obtained that includes both the partial miscibility and the compressibility of CO2 and brine in evaluating the evolution of fluid pressure by integrating within different regions. Finally, two typical sample analyses are used to verify the reliability, improved nature and universality of this new analytical solution. Based on the physical characteristics and the results calculated for the examples, this work elaborates the concept and basis of partitioning for use in further work.
Validation of a 30-year-old process for the manufacture of L-asparaginase from Erwinia chrysanthemi.
Gervais, David; Allison, Nigel; Jennings, Alan; Jones, Shane; Marks, Trevor
2013-04-01
A 30-year-old manufacturing process for the biologic product L-asparaginase from the plant pathogen Erwinia chrysanthemi was rigorously qualified and validated, with a high level of agreement between validation data and the 6-year process database. L-Asparaginase exists in its native state as a tetrameric protein and is used as a chemotherapeutic agent in the treatment regimen for Acute Lymphoblastic Leukaemia (ALL). The manufacturing process involves fermentation of the production organism, extraction and purification of the L-asparaginase to make drug substance (DS), and finally formulation and lyophilisation to generate drug product (DP). The extensive manufacturing experience with the product was used to establish ranges for all process parameters and product quality attributes. The product and in-process intermediates were rigorously characterised, and new assays, such as size-exclusion and reversed-phase UPLC, were developed, validated, and used to analyse several pre-validation batches. Finally, three prospective process validation batches were manufactured and product quality data generated using both the existing and the new analytical methods. These data demonstrated the process to be robust, highly reproducible and consistent, and the validation was successful, contributing to the granting of an FDA product license in November, 2011.
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
Evaluation of holonomic quantum computation: adiabatic versus nonadiabatic.
Cen, LiXiang; Li, XinQi; Yan, YiJing; Zheng, HouZhi; Wang, ShunJin
2003-04-11
Based on the analytical solution to the time-dependent Schrödinger equations, we evaluate the holonomic quantum computation beyond the adiabatic limit. Besides providing rigorous confirmation of the geometrical prediction of holonomies, the present dynamical resolution offers also a practical means to study the nonadiabaticity induced effects for the universal qubit operations.
The Personal Selling Ethics Scale: Revisions and Expansions for Teaching Sales Ethics
ERIC Educational Resources Information Center
Donoho, Casey; Heinze, Timothy
2011-01-01
The field of sales draws a large number of marketing graduates. Sales curricula used within today's marketing programs should include rigorous discussions of sales ethics. The Personal Selling Ethics Scale (PSE) provides an analytical tool for assessing and discussing students' ethical sales sensitivities. However, since the scale fails to address…
ERIC Educational Resources Information Center
Dombrowski, Stefan C.; Watkins, Marley W.; Brogan, Michael J.
2009-01-01
This study investigated the factor structure of the Reynolds Intellectual Assessment Scales (RIAS) using rigorous exploratory factor analytic and factor extraction procedures. The results of this study indicate that the RIAS is a single factor test. Despite these results, higher order factor analysis using the Schmid-Leiman procedure indicates…
Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material
NASA Astrophysics Data System (ADS)
Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka; Soljačić, Marin; Mortensen, N. Asger
2017-04-01
The classical treatment of plasmonics is insufficient at the nanometer-scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions—the Feibelman d parameters—in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; particularly, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated by the derivation of a modified sum rule for complementary structures, a rigorous reformulation of Kreibig's phenomenological damping prescription, and an account of the small-scale resonance shifting of simple and noble metal nanostructures.
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.; Leonov, G. A.; Yuldashev, M. V.; Yuldashev, R. V.
2017-10-01
During recent years it has been shown that hidden oscillations, whose basin of attraction does not overlap with small neighborhoods of equilibria, may significantly complicate simulation of dynamical models, lead to unreliable results and wrong conclusions, and cause serious damage in drilling systems, aircrafts control systems, electromechanical systems, and other applications. This article provides a survey of various phase-locked loop based circuits (used in satellite navigation systems, optical, and digital communication), where such difficulties take place in MATLAB and SPICE. Considered examples can be used for testing other phase-locked loop based circuits and simulation tools, and motivate the development and application of rigorous analytical methods for the global analysis of phase-locked loop based circuits.
Technique for Predicting the RF Field Strength Inside an Enclosure
NASA Technical Reports Server (NTRS)
Hallett, M.; Reddell, J.
1998-01-01
This Memorandum presents a simple analytical technique for predicting the RF electric field strength inside an enclosed volume in which radio frequency radiation occurs. The technique was developed to predict the radio frequency (RF) field strength within a launch vehicle's fairing from payloads launched with their telemetry transmitters radiating, and to assess the impact of the radiation on the vehicle and payload. The RF field strength is shown to be a function of the surface materials and surface areas. The method accounts for RF energy losses within exposed surfaces, through RF windows, and within multiple layers of dielectric materials which may cover the surfaces. This Memorandum includes the rigorous derivation of all equations and presents examples and data to support the validity of the technique.
Family Meals and Child Academic and Behavioral Outcomes
Miller, Daniel P.; Waldfogel, Jane; Han, Wen-Jui
2012-01-01
This study investigates the link between the frequency of family breakfasts and dinners and child academic and behavioral outcomes in a panel sample of 21,400 children aged 5–15. It complements previous work by examining younger and older children separately and by using information on a large number of controls and rigorous analytic methods to discern whether there is causal relation between family meal frequency (FMF) and child outcomes. In child fixed effects models, which controlled for unchanging aspects of children and their families, there were no significant (p<.05) relations between FMF and either academic or behavioral outcomes, a novel finding. These results were robust to various specifications of the FMF variables and did not differ by child age. PMID:22880815
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhaoyuan Liu; Kord Smith; Benoit Forget
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of infinite medium hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
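For reference, the commonly applied "out-scatter" transport correction that the abstract critiques can be written in a few lines. The sketch below uses the textbook relations Sigma_tr = Sigma_t - mubar*Sigma_s and D = 1/(3*Sigma_tr), with mubar approximated as 2/(3A); the one-group numbers are illustrative, not evaluated data, and this is the approximation being criticized, not the new rigorous method.

# Out-scatter transport correction and diffusion coefficient (illustrative values).
def out_scatter_diffusion(sigma_t, sigma_s, A):
    mubar = 2.0 / (3.0 * A)               # average scattering cosine for mass number A
    sigma_tr = sigma_t - mubar * sigma_s  # transport-corrected cross section, 1/cm
    return sigma_tr, 1.0 / (3.0 * sigma_tr)

sigma_tr, D = out_scatter_diffusion(sigma_t=1.0, sigma_s=0.95, A=1.0)  # hydrogen-like
print(f"Sigma_tr = {sigma_tr:.3f} 1/cm, D = {D:.3f} cm")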
Preserving pre-rigor meat functionality for beef patty production.
Claus, J R; Sørheim, O
2006-06-01
Three methods were examined for preserving pre-rigor meat functionality in beef patties. Hot-boned semimembranosus muscles were processed as follows: (1) pre-rigor ground, salted, patties immediately cooked; (2) pre-rigor ground, salted and stored overnight; (3) pre-rigor injected with brine; and (4) post-rigor ground and salted. Raw patties contained 60% lean beef, 19.7% beef fat trim, 1.7% NaCl, 3.6% starch, and 15% water. Pre-rigor processing occurred at 3-3.5h postmortem. Patties made from pre-rigor ground meat had higher pH values; greater protein solubility; firmer, more cohesive, and chewier texture; and substantially lower cooking losses than the other treatments. Addition of salt was sufficient to reduce the rate and extent of glycolysis. Brine injection of intact pre-rigor muscles resulted in some preservation of the functional properties but not as pronounced as with salt addition to pre-rigor ground meat.
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
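As a purely illustrative companion to the abstract, the sketch below integrates the Shimizu-Morioka form of the extended Lorenz model numerically. The parameter values are placeholders chosen for demonstration, not the parameter set covered by the proof, and a computed trajectory is of course no substitute for the analytic argument described above.

# Numerical integration of the Shimizu-Morioka system
#   x' = y,  y' = x - a*y - x*z,  z' = -b*z + x^2   (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.85, 0.50

def rhs(t, u):
    x, y, z = u
    return [y, x - a * y - x * z, -b * z + x * x]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.1, 0.1], max_step=0.01)
x, z = sol.y[0], sol.y[2]
print("x range:", x.min(), x.max())
print("z range:", z.min(), z.max())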
Quantitative Analysis of Fullerene Nanomaterials in Environmental Systems: A Critical Review
Isaacson, Carl W.; Kleber, Markus; Field, Jennifer A.
2009-01-01
The increasing production and use of fullerene nanomaterials has led to calls for more information regarding the potential impacts that releases of these materials may have on human and environmental health. Fullerene nanomaterials, which are comprised of both fullerenes and surface-functionalized fullerenes, are used in electronic, optic, medical and cosmetic applications. Measuring fullerene nanomaterial concentrations in natural environments is difficult because they exhibit a duality of physical and chemical characteristics as they transition from hydrophobic to polar forms upon exposure to water. In aqueous environments, this is expressed as their tendency to initially (i) self assemble into aggregates of appreciable size and hydrophobicity, and subsequently (ii) interact with the surrounding water molecules and other chemical constituents in natural environments thereby acquiring negative surface charge. Fullerene nanomaterials may therefore deceive the application of any single analytical method that is applied with the assumption that fullerenes have but one defining characteristic (e.g., hydrophobicity). [1] We find that analytical procedures are needed to account for the potentially transitory nature of fullerenes in natural environments through the use of approaches that provide chemically-explicit information including molecular weight and the number and identity of surface functional groups. [2] We suggest that sensitive and mass-selective detection, such as that offered by mass spectrometry when combined with optimized extraction procedures, offers the greatest potential to achieve this goal. [3] With this review, we show that significant improvements in analytical rigor would result from an increased availability of well characterized authentic standards, reference materials, and isotopically-labeled internal standards. Finally, the benefits of quantitative and validated analytical methods for advancing the knowledge on fullerene occurrence, fate, and behavior are indicated. PMID:19764203
Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.
Hele, Timothy J H; Ananth, Nandini
2016-12-22
We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.
Embedding dynamical networks into distributed models
NASA Astrophysics Data System (ADS)
Innocenti, Giacomo; Paoletti, Paolo
2015-07-01
Large networks of interacting dynamical systems are well-known for the complex behaviours they are able to display, even when each node features a quite simple dynamics. Despite examples of such networks being widespread both in nature and in technological applications, the interplay between the local and the macroscopic behaviour, through the interconnection topology, is still not completely understood. Moreover, traditional analytical methods for dynamical response analysis fail because of the intrinsically large dimension of the phase space of the network, which makes the general problem intractable. Therefore, in this paper we develop an approach aiming to condense all the information in a compact description based on partial differential equations. By focusing on propagative phenomena, rigorous conditions under which the original network dynamical properties can be successfully analysed within the proposed framework are derived as well. A network of FitzHugh-Nagumo systems is finally used to illustrate the effectiveness of the proposed method.
Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data
NASA Astrophysics Data System (ADS)
Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.
2018-01-01
Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
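The quantity at the heart of the method, the integral of the cubic spline kernel over a cell volume, can be checked numerically. The sketch below uses the standard 3D M4 cubic spline SPH kernel and brute-force Monte Carlo over a box-shaped cell as a stand-in; the paper's closed-form expression for an arbitrary Voronoi cell is not reproduced here, and the cell geometry and smoothing length are illustrative.

# Monte Carlo estimate of the cubic spline kernel mass contained in a cell.
import numpy as np

def cubic_spline_w(r, h):
    """Standard 3D cubic spline (M4) kernel W(r, h), normalized so that its
    integral over all space equals 1."""
    q = np.asarray(r) / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

rng = np.random.default_rng(3)
h = 1.0
lo, hi = np.array([-0.5, -0.5, -0.5]), np.array([0.5, 0.5, 0.5])   # box-shaped "cell"
pts = lo + (hi - lo) * rng.random((200000, 3))                     # particle at origin
r = np.linalg.norm(pts, axis=1)
cell_volume = np.prod(hi - lo)
integral = cubic_spline_w(r, h).mean() * cell_volume
print("fraction of the particle's kernel mass deposited in the cell:", integral)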
Tenderness of pre- and post rigor lamb longissimus muscle.
Geesink, Geert; Sujang, Sadi; Koohmaraie, Mohammad
2011-08-01
Lamb longissimus muscle (n=6) sections were cooked at different times post mortem (prerigor, at rigor, 1 day p.m., and 7 days p.m.) using two cooking methods. Using a boiling water bath, samples were either cooked to a core temperature of 70 °C or boiled for 3 h. The latter method was meant to reflect the traditional cooking method employed in countries where preparation of prerigor meat is practiced. The time postmortem at which the meat was prepared had a large effect on the tenderness (shear force) of the meat (P<0.01). Cooking prerigor and at rigor meat to 70 °C resulted in higher shear force values than their post rigor counterparts at 1 and 7 days p.m. (9.4 and 9.6 vs. 7.2 and 3.7 kg, respectively). The differences in tenderness between the treatment groups could be largely explained by a difference in contraction status of the meat after cooking and the effect of ageing on tenderness. Cooking pre and at rigor meat resulted in severe muscle contraction as evidenced by the differences in sarcomere length of the cooked samples. Mean sarcomere lengths in the pre and at rigor samples ranged from 1.05 to 1.20 μm. The mean sarcomere length in the post rigor samples was 1.44 μm. Cooking for 3 h at 100 °C did improve the tenderness of pre and at rigor prepared meat as compared to cooking to 70 °C, but not to the extent that ageing did. It is concluded that additional intervention methods are needed to improve the tenderness of prerigor cooked meat. Copyright © 2011 Elsevier B.V. All rights reserved.
A Case Study of Resources Management Planning with Multiple Objectives and Projects
David L. Peterson; David G. Silsbee; Daniel L. Schmoldt
1995-01-01
Each National Park Service unit in the United States produces a resources management plan (RMP) every four years or less. The plans commit budgets and personnel to specific projects for four years, but they are prepared with little quantitative and analytical rigor and without formal decision-making tools. We have previously described a multiple objective planning...
Strategic and tactical planning for managing national park resources
Daniel L. Schmoldt; David L. Peterson
2001-01-01
Each National Park Service unit in the United States produces a resource management plan (RMP) every four years or less. These plans constitute a strategic agenda for a park. Later, tactical plans commit budgets and personnel to specific projects over the planning horizon. Yet neither planning stage incorporates much quantitative and analytical rigor, and both are devoid of...
Hertzian Dipole Radiation over Isotropic Magnetodielectric Substrates
2015-03-01
Analytical and numerical techniques in the Green's function treatment of microstrip antennas and scatterers. IEE Proceedings, March 1983, 130(2). This report investigates dipole antennas printed on grounded... engineering of thin planar antennas. Since these materials often require complicated constitutive equations to describe their properties rigorously, the...
The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance
ERIC Educational Resources Information Center
Castellano, Katherine E.; McCaffrey, Daniel F.
2017-01-01
Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…
Mario Bunge's Materialist Theory of Mind and Contemporary Cognitive Science
ERIC Educational Resources Information Center
Slezak, Peter
2012-01-01
Bunge's writings on the mind-body problem provide a rigorous, analytical antidote to the persistent anti-materialist tendency that has characterized the history of philosophy and science. Bunge gives special attention to dualism and its shortcomings, and this attention is welcome in view of the resurgence of the doctrine today. However, I focus my…
The Community College Mission: History and Theory, 1930-2000. Working Paper Series. Number 1-09
ERIC Educational Resources Information Center
Meier, Kenneth M.
2008-01-01
There is a significant omission in the literature concerning the historical origins of the community college and the social and educational forces that have shaped its mission. Rigorous historical studies are relatively rare in higher education. For community colleges, analytical histories are even less common (Ratcliff, 1987; Frye, 1991, 1992).…
ERIC Educational Resources Information Center
Pavel, Ioana E.; Alnajjar, Khadijeh S.; Monahan, Jennifer L.; Stahler, Adam; Hunter, Nora E.; Weaver, Kent M.; Baker, Joshua D.; Meyerhoefer, Allie J.; Dolson, David A.
2012-01-01
A novel laboratory experiment was successfully implemented for undergraduate and graduate students in physical chemistry and nanotechnology. The main goal of the experiment was to rigorously determine the surface-enhanced Raman scattering (SERS)-based sensing capabilities of colloidal silver nanoparticles (AgNPs). These were quantified by…
Mechanical properties of frog skeletal muscles in iodoacetic acid rigor.
Mulvany, M J
1975-01-01
1. Methods have been developed for describing the length:tension characteristics of frog skeletal muscles which go into rigor at 4 degrees C following iodoacetic acid poisoning either in the presence of Ca2+ (Ca-rigor) or its absence (Ca-free-rigor). 2. Such rigor muscles showed less resistance to slow stretch (slow rigor resistance) than to fast stretch (fast rigor resistance). The slow and fast rigor resistances of Ca-free-rigor muscles were much lower than those of Ca-rigor muscles. 3. The slow rigor resistance of Ca-rigor muscles was proportional to the amount of overlap between the contractile filaments present when the muscles were put into rigor. 4. Withdrawing Ca2+ from Ca-rigor muscles (induced-Ca-free rigor) reduced their slow and fast rigor resistances. Readdition of Ca2+ (but not Mg2+, Mn2+ or Sr2+) reversed the effect. 5. The slow and fast rigor resistances of Ca-rigor muscles (but not of Ca-free-rigor muscles) decreased with time. 6. The sarcomere structure of Ca-rigor and induced-Ca-free rigor muscles stretched by 0.2 lo was destroyed in proportion to the amount of stretch, but the lengths of the remaining intact sarcomeres were essentially unchanged. This suggests that there had been a successive yielding of the weakest sarcomeres. 7. The difference between the slow and fast rigor resistances and the effect of calcium on these resistances are discussed in relation to possible variations in the strength of crossbridges between the thick and thin filaments. PMID:1082023
Treatment of charge singularities in implicit solvent models.
Geng, Weihua; Yu, Sining; Wei, Guowei
2007-09-21
This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are a bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing with the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.
Engineering education as a complex system
NASA Astrophysics Data System (ADS)
Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim
2011-12-01
This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating problem-solving orientations that emerge within students' thinking as they progress through an engineering curriculum. It is proposed that the traditional engineering education model, while analytically rigorous, is characterised by properties that, although necessary, are insufficient for preparing students to address complex issues of the twenty-first century. A Synthesis and Design Studio model for engineering education is proposed, which maintains the necessary rigor of analysis within a uniquely complex yet sufficiently structured learning environment.
Rapid measurement of field-saturated hydraulic conductivity for areal characterization
Nimmo, J.R.; Schmidt, K.M.; Perkins, K.S.; Stock, J.D.
2009-01-01
To provide an improved methodology for characterizing the field-saturated hydraulic conductivity (Kfs) over broad areas with extreme spatial variability and ordinary limitations of time and resources, we developed and tested a simplified apparatus and procedure, correcting mathematically for the major deficiencies of the simplified implementation. The methodology includes use of a portable, falling-head, small-diameter (~20 cm) single-ring infiltrometer and an analytical formula for Kfs that compensates both for nonconstant falling head and for the subsurface radial spreading that unavoidably occurs with small ring size. We applied this method to alluvial fan deposits varying in degree of pedogenic maturity in the arid Mojave National Preserve, California. The measurements are consistent with a more rigorous and time-consuming Kfs measurement method, produce the expected systematic trends in Kfs when compared among soils of contrasting degrees of pedogenic development, and relate in expected ways to results of widely accepted methods. © Soil Science Society of America. All rights reserved.
Thermodynamics of reformulated automotive fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.
1995-06-01
Two methods for predicting Reid vapor pressure (Rvp) and initial vapor emissions of reformulated gasoline blends that contain one or more oxygenated compounds show excellent agreement with experimental data. In the first method, method A, D-86 distillation data for gasoline blends are used for predicting Rvp from a simulation of the mini dry vapor pressure equivalent (Dvpe) experiment. The other method, method B, relies on analytical information (PIANO analyses) of the base gasoline and uses classical thermodynamics for simulating the same Rvp equivalent (Rvpe) mini experiment. Method B also predicts composition and other properties for the fuel's initial vapor emission. Method B, although complex, is more useful in that it can predict properties of blends without a D-86 distillation. An important aspect of method B is its capability to predict the composition of initial vapor emissions from gasoline blends. Thus, it offers a powerful tool to planners of gasoline blending. Method B uses theoretically sound formulas and rigorous thermodynamic routines, and relies on data and correlations of physical properties that are in the public domain. Results indicate that predictions made with both methods agree very well with experimental values of Dvpe. Computer simulation methods were programmed and tested.
Parandekar, Priya V; Hratchian, Hrant P; Raghavachari, Krishnan
2008-10-14
Hybrid QM:QM (quantum mechanics:quantum mechanics) and QM:MM (quantum mechanics:molecular mechanics) methods are widely used to calculate the electronic structure of large systems where a full quantum mechanical treatment at a desired high level of theory is computationally prohibitive. The ONIOM (our own N-layer integrated molecular orbital molecular mechanics) approximation is one of the more popular hybrid methods, where the total molecular system is divided into multiple layers, each treated at a different level of theory. In a previous publication, we developed a novel QM:QM electronic embedding scheme within the ONIOM framework, where the model system is embedded in the external Mulliken point charges of the surrounding low-level region to account for the polarization of the model system wave function. Therein, we derived and implemented a rigorous expression for the embedding energy as well as analytic gradients that depend on the derivatives of the external Mulliken point charges. In this work, we demonstrate the applicability of our QM:QM method with point charge embedding and assess its accuracy. We study two challenging systems--zinc metalloenzymes and silicon oxide cages--and demonstrate that electronic embedding shows significant improvement over mechanical embedding. We also develop a modified technique for the energy and analytic gradients using a generalized asymmetric Mulliken embedding method involving an unequal splitting of the Mulliken overlap populations to offer improvement in situations where the Mulliken charges may be deficient.
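As background, here is a minimal sketch of the two-layer ONIOM extrapolation that the embedding scheme builds on. The function name and energies are placeholders, not an actual quantum-chemistry API, and the electronic-embedding dependence on Mulliken charges is only indicated in comments.

```python
# Minimal sketch of the two-layer ONIOM extrapolation underlying the abstract.
# The three energies would come from separate electronic-structure calculations;
# the values below are placeholder numbers (nominally hartree), purely illustrative.
def oniom2_energy(e_low_real, e_high_model, e_low_model):
    """E(ONIOM) = E_low(real) + E_high(model) - E_low(model)."""
    return e_low_real + e_high_model - e_low_model

# With electronic embedding, the two model-system calculations are carried out
# in the field of the Mulliken point charges of the surrounding low-level
# region, so e_high_model and e_low_model implicitly depend on those charges
# (and their derivatives enter the analytic gradients discussed in the paper).
e_total = oniom2_energy(e_low_real=-1203.417, e_high_model=-310.882, e_low_model=-310.535)
print(e_total)
```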
Using Framework Analysis in nursing research: a worked example.
Ward, Deborah J; Furber, Christine; Tierney, Stephanie; Swallow, Veronica
2013-11-01
To demonstrate Framework Analysis using a worked example and to illustrate how criticisms of qualitative data analysis including issues of clarity and transparency can be addressed. Critics of the analysis of qualitative data sometimes cite lack of clarity and transparency about analytical procedures; this can deter nurse researchers from undertaking qualitative studies. Framework Analysis is flexible, systematic, and rigorous, offering clarity, transparency, an audit trail, an option for theme-based and case-based analysis and for readily retrievable data. This paper offers further explanation of the process undertaken which is illustrated with a worked example. Data were collected from 31 nursing students in 2009 using semi-structured interviews. The data collected are not reported directly here but used as a worked example for the five steps of Framework Analysis. Suggestions are provided to guide researchers through essential steps in undertaking Framework Analysis. The benefits and limitations of Framework Analysis are discussed. Nurses increasingly use qualitative research methods and need to use an analysis approach that offers transparency and rigour which Framework Analysis can provide. Nurse researchers may find the detailed critique of Framework Analysis presented in this paper a useful resource when designing and conducting qualitative studies. Qualitative data analysis presents challenges in relation to the volume and complexity of data obtained and the need to present an 'audit trail' for those using the research findings. Framework Analysis is an appropriate, rigorous and systematic method for undertaking qualitative analysis. © 2013 Blackwell Publishing Ltd.
Scientific Data Analysis Toolkit: A Versatile Add-in to Microsoft Excel for Windows
ERIC Educational Resources Information Center
Halpern, Arthur M.; Frye, Stephen L.; Marzzacco, Charles J.
2018-01-01
Scientific Data Analysis Toolkit (SDAT) is a rigorous, versatile, and user-friendly data analysis add-in application for Microsoft Excel for Windows (PC). SDAT uses the familiar Excel environment to carry out most of the analytical tasks used in data analysis. It has been designed for student use in manipulating and analyzing data encountered in…
ERIC Educational Resources Information Center
Chan, Joseph; To, Ho-Pong; Chan, Elaine
2006-01-01
Despite its growing currency in academic and policy circles, social cohesion is a term in need of a clearer and more rigorous definition. This article provides a critical review of the ways social cohesion has been conceptualized in the literature; in many cases, definitions are too loosely made, with a common confusion between the content and the…
Rigor in Agricultural Education Research Reporting: Implications for the Discipline
ERIC Educational Resources Information Center
Fuhrman, Nicholas E.; Ladewig, Howard
2008-01-01
Agricultural education has been criticized for publishing research lacking many of the rigorous qualities found in publications of other disciplines. A few agricultural education researchers have suggested strategies for improving the rigor with which agricultural education studies report on methods and findings. The purpose of this study was to…
Krompecher, T; Bergerioux, C; Brandt-Casadevall, C; Gujer, H R
1983-07-01
The evolution of rigor mortis was studied in cases of nitrogen asphyxia, drowning and strangulation, as well as in fatal intoxications due to strychnine, carbon monoxide and curariform drugs, using a modified method of measurement. Our experiments demonstrated that: (1) Strychnine intoxication hastens the onset and passing of rigor mortis. (2) CO intoxication delays the resolution of rigor mortis. (3) The intensity of rigor may vary depending upon the cause of death. (4) If the stage of rigidity is to be used to estimate the time of death, it is necessary: (a) to perform a succession of objective measurements of rigor mortis intensity; and (b) to verify the eventual presence of factors that could play a role in the modification of its development.
Regulatory observations in bioanalytical determinations.
Viswanathan, C T
2010-07-01
The concept of measuring analytes in biological media is a long-established area of the quantitative sciences that is employed in many sectors. While academic research and R&D units of private firms have been in the forefront of developing complex methodologies, it is the regulatory environment that has brought the focus and rigor to the quality control of the quantitative determination of drug concentration in biological samples. In this article, the author examines the regulatory findings discovered during the course of several years of auditing bioanalytical work. The outcomes of these findings underscore the importance of quality method validation to ensure the reliability of the data generated. The failure to ensure the reliability of these data can lead to potential risks in the health management of millions of people in the USA.
Quality systems in veterinary diagnostics laboratories.
de Branco, Freitas Maia L M
2007-01-01
Quality assurance of services provided by veterinary diagnostics laboratories is a fundamental element promoted by international animal health organizations to establish trust, confidence and transparency needed for the trade of animals and their products at domestic and international levels. It requires, among other things, trained personnel, consistent and rigorous methodology, choice of suitable methods as well as appropriate calibration and traceability procedures. An important part of laboratory quality management is addressed by ISO/IEC 17025, which aims to facilitate cooperation among laboratories and their associated parties by assuring the generation of credible and consistent information derived from analytical results. Currently, according to OIE recommendation, veterinary diagnostics laboratories are only subject to voluntary compliance with standard ISO/IEC 17025; however, it is proposed here that OIE reference laboratories and collaboration centres strongly consider its adoption.
Design and fabrication of a polarization-independent two-port beam splitter.
Feng, Jijun; Zhou, Changhe; Zheng, Jiangjun; Cao, Hongchao; Lv, Peng
2009-10-10
We design and manufacture a fused-silica polarization-independent two-port beam splitter grating. The physical mechanism of this deeply etched grating can be shown clearly by using the simplified modal method with consideration of corresponding accumulated phase difference of two excited propagating grating modes, which illustrates that the binary-phase fused-silica grating structure depends little on the incident wavelength, but mainly on the ratio of groove depth to grating period and the ratio of incident wavelength to grating period. These analytic results would also be very helpful for wavelength bandwidth analysis. The exact grating profile is optimized by using the rigorous coupled-wave analysis. Holographic recording technology and inductively coupled plasma etching are used to manufacture the fused-silica grating. Experimental results agree well with the theoretical values.
ERIC Educational Resources Information Center
Roldan, Alberto
2010-01-01
The purpose of this study was to examine and document whether there is a correlation between relevance (applicability) focused courses and rigor (scholarly research) focused courses with pedagogical instructional methods or andragogical instructional methods in undergraduate business schools, and how it affects learning behavior and final course…
The MIXED framework: A novel approach to evaluating mixed-methods rigor.
Eckhardt, Ann L; DeVon, Holli A
2017-10-01
Evaluation of rigor in mixed-methods (MM) research is a persistent challenge due to the combination of inconsistent philosophical paradigms, the use of multiple research methods which require different skill sets, and the need to combine research at different points in the research process. Researchers have proposed a variety of ways to thoroughly evaluate MM research, but each method fails to provide a framework that is useful for the consumer of research. In contrast, the MIXED framework is meant to bridge the gap between an academic exercise and practical assessment of a published work. The MIXED framework (methods, inference, expertise, evaluation, and design) borrows from previously published frameworks to create a useful tool for the evaluation of a published study. The MIXED framework uses an experimental eight-item scale that allows for comprehensive integrated assessment of MM rigor in published manuscripts. Mixed methods are becoming increasingly prevalent in nursing and healthcare research requiring researchers and consumers to address issues unique to MM such as evaluation of rigor. © 2017 John Wiley & Sons Ltd.
Performance Comparison of SDN Solutions for Switching Dedicated Long-Haul Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2016-01-01
We consider scenarios with two sites connected over a dedicated, long-haul connection that must quickly fail over in response to degradations in host-to-host application performance. We present two methods for path fail-over using OpenFlow-enabled switches: (a) a light-weight method that utilizes host scripts to monitor the application performance and the dpctl API for switching, and (b) a generic method that uses two OpenDaylight (ODL) controllers and REST interfaces. The restoration dynamics of the application contain significant statistical variations due to the controllers, north interfaces and switches; in addition, the variety of vendor implementations further complicates the choice between different solutions. We present the impulse-response method to estimate the regressions of performance parameters, which enables a rigorous and objective comparison of different solutions. We describe testing results of the two methods, using TCP throughput and connection rtt as main parameters, over a testbed consisting of HP and Cisco switches connected over long-haul connections emulated in hardware by ANUE devices. The combination of analytical and experimental results demonstrates that the dpctl method responds seconds faster than the ODL method on average, while both methods restore TCP throughput.
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to get effectively an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
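The Kaplan-Yorke construction referenced above can be stated compactly. The snippet below computes the Lyapunov (Kaplan-Yorke) dimension from a given spectrum of Lyapunov exponents; it does not implement the Leonov method itself, and the Lorenz-63 exponents in the example are approximate literature values.

```python
import numpy as np

def kaplan_yorke_dimension(lyap_exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a spectrum of Lyapunov exponents:
    D = j + (lambda_1 + ... + lambda_j) / |lambda_{j+1}|, with j the largest
    index for which the partial sum of ordered exponents is non-negative."""
    lam = np.sort(np.asarray(lyap_exponents, dtype=float))[::-1]
    csum = np.cumsum(lam)
    nonneg = np.where(csum >= 0.0)[0]
    if len(nonneg) == 0:
        return 0.0                      # contraction in every direction
    j = nonneg[-1]
    if j == len(lam) - 1:
        return float(len(lam))          # no contracting direction left to interpolate into
    return (j + 1) + csum[j] / abs(lam[j + 1])

# Approximate literature values for the classical Lorenz-63 attractor (~2.06).
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))
```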
(U) An Analytic Study of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-16
We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
Cooperative effects in spherical spasers: Ab initio analytical model
NASA Astrophysics Data System (ADS)
Bordo, V. G.
2017-06-01
A fully analytical semiclassical theory of cooperative optical processes which occur in an ensemble of molecules embedded in a spherical core-shell nanoparticle is developed from first principles. Both the plasmonic Dicke effect and spaser generation are investigated for the designs in which a shell/core contains an arbitrarily large number of active molecules in the vicinity of a metallic core/shell. An essential aspect of the theory is an ab initio account of the feedback from the core/shell boundaries which significantly modifies the molecular dynamics. The theory provides rigorous, albeit simple and physically transparent, criteria for both plasmonic superradiance and surface plasmon generation.
The identification of van Hiele level students on the topic of space analytic geometry
NASA Astrophysics Data System (ADS)
Yudianto, E.; Sunardi; Sugiarti, T.; Susanto; Suharto; Trapsilasiwi, D.
2018-03-01
Geometry topics are still considered difficult by most students. Therefore, this study focused on identifying students' van Hiele levels. The tasks used were developed from questions on the analytic geometry of space. Of the 78 students who worked on these questions, 11.54% (nine students) were classified at the visual level; 5.13% (four students) at the analysis level; 1.28% (one student) at the informal deduction level; 2.56% (two students) at the deduction level; 2.56% (two students) at the rigor level; and 76.93% (sixty students) at the pre-visualization level.
Technical, analytical and computer support
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a rigorous mathematical model for the design and performance analysis of cylindrical silicon-germanium thermoelectric generators is reported; the model consists of two parts, a steady-state (static) part and a transient (dynamic) part. The material study task involves the definition and implementation of a material study that aims to experimentally characterize the long term behavior of the thermoelectric properties of silicon-germanium alloys as a function of temperature. Analytical and experimental efforts are aimed at the determination of the sublimation characteristics of silicon-germanium alloys and the study of sublimation effects on RTG performance. Studies are also performed on a variety of specific topics in thermoelectric energy conversion.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
Forster, B; Ropohl, D; Raule, P
1977-07-05
The manual examination of rigor mortis as currently used, with its often subjective evaluation, frequently produces highly incorrect deductions. It is therefore desirable that such inaccuracies be replaced by objective measurement of rigor mortis at the extremities. To that purpose, a method is described which can also be applied in on-the-spot investigations, and a new formula for the determination of rigor mortis indices (FRR) is introduced.
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
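A minimal sketch of the pseudo multiple replica idea follows, assuming a linear reconstruction function and a prescan coil noise covariance with the coil index as the first k-space dimension. Variable names and the noise scaling convention are illustrative, and the g-factor step (comparing accelerated with unaccelerated SNR) is omitted.

```python
import numpy as np

def pseudo_replica_noise_map(recon, kspace, noise_cov, n_rep=100, rng=None):
    """Monte Carlo ("pseudo multiple replica") estimate of the pixelwise noise
    standard deviation of any linear reconstruction `recon`.

    recon     : function mapping multi-coil k-space (ncoil, ...) to an image
    kspace    : measured multi-coil k-space data, coil index first
    noise_cov : coil noise covariance from a prescan (ncoil x ncoil)
    """
    rng = np.random.default_rng() if rng is None else rng
    chol = np.linalg.cholesky(noise_cov)
    replicas = []
    for _ in range(n_rep):
        white = rng.standard_normal(kspace.shape) + 1j * rng.standard_normal(kspace.shape)
        corr = np.tensordot(chol, white, axes=(1, 0)) / np.sqrt(2.0)  # correlate across coils
        replicas.append(recon(kspace + corr))
    replicas = np.stack(replicas)
    # Pixelwise noise SD; SNR map would be |recon(kspace)| divided by this.
    return replicas.std(axis=0)
```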
Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth
2010-06-15
A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements are discussed, where the monoisotopic peak is no longer the lowest mass peak, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention is required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.
Relativistic electron kinetic effects on laser diagnostics in burning plasmas
NASA Astrophysics Data System (ADS)
Mirnov, V. V.; Den Hartog, D. J.
2018-02-01
Toroidal interferometry/polarimetry (TIP), poloidal polarimetry (PoPola), and Thomson scattering systems (TS) are major optical diagnostics being designed and developed for ITER. Each of them relies upon a sophisticated quantitative understanding of the electron response to laser light propagating through a burning plasma. Review of the theoretical results for two different applications is presented: interferometry/polarimetry (I/P) and polarization of Thomson scattered light, unified by the importance of relativistic (quadratic in vTe/c) electron kinetic effects. For I/P applications, rigorous analytical results are obtained perturbatively by expansion in powers of the small parameter τ = Te/me c2, where Te is electron temperature and me is electron rest mass. Experimental validation of the analytical models has been made by analyzing data of more than 1200 pulses collected from high-Te JET discharges. Based on this validation the relativistic analytical expressions are included in the error analysis and design projects of the ITER TIP and PoPola systems. The polarization properties of incoherent Thomson scattered light are being examined as a method of Te measurement relevant to ITER operational regimes. The theory is based on Stokes vector transformation and Mueller matrices formalism. The general approach is subdivided into frequency-integrated and frequency-resolved cases. For each of them, the exact analytical relativistic solutions are presented in the form of Mueller matrix elements averaged over the relativistic Maxwellian distribution function. New results related to the detailed verification of the frequency-resolved solutions are reported. The precise analytic expressions provide output much more rapidly than relativistic kinetic numerical codes allowing for direct real-time feedback control of ITER device operation.
Numerical simulation of separated flows. Ph.D. Thesis - Stanford Univ., Calif.
NASA Technical Reports Server (NTRS)
Spalart, P. R.; Leonard, A.; Baganoff, D.
1983-01-01
A new numerical method, based on the Vortex Method, for the simulation of two-dimensional separated flows was developed and tested on a wide range of cases. The fluid is incompressible and the Reynolds number is high. A rigorous analytical basis for the representation of the Navier-Stokes equation in terms of the vorticity is used. An equation for the control of circulation around each body is included. An inviscid outer flow (computed by the Vortex Method) was coupled with a viscous boundary layer flow (computed by an Eulerian method). This version of the Vortex Method treats bodies of arbitrary shape, and accurately computes the pressure and shear stress at the solid boundary. These two quantities reflect the structure of the boundary layer. Several versions of the method are presented and applied to various problems, most of which have massive separation. Comparison of its results with other results, generally experimental, demonstrates the reliability and the general accuracy of the new method, with little dependence on empirical parameters. Many of the complex features of the flow past a circular cylinder, over a wide range of Reynolds numbers, are correctly reproduced.
ERIC Educational Resources Information Center
Shakeel, M. Danish; Anderson, Kaitlin P.; Wolf, Patrick J.
2016-01-01
The objective of this meta-analysis is to rigorously assess the participant effects of private school vouchers, or in other words, to estimate the average academic impacts that the offer (or use) of a voucher has on a student. This review adds to the literature by being the first to systematically review all Randomized Control Trials (RCTs) in an…
ERIC Educational Resources Information Center
Shager, Hilary M.; Schindler, Holly S.; Magnuson, Katherine A.; Duncan, Greg J.; Yoshikawa, Hirokazu; Hart, Cassandra M. D.
2013-01-01
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of research design, quality and type of outcome measure, activity level of control group, and…
Analytical calculation on the determination of steep side wall angles from far field measurements
NASA Astrophysics Data System (ADS)
Cisotto, Luca; Pereira, Silvania F.; Urbach, H. Paul
2018-06-01
In the semiconductor industry, the performance and capabilities of the lithographic process are evaluated by measuring specific structures. These structures are often gratings of which the shape is described by a few parameters such as period, middle critical dimension, height, and side wall angle (SWA). Upon direct measurement or retrieval of these parameters, the determination of the SWA suffers from considerable inaccuracies. Although the scattering effects that steep SWAs have on the illumination can be obtained with rigorous numerical simulations, analytical models constitute a very useful tool to get insights into the problem we are treating. In this paper, we develop an approach based on analytical calculations to describe the scattering of a cliff and a ridge with steep SWAs. We also propose a detection system to determine the SWAs of the structures.
Accuracy and performance of 3D mask models in optical projection lithography
NASA Astrophysics Data System (ADS)
Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar
2011-04-01
Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.
NASA Technical Reports Server (NTRS)
Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola
2005-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of T-matrix. Sh-matrix method keeps all advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solvable analytically for particles of any shape. This makes Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of a random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulted from light scattering by such particles, studying how they depend on the particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
Le, Minh Uyen Thi; Son, Jin Gyeong; Shon, Hyun Kyoung; Park, Jeong Hyang; Lee, Sung Bae; Lee, Tae Geol
2018-03-30
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging elucidates molecular distributions in tissue sections, providing useful information about the metabolic pathways linked to diseases. However, delocalization of the analytes and inadequate tissue adherence during sample preparation are among some of the unfortunate phenomena associated with this technique due to their role in the reduction of the quality, reliability, and spatial resolution of the ToF-SIMS images. For these reasons, ToF-SIMS imaging requires a more rigorous sample preparation method in order to preserve the natural state of the tissues. The traditional thaw-mounting method is particularly vulnerable to altered distributions of the analytes due to thermal effects, as well as to tissue shrinkage. In the present study, the authors made comparisons of different tissue mounting methods, including the thaw-mounting method. The authors used conductive tape as the tissue-mounting material on the substrate because it does not require heat from the finger for the tissue section to adhere to the substrate and can reduce charge accumulation during data acquisition. With the conductive-tape sampling method, they were able to acquire reproducible tissue sections and high-quality images without redistribution of the molecules. Also, the authors were successful in preserving the natural states and chemical distributions of the different components of fat metabolites such as diacylglycerol and fatty acids by using the tape-supported sampling in microRNA-14 (miR-14) deleted Drosophila models. The method highlighted here shows an improvement in the accuracy of mass spectrometric imaging of tissue samples.
Adaptation of the Conditions of US EPA Method 538 for the ...
The objective of this study was to evaluate U.S. EPA's Method 538 for the assessment of drinking water exposure to the nerve agent degradation product EA2192, the most toxic degradation product of nerve agent VX. As a result of the similarities in sample preparation and analysis that Method 538 uses for nonvolatile chemicals, this method is applicable to the nonvolatile Chemical Warfare Agent (CWA) degradation product EA2192 in drinking water. The method may be applicable to other nonvolatile CWAs and their respective degradation products as well, but the method will need extensive testing to verify compatibility. Gaps associated with the need for analysis methods capable of analyzing such analytes were addressed by adapting the EPA Method 538 for this CWA degradation product. Many laboratories have the experience and capability to run the already rigorous method for nonvolatile compounds in drinking water. Increasing the number of laboratories capable of carrying out these methods serves to significantly increase the surge laboratory capacity to address sample throughput during a large exposure event. The approach desired for this study was to start with a proven high performance liquid chromatography tandem mass spectrometry (HPLC/MS/MS) method for nonvolatile chemicals in drinking water and assess the inclusion of a similar nonvolatile chemical, EA2192.
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-01-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
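The brute-force Monte Carlo reference strategy described above amounts to averaging the likelihood over draws from the prior. A small sketch with an invented toy model (not one of the study's hydrological models) is shown below.

```python
import numpy as np

def bme_prior_monte_carlo(log_likelihood, prior_sampler, n_samples=100_000, rng=None):
    """Brute-force Monte Carlo estimate of Bayesian model evidence:
    BME = E_prior[ p(data | theta) ], approximated by averaging the likelihood
    over draws from the prior.  Returns log(BME)."""
    rng = np.random.default_rng() if rng is None else rng
    thetas = prior_sampler(n_samples, rng)
    log_l = np.array([log_likelihood(t) for t in thetas])
    m = log_l.max()                                  # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_l - m)))

# Illustrative toy model: unknown mean of Gaussian data, standard normal prior.
data = np.array([0.3, -0.1, 0.4, 0.2])
def log_like(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

print(bme_prior_monte_carlo(log_like, lambda n, rng: rng.standard_normal(n)))
```

Competing models would each be scored this way and ranked by their evidence, which is the numerical reference against which the information criteria are compared.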
Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M
2010-12-01
The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pH(u)) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem, and measurements were made of pre rigor glycogen concentration and pH(u). Spectral and measured data were combined to develop models to quantify glycogen and predict the pH(u) of pre rigor LD. NIR spectra and pre rigor predicted values obtained from quantitative models were shown to be poorly correlated with glycogen and pH(u) (r(2)=0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to its pH(u) were able to correctly categorize 42% of high pH(u) samples. Optimum qualitative and quantitative models derived from NIR spectra found low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
Ridenour, Ty A; Pineo, Thomas Z; Maldonado Molina, Mildred M; Hassmiller Lich, Kristen
2013-06-01
Psychosocial prevention research lacks evidence from intensive within-person lines of research to understand idiographic processes related to development and response to intervention. Such data could be used to fill gaps in the literature and expand the study design options for prevention researchers, including lower-cost yet rigorous studies (e.g., for program evaluations), pilot studies, designs to test programs for low prevalence outcomes, selective/indicated/adaptive intervention research, and understanding of differential response to programs. This study compared three competing analytic strategies designed for this type of research: autoregressive moving average, mixed model trajectory analysis, and P-technique. Illustrative time series data were from a pilot study of an intervention for nursing home residents with diabetes (N = 4) designed to improve control of blood glucose. A within-person, intermittent baseline design was used. Intervention effects were detected using each strategy for the aggregated sample and for individual patients. The P-technique model most closely replicated observed glucose levels. ARIMA and P-technique models were most similar in terms of estimated intervention effects and modeled glucose levels. However, ARIMA and P-technique also were more sensitive to missing data, outliers and number of observations. Statistical testing suggested that results generalize both to other persons as well as to idiographic, longitudinal processes. This study demonstrated the potential contributions of idiographic research in prevention science as well as the need for simulation studies to delineate the research circumstances when each analytic approach is optimal for deriving the correct parameter estimates.
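One of the three strategies compared, ARIMA modeling of a single subject's series with an intervention indicator, can be sketched as an interrupted time-series fit. The data, model order, and effect size below are invented for illustration only.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Invented single-subject series: 30 baseline and 30 intervention observations
# of blood glucose; only the structure of the analysis mirrors the abstract.
rng = np.random.default_rng(0)
n_base, n_int = 30, 30
y = np.concatenate([160 + rng.normal(0, 10, n_base),
                    140 + rng.normal(0, 10, n_int)])
intervention = np.concatenate([np.zeros(n_base), np.ones(n_int)])

# ARIMA(1,0,0) with the intervention dummy as an exogenous regressor; its
# coefficient is the estimated within-person intervention effect.
res = ARIMA(y, exog=intervention, order=(1, 0, 0)).fit()
print(res.params)   # includes the intervention effect and the AR(1) term
```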
2018-01-01
Signaling pathways represent parts of the global biological molecular network which connects them into a seamless whole through complex direct and indirect (hidden) crosstalk whose structure can change during development or in pathological conditions. We suggest a novel methodology, called Googlomics, for the structural analysis of directed biological networks using spectral analysis of their Google matrices, drawing on parallels with quantum scattering theory developed for nuclear and mesoscopic physics and quantum chaos. We introduce the analytical “reduced Google matrix” method for the analysis of biological network structure. The method allows inferring hidden causal relations between the members of a signaling pathway or a functionally related group of genes. We investigate how the structure of hidden causal relations can be reprogrammed as a result of changes in the transcriptional network layer during carcinogenesis. The suggested Googlomics approach rigorously characterizes complex systemic changes in the wiring of large causal biological networks in a computationally efficient way. PMID:29370181
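For readers unfamiliar with the underlying object, the snippet below constructs a standard Google matrix for a toy directed network and iterates to its PageRank vector. The damping factor and network are illustrative assumptions, and the paper's reduced Google matrix (which infers the hidden links) is not reproduced here.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Standard Google matrix G = alpha*S + (1-alpha)/N, where S is the
    column-stochastic transition matrix of the directed network and dangling
    nodes are replaced by uniform columns."""
    A = np.asarray(adj, dtype=float).T            # column j lists links leaving node j
    out_deg = A.sum(axis=0)
    S = np.where(out_deg > 0, A / np.where(out_deg > 0, out_deg, 1.0), 1.0 / len(A))
    return alpha * S + (1.0 - alpha) / len(A)

# Toy directed network: adj[i, j] = 1 means a link from node i to node j.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]])
G = google_matrix(adj)

# PageRank vector = leading eigenvector of G, obtained by power iteration.
p = np.ones(3) / 3
for _ in range(100):
    p = G @ p
print(p / p.sum())
```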
Material Barriers to Diffusive Mixing
NASA Astrophysics Data System (ADS)
Haller, George; Karrasch, Daniel
2017-11-01
Transport barriers, as zero-flux surfaces, are ill-defined in purely advective mixing, in which the flux of any passive scalar is zero through all material surfaces. For this reason, Lagrangian Coherent Structures (LCSs) have been argued to play the role of mixing barriers as the most repelling, attracting or shearing material lines. These three kinematic concepts, however, can also be defined in different ways, both within rigorous mathematical treatments and within the realm of heuristic diagnostics. This has led to an ever-growing number of different LCS methods, each generally identifying different objects as transport barriers. In this talk, we examine which of these methods have actual relevance for diffusive transport barriers. The latter barriers are arguably the practically relevant inhibitors of the mixing of physically relevant tracers, such as temperature, salinity, vorticity or potential vorticity. We demonstrate the role of the most effective diffusion barriers in analytical examples and observational data. Supported in part by the DFG Priority Program on Turbulent Superstructures.
NASA Astrophysics Data System (ADS)
Sun, Yimin; Verschuur, Eric; van Borselen, Roald
2018-03-01
The Rayleigh integral solution of the acoustic Helmholtz equation in a homogeneous medium can only be applied when the integral surface is a planar surface, while in reality almost all surfaces where pressure waves are measured exhibit some curvature. In this paper we derive a theoretically rigorous way of building propagation operators for pressure waves on an arbitrarily curved surface. Our theory is still based upon the Rayleigh integral, but it resorts to matrix inversion to overcome the limitations faced by the Rayleigh integral. Three examples are used to demonstrate the correctness of our theory: propagation of pressure waves acquired on an arbitrarily curved surface to a planar surface, on an arbitrarily curved surface to another arbitrarily curved surface, and on a spherical cap to a planar surface; in all cases the results agree well with the analytical solutions. The generalization of our method to particle velocities and the calculation cost of our method are also discussed.
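As a loose, self-contained illustration of the "discretize a propagation operator, then invert it" idea (the geometry, wavenumber, and Tikhonov regularization below are assumptions, not the paper's formulation), pressure data on a curved surface can be related to an equivalent planar datum by inverting a matrix of free-space Green's functions:

```python
# Hedged sketch only: a generic regularized inversion of a discretized propagation operator,
# not the paper's Rayleigh-integral construction. Grid sizes and parameters are arbitrary.
import numpy as np

k = 2 * np.pi / 10.0                              # wavenumber for a 10 m wavelength
x = np.linspace(0, 200, 41)                       # lateral positions (m)
z_plane = np.zeros_like(x)                        # target planar datum at z = 0
z_curve = 5.0 * np.sin(2 * np.pi * x / 200.0)     # curved acquisition surface

def greens_matrix(xs, zs, xr, zr):
    """Free-space Helmholtz Green's functions between source points (xs, zs) and receivers (xr, zr)."""
    r = np.sqrt((xr[:, None] - xs[None, :]) ** 2 + (zr[:, None] - zs[None, :]) ** 2) + 1e-6
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Forward operator: field on the curved surface generated by equivalent sources on the plane.
A = greens_matrix(x, z_plane, x, z_curve)

# Synthetic "measured" data on the curved surface from a point source at depth.
p_curve = greens_matrix(np.array([100.0]), np.array([80.0]), x, z_curve)[:, 0]

# Regularized (Tikhonov) inversion to recover the equivalent planar-datum field.
eps = 1e-3 * np.linalg.norm(A, 2) ** 2
p_plane = np.linalg.solve(A.conj().T @ A + eps * np.eye(len(x)), A.conj().T @ p_curve)
print(np.round(np.abs(p_plane[:5]), 4))
```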
Confinement with Perturbation Theory, After All?
NASA Astrophysics Data System (ADS)
Hoyer, Paul
2015-09-01
I call attention to the possibility that QCD bound states (hadrons) could be derived using rigorous Hamiltonian, perturbative methods. Solving Gauss' law for A^0 with a non-vanishing boundary condition at spatial infinity gives a linear potential for color singlet and qqq states. These states are Poincaré and gauge covariant and thus can serve as initial states of a perturbative expansion, replacing the conventional free in and out states. The coupling freezes at , allowing reasonable convergence. The bound states have a sea of pairs, while transverse gluons contribute only at . Pair creation in the linear A^0 potential leads to string breaking and hadron loop corrections. These corrections give finite widths to excited states, as required by unitarity. Several of these features have been verified analytically in D = 1 + 1 dimensions, and some in D = 3 + 1.
Inspection method for the identification of TBT-containing antifouling paints.
Senda, Tetsuya; Miyata, Osamu; Kihara, Takeshi; Yamada, Yasujiro
2003-04-01
In order to ensure the effectiveness of the international convention which will prohibit the use of organotin compounds in antifouling paints applied to ships, it is essential to establish an inspection system to determine the presence of the prohibited compounds in the paint. In the present study, a method for the identification of organotin-containing antifouling paints using a two-stage analysis process is investigated. Firstly, X-ray fluorescence analysis (XRF) is utilized, which can be used on site during ship surveys or port state control. Using a portable XRF instrument customized for ship inspection, analysis is automatically executed and determines whether tin is present or not. If the presence of tin is confirmed by XRF, the sample is subsequently examined at an analytical laboratory using more rigorous analytical techniques, such as gas chromatography-mass spectrometry (GC-MS). A sampling device has been designed. It is a disc of approximately 10 mm diameter and has abrasive paper pasted to one of its flat surfaces. The device is pressed onto and then slid along a ship hull to lightly scrape off fragments of paint onto the abrasive paper. Preliminary field tests have revealed that sampling from a ship in dock yields successful collection of the paint for XRF analysis and that the resultant damage caused to the antifouling paint surface by the sampling technique is negligible.
Assessing Diet as a Modifiable Risk Factor for Pesticide Exposure
Oates, Liza; Cohen, Marc
2011-01-01
The effects of pesticides on the general population, largely as a result of dietary exposure, are unclear. Adopting an organic diet appears to be an obvious solution for reducing dietary pesticide exposure and this is supported by biomonitoring studies in children. However, results of research into the effects of organic diets on pesticide exposure are difficult to interpret in light of the many complexities. Therefore future studies must be carefully designed. While biomonitoring can account for differences in overall exposure it cannot necessarily attribute the source. Due diligence must be given to appropriate selection of participants, target pesticides and analytical methods to ensure that the data generated will be both scientifically rigorous and clinically useful, while minimising the costs and difficulties associated with biomonitoring studies. Study design must also consider confounders such as the unpredictable nature of chemicals and inter- and intra-individual differences in exposure and other factors that might influence susceptibility to disease. Currently the most useful measures are non-specific urinary metabolites that measure a range of organophosphate and synthetic pyrethroid insecticides. These pesticides are in common use, frequently detected in population studies and may provide a broader overview of the impact of an organic diet on pesticide exposure than pesticide-specific metabolites. More population based studies are needed for comparative purposes and improvements in analytical methods are required before many other compounds can be considered for assessment. PMID:21776202
Comparison of analytical methods for profiling N- and O-linked glycans from cultured cell lines
Togayachi, Akira; Azadi, Parastoo; Ishihara, Mayumi; Geyer, Rudolf; Galuska, Christina; Geyer, Hildegard; Kakehi, Kazuaki; Kinoshita, Mitsuhiro; Karlsson, Niclas G.; Jin, Chunsheng; Kato, Koichi; Yagi, Hirokazu; Kondo, Sachiko; Kawasaki, Nana; Hashii, Noritaka; Kolarich, Daniel; Stavenhagen, Kathrin; Packer, Nicolle H.; Thaysen-Andersen, Morten; Nakano, Miyako; Taniguchi, Naoyuki; Kurimoto, Ayako; Wada, Yoshinao; Tajiri, Michiko; Yang, Pengyuan; Cao, Weiqian; Li, Hong; Rudd, Pauline M.; Narimatsu, Hisashi
2016-01-01
The Human Disease Glycomics/Proteome Initiative (HGPI) is an activity in the Human Proteome Organization (HUPO) supported by leading researchers from international institutes and aims at development of disease-related glycomics/glycoproteomics analysis techniques. Since 2004, the initiative has conducted three pilot studies. The first two were N- and O-glycan analyses of purified transferrin and immunoglobulin-G and assessed the most appropriate analytical approach employed at the time. This paper describes the third study, which was conducted to compare different approaches for quantitation of N- and O-linked glycans attached to proteins in crude biological samples. The preliminary analysis on cell pellets resulted in wildly varied glycan profiles, which was probably the consequence of variations in the pre-processing sample preparation methodologies. However, the reproducibility of the data was not improved dramatically in the subsequent analysis on cell lysate fractions prepared in a specified method by one lab. The study demonstrated the difficulty of carrying out a complete analysis of the glycome in crude samples by any single technology and the importance of rigorous optimization of the course of analysis from preprocessing to data interpretation. It suggests that another collaborative study employing the latest technologies in this rapidly evolving field will help to realize the requirements of carrying out the large-scale analysis of glycoproteins in complex cell samples. PMID:26511985
Correlated scattering states of N-body Coulomb systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berakdar, J.
1997-03-01
For N charged particles of equal masses moving in the field of a heavy residual charge, an approximate analytical solution of the many-body time-independent Schrödinger equation is derived at a total energy above the complete fragmentation threshold. All continuum particles are treated on equal footing. The proposed correlated wave function represents, to leading order, an exact solution of the many-body Schrödinger equation in the asymptotic region defined by large interparticle separations. Thus, in this asymptotic region the N-body Coulomb modifications to the plane-wave motion of free particles are rigorously estimated. It is shown that the Kato cusp conditions are satisfied by the derived wave function at all two-body coalescence points. An expression of the normalization of this wave function is also given. To render possible the calculations of scattering amplitudes for transitions leading to a four-body scattering state, an effective-charge method is suggested in which the correlations between the continuum particles are completely subsumed into effective interactions with the residual charge. Analytical expressions for these effective interactions are derived and discussed for physical situations.
Astashkin, Andrei V; Feng, Changjian
2015-11-12
The production of nitric oxide by the nitric oxide synthase (NOS) enzyme depends on the interdomain electron transfer (IET) between the flavin mononucleotide (FMN) and heme domains. Although the rate of this IET has been measured by laser flash photolysis (LFP) for various NOS proteins, no rigorous analysis of the relevant kinetic equations had been performed until now. In this work, we provide an analytical solution of the kinetic equations underlying the LFP approach. The derived expressions reveal that the bulk IET rate is significantly affected by the conformational dynamics that determines the formation and dissociation rates of the docking complex between the FMN and heme domains. We show that in order to informatively study the electron transfer across the NOS enzyme, LFP should be used in combination with other spectroscopic methods that could directly probe the docking equilibrium and the conformational change rate constants. The implications of the obtained analytical expressions for the interpretation of the LFP results from various native and modified NOS proteins are discussed. The mathematical formulas derived in this work should also be applicable for interpreting the IET kinetics in other modular redox enzymes.
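A minimal sketch of the kind of kinetic scheme involved (the three-state layout and rate constants are illustrative assumptions, not the authors' derived expressions): the observable decay rates mix the docking/dissociation and electron-transfer steps, so the bulk IET rate generally differs from the intrinsic electron-transfer rate.

```python
# Hedged toy model: FMN domain docks onto the heme domain (k_on), dissociates (k_off),
# and transfers an electron only from the docked state (k_et). dp/dt = K p.
import numpy as np

k_on, k_off, k_et = 50.0, 200.0, 500.0      # s^-1, illustrative magnitudes only
# States: [undocked, docked, product]
K = np.array([
    [-k_on,        k_off,          0.0],
    [ k_on, -(k_off + k_et),       0.0],
    [  0.0,         k_et,          0.0],
])
rates = sorted(-np.linalg.eigvals(K).real)   # decay rates of the kinetic modes (one is ~0)
print("kinetic decay rates (s^-1):", np.round(rates, 1))
print("intrinsic k_et           :", k_et)    # the slowest observed rate is well below k_et here
```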
Current to the ionosphere following a lightning stroke
NASA Technical Reports Server (NTRS)
Hale, L. C.; Baginski, M. E.
1987-01-01
A simple analytical expression for calculating the total current waveform to the ionosphere after a lightning stroke is derived. The validity of this expression is demonstrated by comparison with a more rigorous computer solution of Maxwell's equations. The analytic model demonstrates that the temporal variation of the current induced in the ionosphere and global circuit and the corresponding return current in the earth depends on the conductivity profile at intervening altitudes in the middle atmosphere. A conclusion is that capacitive coupling may provide tighter coupling between the lower atmosphere and the ionosphere than usually considered, in both directions; this may help to explain observations suggesting that magnetospheric phenomena can in some instances trigger lightning.
Iterative categorization (IC): a systematic technique for analysing qualitative data
2016-01-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155
NASA Astrophysics Data System (ADS)
Beneš, Michal; Pažanin, Igor
2018-03-01
This paper reports an analytical investigation of non-isothermal fluid flow in a thin (or long) vertical pipe filled with a porous medium via asymptotic analysis. We assume that the fluid inside the pipe is cooled (or heated) by the surrounding medium and that the flow is governed by the prescribed pressure drop between the pipe's ends. Starting from the dimensionless Darcy-Brinkman-Boussinesq system, we formally derive a macroscopic model describing the effective flow at small Brinkman-Darcy number. The asymptotic approximation is given by the explicit formulae for the velocity, pressure and temperature, clearly acknowledging the effects of the cooling (heating) and porous structure. The theoretical error analysis is carried out to indicate the order of accuracy and to provide a rigorous justification of the effective model.
Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid
2014-01-01
Background: Policy decisions regarding climate change mitigation are increasingly incorporating the beneficial and adverse health impacts of greenhouse gas emission reduction strategies. Studies of such co-benefits and co-harms involve modeling approaches requiring a range of analytic decisions that affect the model output. Objective: Our objective was to assess analytic decisions regarding model framework, structure, choice of parameters, and handling of uncertainty when modeling health co-benefits, and to make recommendations for improvements that could increase policy uptake. Methods: We describe the assumptions and analytic decisions underlying models of mitigation co-benefits, examining their effects on modeling outputs, and consider tools for quantifying uncertainty. Discussion: There is considerable variation in approaches to valuation metrics, discounting methods, uncertainty characterization and propagation, and assessment of low-probability/high-impact events. There is also variable inclusion of adverse impacts of mitigation policies, and limited extension of modeling domains to include implementation considerations. Going forward, co-benefits modeling efforts should be carried out in collaboration with policy makers; these efforts should include the full range of positive and negative impacts and critical uncertainties, as well as a range of discount rates, and should explicitly characterize uncertainty. We make recommendations to improve the rigor and consistency of modeling of health co-benefits. Conclusion: Modeling health co-benefits requires systematic consideration of the suitability of model assumptions, of what should be included and excluded from the model framework, and how uncertainty should be treated. Increased attention to these and other analytic decisions has the potential to increase the policy relevance and application of co-benefits modeling studies, potentially helping policy makers to maximize mitigation potential while simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270
Formal and physical equivalence in two cases in contemporary quantum physics
NASA Astrophysics Data System (ADS)
Fraser, Doreen
2017-08-01
The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation, a mathematical transformation that takes the time variable t to negative imaginary time, was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible. That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.
Viswanathan, Sekarbabu; Verma, P R P; Ganesan, Muniyandithevar; Manivannan, Jeganathan
2017-07-15
Omega-3 fatty acids are clinically useful and the two marine omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are prevalent in fish and fish oils. Omega-3 fatty acid formulations should undergo a rigorous regulatory step in order to obtain United States Food and Drug Administration (USFDA) approval as a prescription drug. In connection with that, in addition to quantifying EPA and DHA fatty acids, there is a need to quantify their ethyl esters in biological samples. In this study, we make use of a reverse-phase high-performance liquid chromatography coupled with mass spectrometry (RP-HPLC-MS) technique for method development. Here, we have developed a novel multiple reaction monitoring method along with optimized parameters for quantification of EPA and DHA as ethyl esters. Additionally, we attempted to validate the bio-analytical method by conducting sensitivity, selectivity, precision and accuracy, carryover, and matrix stability experiments. Furthermore, we also implemented our validated method for evaluation of pharmacokinetics of omega fatty acid ethyl ester formulations. Copyright © 2017 Elsevier B.V. All rights reserved.
Semidefinite Relaxation-Based Optimization of Multiple-Input Wireless Power Transfer Systems
NASA Astrophysics Data System (ADS)
Lang, Hans-Dieter; Sarris, Costas D.
2017-11-01
An optimization procedure for multi-transmitter (MISO) wireless power transfer (WPT) systems based on tight semidefinite relaxation (SDR) is presented. This method ensures physical realizability of MISO WPT systems designed via convex optimization, a robust, semi-analytical, and intuitive route to optimizing such systems. To that end, the nonconvex constraints requiring that power is fed into rather than drawn from the system via all transmitter ports are incorporated in a convex semidefinite relaxation, which is efficiently and reliably solvable by dedicated algorithms. A test of the solution then confirms that this modified problem is equivalent (tight relaxation) to the original (nonconvex) one and that the true global optimum has been found. This is a clear advantage over global optimization methods (e.g. genetic algorithms), where convergence to the true global optimum cannot be ensured or tested. Discussions of numerical results yielded by both the closed-form expressions and the refined technique illustrate the importance and practicability of the new method. It is shown that this technique offers a rigorous optimization framework for a broad range of current and emerging WPT applications.
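The sketch below shows the generic tight-SDR pattern referred to above, not the authors' exact constraint set: the excitation outer product x x^H is relaxed to a positive semidefinite matrix X, the resulting SDP is solved, and a rank-1 check on X confirms that the relaxation is tight. The coupling matrix A and the power budget are made-up placeholders.

```python
# Hedged SDR sketch: maximize delivered power tr(A X) over X ~ x x^H with a power budget.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 4                                            # number of transmitter ports
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M.conj().T @ M                               # Hermitian PSD "power-transfer" matrix (made up)
P_max = 1.0                                      # total input-power budget

X = cp.Variable((n, n), hermitian=True)
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(A @ X))),
                  [X >> 0, cp.real(cp.trace(X)) <= P_max])
prob.solve()

w, V = np.linalg.eigh(X.value)
print("objective:", prob.value)
print("eigenvalues of X (rank-1 => tight relaxation):", np.round(w, 6))
x_opt = np.sqrt(w[-1]) * V[:, -1]                # recovered excitation vector if X is rank-1
```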
Jim Starnes' Contributions to Residual Strength Analysis Methods for Metallic Structures
NASA Technical Reports Server (NTRS)
Young, Richard D.; Rose, Cheryl A.; Harris, Charles E.
2005-01-01
A summary of advances in residual strength analyses methods for metallic structures that were realized under the leadership of Dr. James H. Starnes, Jr., is presented. The majority of research led by Dr. Starnes in this area was conducted in the 1990's under the NASA Airframe Structural Integrity Program (NASIP). Dr. Starnes, respectfully referred to herein as Jim, had a passion for studying complex response phenomena and dedicated a significant amount of research effort toward advancing damage tolerance and residual strength analysis methods for metallic structures. Jim's efforts were focused on understanding damage propagation in built-up fuselage structure with widespread fatigue damage, with the goal of ensuring safety in the aging international commercial transport fleet. Jim's major contributions in this research area were in identifying the effects of combined internal pressure and mechanical loads, and geometric nonlinearity, on the response of built-up structures with damage. Analytical and experimental technical results are presented to demonstrate the breadth and rigor of the research conducted in this technical area. Technical results presented herein are drawn exclusively from papers where Jim was a co-author.
A new method to evaluate human-robot system performance
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Weisbin, C. R.
2003-01-01
One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.
Krompecher, T; Fryc, O
1978-01-01
The use of new methods and an appropriate apparatus has allowed us to make successive measurements of rigor mortis and a study of its evolution in the rat. By a comparative examination on the front and hind limbs, we have determined the following: (1) The muscular mass of the hind limbs is 2.89 times greater than that of the front limbs. (2) In the initial phase rigor mortis is more pronounced in the front limbs. (3) The front and hind limbs reach maximum rigor mortis at the same time and this state is maintained for 2 hours. (4) Resolution of rigor mortis is accelerated in the front limbs during the initial phase, but both front and hind limbs reach complete resolution at the same time.
Buckling of a stiff thin film on an elastic graded compliant substrate.
Chen, Zhou; Chen, Weiqiu; Song, Jizhou
2017-12-01
The buckling of a stiff film on a compliant substrate has attracted much attention due to its wide applications such as thin-film metrology, surface patterning and stretchable electronics. An analytical model is established for the buckling of a stiff thin film on a semi-infinite elastic graded compliant substrate subjected to in-plane compression. The critical compressive strain and buckling wavelength for the sinusoidal mode are obtained analytically for the case with the substrate modulus decaying exponentially. The rigorous finite element analysis (FEA) is performed to validate the analytical model and investigate the postbuckling behaviour of the system. The critical buckling strain for the period-doubling mode is obtained numerically. The influences of various material parameters on the results are investigated. These results are helpful to provide physical insights on the buckling of elastic graded substrate-supported thin film.
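For orientation (an assumption-labeled baseline, not the graded-substrate result derived in the paper), the classical formulas for a stiff film on a homogeneous compliant half-space give the critical strain and wrinkle wavelength that the exponential grading then modifies:

```python
# Baseline sanity check: classical homogeneous-substrate wrinkling formulas.
# Material values are illustrative (PDMS-like substrate, metal-like film).
import numpy as np

E_f, nu_f, h_f = 130e9, 0.30, 100e-9      # film modulus (Pa), Poisson ratio, thickness (m)
E_s, nu_s = 2e6, 0.48                     # substrate modulus (Pa), Poisson ratio

Ef_bar = E_f / (1 - nu_f**2)              # plane-strain moduli
Es_bar = E_s / (1 - nu_s**2)

eps_c = 0.25 * (3 * Es_bar / Ef_bar) ** (2.0 / 3.0)                # critical compressive strain
wavelength = 2 * np.pi * h_f * (Ef_bar / (3 * Es_bar)) ** (1.0 / 3.0)   # sinusoidal wrinkle wavelength

print(f"critical strain ~ {eps_c:.2e}, wrinkle wavelength ~ {wavelength*1e6:.1f} um")
```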
Are computational models of any use to psychiatry?
Huys, Quentin J M; Moutoussis, Michael; Williams, Jonathan
2011-08-01
Mathematically rigorous descriptions of key hypotheses and theories are becoming more common in neuroscience and are beginning to be applied to psychiatry. In this article two fictional characters, Dr. Strong and Mr. Micawber, debate the use of such computational models (CMs) in psychiatry. We present four fundamental challenges to the use of CMs in psychiatry: (a) the applicability of mathematical approaches to core concepts in psychiatry such as subjective experiences, conflict and suffering; (b) whether psychiatry is mature enough to allow informative modelling; (c) whether theoretical techniques are powerful enough to approach psychiatric problems; and (d) the issue of communicating clinical concepts to theoreticians and vice versa. We argue that CMs have yet to influence psychiatric practice, but that they help psychiatric research in two fundamental ways: (a) to build better theories integrating psychiatry with neuroscience; and (b) to enforce explicit, global and efficient testing of hypotheses through more powerful analytical methods. CMs allow the complexity of a hypothesis to be rigorously weighed against the complexity of the data. The paper concludes with a discussion of the path ahead. It points to stumbling blocks, like the poor communication between theoretical and medical communities. But it also identifies areas in which the contributions of CMs will likely be pivotal, like an understanding of social influences in psychiatry, and of the co-morbidity structure of psychiatric diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.
How to Help Students Conceptualize the Rigorous Definition of the Limit of a Sequence
ERIC Educational Resources Information Center
Roh, Kyeong Hah
2010-01-01
This article suggests an activity, called the epsilon-strip activity, as an instructional method for conceptualization of the rigorous definition of the limit of a sequence via visualization. The article also describes the learning objectives of each instructional step of the activity, and then provides detailed instructional methods to guide…
Scientific rigor through videogames.
Treuille, Adrien; Das, Rhiju
2014-11-01
Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly. Copyright © 2014 Elsevier Ltd. All rights reserved.
Rigorous simulations of a helical core fiber by the use of transformation optics formalism.
Napiorkowski, Maciej; Urbanczyk, Waclaw
2014-09-22
We report for the first time on rigorous numerical simulations of a helical-core fiber by using a full vectorial method based on the transformation optics formalism. We modeled the dependence of circular birefringence of the fundamental mode on the helix pitch and analyzed the effect of a birefringence increase caused by the mode displacement induced by a core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first-order modes, including polarization and intensity distribution. Finally, we show that the rigorous vectorial method predicts the confinement loss of the guided modes more accurately than approximate methods based on equivalent in-plane bending models.
Naval Special Warfare: Identifying and Prioritizing Core Attributes of the Profession
2014-12-01
literature review supports the need for research that is designed to advance the NSW profession. Analytic rigor could inform how NSW should refine its... B. SKILLS REQUIRED - WHAT (QUESTIONS 10–25, 26, 30): This research was designed to answer the following question: "What are the practical skills..." (NSW PRODEV: Naval Special Warfare Professional Development)
The US Army War College: Gearing Up for the 21st Century
1988-12-01
modality. The Committee also identified a need for increased emphasis on active learning as well as greater academic rigor and challenge...and to involve them more directly in an active learning process, case studies, exercises, gaming, and analytical discussions have been increased...in the active learning process; that most challenging test which occurs when the student is performing or reciting in front of his or her peers as
Disciplining Bioethics: Towards a Standard of Methodological Rigor in Bioethics Research
Adler, Daniel; Shaul, Randi Zlotnik
2012-01-01
Contemporary bioethics research is often described as multi- or interdisciplinary. Disciplines are characterized, in part, by their methods. Thus, when bioethics research draws on a variety of methods, it crosses disciplinary boundaries. Yet each discipline has its own standard of rigor—so when multiple disciplinary perspectives are considered, what constitutes rigor? This question has received inadequate attention, as there is considerable disagreement regarding the disciplinary status of bioethics. This disagreement has presented five challenges to bioethics research. Addressing them requires consideration of the main types of cross-disciplinary research, and consideration of proposals aiming to ensure rigor in bioethics research. PMID:22686634
NASA Astrophysics Data System (ADS)
Sirikham, Adisorn; Zhao, Yifan; Mehnen, Jörn
2017-11-01
Thermography is a promising method for detecting subsurface defects, but accurate measurement of defect depth is still a big challenge because thermographic signals are typically corrupted by imaging noise and affected by 3D heat conduction. Existing methods based on numerical models are susceptible to signal noise, and methods based on analytical models require rigorous assumptions that usually cannot be satisfied in practical applications. This paper presents a new method to improve the measurement accuracy of subsurface defect depth by determining directly from observed data the thermal wave reflection coefficient, which is usually assumed to be known in advance. This is achieved by introducing a new heat transfer model that includes multiple physical parameters to better describe the observed thermal behaviour in pulsed thermographic inspection. Numerical simulations are used to evaluate the performance of the proposed method against four selected state-of-the-art methods. Results show that the accuracy of depth measurement has been improved by up to 10% when the noise level is high and the thermal wave reflection coefficient is low. The feasibility of the proposed method on real data is also validated through a case study on characterising flat-bottom holes in carbon fibre reinforced polymer (CFRP) laminates, a material with wide application across industry sectors.
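A simple, assumption-laden sketch of the estimation idea (using the standard one-dimensional image-source model rather than the paper's own multi-parameter model): fit the post-flash surface temperature for both the defect depth and the thermal wave reflection coefficient instead of fixing the coefficient in advance.

```python
# Hedged illustration: 1-D image-source model of front-surface cooling after a flash, with
# depth and reflection coefficient R fitted from synthetic noisy data. Values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

alpha = 4.2e-7                     # through-thickness diffusivity of CFRP, m^2/s (typical value)
t = np.linspace(0.05, 10.0, 300)   # time after the flash, s

def surface_temperature(t, depth, R, scale, n_terms=20):
    series = sum((R ** n) * np.exp(-(n * depth) ** 2 / (alpha * t)) for n in range(1, n_terms + 1))
    return scale / np.sqrt(np.pi * t) * (1.0 + 2.0 * series)

# Synthetic "measurement" over a 1.2 mm deep flat-bottom hole with R = 0.95, plus noise.
rng = np.random.default_rng(2)
data = surface_temperature(t, 1.2e-3, 0.95, 1.0) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(surface_temperature, t, data, p0=[1.0e-3, 0.8, 1.0])
print("fitted depth (mm), R, scale:", np.round(popt * np.array([1e3, 1, 1]), 3))
```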
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle
2014-12-15
The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium as a known single interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green’s function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. Besides, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.
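For the filtering step mentioned at the end, here is a stand-alone sketch of Lanczos (sigma) averaging applied to a truncated Fourier series of a square wave; the damping factors sigma_k = sinc(k/N) strongly reduce the Gibbs overshoot. This illustrates only the filter itself, not the inverse-scattering computation.

```python
# Lanczos sigma averaging on a truncated Fourier series of a square wave.
import numpy as np

N = 32                                            # truncation order
x = np.linspace(-np.pi, np.pi, 1001)

def partial_sum(x, N, lanczos=False):
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):                  # odd harmonics of the square wave
        sigma = np.sinc(k / N) if lanczos else 1.0   # np.sinc(z) = sin(pi z)/(pi z)
        s += sigma * (4 / (np.pi * k)) * np.sin(k * x)
    return s

raw, filtered = partial_sum(x, N), partial_sum(x, N, lanczos=True)
print("max overshoot, raw     :", np.round(np.max(raw) - 1, 3))      # Gibbs overshoot ~ 0.18
print("max overshoot, filtered:", np.round(np.max(filtered) - 1, 3))  # strongly suppressed
```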
Rigorous Numerical Study of Low-Period Windows for the Quadratic Map
NASA Astrophysics Data System (ADS)
Galias, Zbigniew
An efficient method to find all low-period windows for the quadratic map is proposed. The method is used to obtain very accurate rigorous bounds of positions of all periodic windows with periods p ≤ 32. The contribution of period-doubling windows on the total width of periodic windows is discussed. Properties of periodic windows are studied numerically.
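The paper's bounds are obtained with rigorous interval arithmetic; the plain floating-point sketch below only illustrates where low-period windows sit, by locating superstable parameters c of the quadratic map x -> x^2 + c (solutions of f_c^p(0) = 0) and filtering out parameters that belong to divisor periods.

```python
# Non-rigorous floating-point illustration of locating superstable parameters of x -> x^2 + c.
import numpy as np

def critical_orbit(c, p):
    x = 0.0
    for _ in range(p):
        x = x * x + c
    return x

def superstable_params(p, n_grid=200001):
    cs = np.linspace(-2.0, 0.25, n_grid)
    vals = np.array([critical_orbit(c, p) for c in cs])
    roots = []
    for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):   # bracketed sign changes
        lo, hi = cs[i], cs[i + 1]
        for _ in range(60):                                             # bisection refinement
            mid = 0.5 * (lo + hi)
            if np.sign(critical_orbit(mid, p)) == np.sign(critical_orbit(lo, p)):
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

for p in (1, 2, 3, 4, 5):
    found = superstable_params(p)
    lower = {c for q in range(1, p) if p % q == 0 for c in superstable_params(q)}
    own = [c for c in found if all(abs(c - c0) > 1e-9 for c0 in lower)]   # drop divisor periods
    print(f"period {p}: {len(own)} superstable parameter(s)", np.round(own, 6))
```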
NASA Astrophysics Data System (ADS)
Pandey, Manoj Kumar; Ramachandran, Ramesh
2010-03-01
The application of solid-state NMR methodology for bio-molecular structure determination requires the measurement of constraints in the form of 13C-13C and 13C-15N distances, torsion angles and, in some cases, correlation of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited due to lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to their biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine analytical two-spin models currently employed in the estimation of 13C-13C distances based on the rotational resonance (R2) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R2 experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R2 experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.
NASA Technical Reports Server (NTRS)
Perry, J. L.; James, J. T.; Cole, H. E.; Limero, T. F.; Beck, S. W.
1997-01-01
Collection and analysis of spacecraft cabin air samples are necessary to assess the cabin air quality with respect to crew health. Both toxicology and engineering disciplines work together to achieve an acceptably clean cabin atmosphere. Toxicology is concerned with limiting the risk to crew health from chemical sources, setting exposure limits, and analyzing air samples to determine how well these limits are met. Engineering provides the means for minimizing the contribution of the various contaminant generating sources by providing active contamination control equipment on board spacecraft and adhering to a rigorous material selection and control program during the design and construction of the spacecraft. A review of the rationale and objectives for sampling spacecraft cabin atmospheres is provided. The presently-available sampling equipment and methods are reviewed along with the analytical chemistry methods employed to determine trace contaminant concentrations. These methods are compared and assessed with respect to actual cabin air quality monitoring needs. Recommendations are presented with respect to the basic sampling program necessary to ensure an acceptably clean spacecraft cabin atmosphere. Also, rationale and recommendations for expanding the scope of the basic monitoring program are discussed.
How to conduct a qualitative meta-analysis: Tailoring methods to enhance methodological integrity.
Levitt, Heidi M
2018-05-01
Although qualitative research has long been of interest in the field of psychology, meta-analyses of qualitative literatures (sometimes called meta-syntheses) are still quite rare. Like quantitative meta-analyses, these methods function to aggregate findings and identify patterns across primary studies, but their aims, procedures, and methodological considerations may vary. This paper explains the function of qualitative meta-analyses and their methodological development. Recommendations have broad relevance but are framed with an eye toward their use in psychotherapy research. Rather than arguing for the adoption of any single meta-method, this paper advocates for considering how procedures can best be selected and adapted to enhance a meta-study's methodological integrity. Through the paper, recommendations are provided to help researchers identify procedures that can best serve their studies' specific goals. Meta-analysts are encouraged to consider the methodological integrity of their studies in relation to central research processes, including identifying a set of primary research studies, transforming primary findings into initial units of data for a meta-analysis, developing categories or themes, and communicating findings. The paper provides guidance for researchers who desire to tailor meta-analytic methods to meet their particular goals while enhancing the rigor of their research.
A Theory of Material Spike Formation in Flow Separation
NASA Astrophysics Data System (ADS)
Serra, Mattia; Haller, George
2017-11-01
We develop a frame-invariant theory of material spike formation during flow separation over a no-slip boundary in two-dimensional flows with arbitrary time dependence. This theory identifies both fixed and moving separation, is effective also over short-time intervals, and admits a rigorous instantaneous limit. Our theory is based on topological properties of material lines, combining objectively stretching- and rotation-based kinematic quantities. The separation profile identified here serves as the theoretical backbone for the material spike from its birth to its fully developed shape, and remains hidden to existing approaches. Finally, our theory can be used to rigorously explain the perception of off-wall separation in unsteady flows, and more importantly, provide the conditions under which such a perception is justified. We illustrate our results in several examples including steady, time-periodic and unsteady analytic velocity fields with flat and curved boundaries, and an experimental dataset.
von Lilienfeld, O. Anatole
2013-02-26
A well-defined notion of chemical compound space (CCS) is essential for gaining rigorous control of properties through variation of elemental composition and atomic configurations. Here, we give an introduction to an atomistic first principles perspective on CCS. First, CCS is discussed in terms of variational nuclear charges in the context of conceptual density functional and molecular grand-canonical ensemble theory. Thereafter, we revisit the notion of compound pairs, related to each other via “alchemical” interpolations involving fractional nuclear charges in the electronic Hamiltonian. We address Taylor expansions in CCS, property nonlinearity, improved predictions using reference compound pairs, and the ounce-of-gold prize challenge to linearize CCS. Finally, we turn to machine learning of analytical structure-property relationships in CCS. Here, these relationships correspond to inferred, rather than derived through variational principle, solutions of the electronic Schrödinger equation.
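A toy sketch of the final point, the machine learning of structure-property relationships: kernel ridge regression with a Gaussian kernel, written out in plain NumPy. The descriptors and target below are synthetic stand-ins; real applications in chemical compound space use physically motivated representations and quantum-chemistry reference data.

```python
# Kernel ridge regression sketch on synthetic "compound descriptors".
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 5))                 # fake compound descriptors
y = np.sin(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5])) + 0.05 * rng.normal(size=200)

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

train, test = slice(0, 150), slice(150, 200)
K = gaussian_kernel(X[train], X[train])
weights = np.linalg.solve(K + 1e-6 * np.eye(K.shape[0]), y[train])   # ridge-regularized fit
y_pred = gaussian_kernel(X[test], X[train]) @ weights
print("test MAE:", np.round(np.mean(np.abs(y_pred - y[test])), 4))
```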
NASA Astrophysics Data System (ADS)
Sarkar, Biplab; Adhikari, Satrajit
If a coupled three-state electronic manifold forms a sub-Hilbert space, it is possible to express the non-adiabatic coupling (NAC) elements in terms of adiabatic-diabatic transformation (ADT) angles. Consequently, we demonstrate: (a) Those explicit forms of the NAC terms satisfy the Curl conditions with non-zero Divergences; (b) The formulation of an extended Born-Oppenheimer (EBO) equation for any three-state BO system is possible only when there exists a coordinate-independent ratio of the gradients for each pair of ADT angles leading to zero Curls at and around the conical intersection(s). With these analytic advancements, we formulate a rigorous EBO equation and explore its validity as well as necessity with respect to the approximate one (Sarkar and Adhikari, J Chem Phys 2006, 124, 074101) by performing numerical calculations on two different models constructed with different chosen forms of the NAC elements.
A survey on platforms for big data analytics.
Singh, Dilpreet; Reddy, Chandan K
The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation-level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
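For reference, a minimal serial version of the Lloyd iteration behind the k-means example discussed in the survey (this single-machine sketch is mine, not the survey's pseudocode); distributed implementations parallelize the assignment step and aggregate per-cluster sums across workers.

```python
# Minimal serial Lloyd's k-means on synthetic 2-D data.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]        # random initial centroids
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)                                # assignment step
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                     # convergence check
            break
        centers = new_centers                                     # update step
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 3.0, 6.0)])
centers, labels = kmeans(X, k=3)
print(np.round(centers, 2))
```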
Recent Advances in Durability and Damage Tolerance Methodology at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Ransom, J. B.; Glaessgen, E. H.; Raju, I. S.; Harris, C. E.
2007-01-01
Durability and damage tolerance (D&DT) issues are critical to the development of lighter, safer and more efficient aerospace vehicles. Durability is largely an economic life-cycle design consideration whereas damage tolerance directly addresses the structural airworthiness (safety) of the vehicle. Both D&DT methodologies must address the deleterious effects of changes in material properties and the initiation and growth of damage that may occur during the vehicle's service lifetime. The result of unanticipated D&DT response is often manifested in the form of catastrophic and potentially fatal accidents. As such, durability and damage tolerance requirements must be rigorously addressed for commercial transport aircraft and NASA spacecraft systems. This paper presents an overview of the recent and planned future research in durability and damage tolerance analytical and experimental methods for both metallic and composite aerospace structures at NASA Langley Research Center (LaRC).
NASA Astrophysics Data System (ADS)
Bizoń, Piotr; Chmaj, Tadeusz; Szpak, Nikodem
2011-10-01
We study dynamics near the threshold for blowup in the focusing nonlinear Klein-Gordon equation u_tt - u_xx + u - |u|^(2α)u = 0 on the line. Using mixed numerical and analytical methods we find that solutions starting from even initial data, fine-tuned to the threshold, are trapped by the static solution S for intermediate times. The details of trapping are shown to depend on the power α, namely, we observe fast convergence to S for α > 1, slow convergence for α = 1, and very slow (if any) convergence for 0 < α < 1. Our findings are complementary with respect to the recent rigorous analysis of the same problem (for α > 2) by Krieger, Nakanishi, and Schlag ["Global dynamics above the ground state energy for the one-dimensional NLKG equation," preprint arXiv:1011.1776 [math.AP]].
Jain, Ajay N.; Chin, Koei; Børresen-Dale, Anne-Lise; Erikstein, Bjorn K.; Lonning, Per Eystein; Kaaresen, Rolf; Gray, Joe W.
2001-01-01
We present a general method for rigorously identifying correlations between variations in large-scale molecular profiles and outcomes and apply it to chromosomal comparative genomic hybridization data from a set of 52 breast tumors. We identify two loci where copy number abnormalities are correlated with poor survival outcome (gain at 8q24 and loss at 9q13). We also identify a relationship between abnormalities at two loci and the mutational status of p53. Gain at 8q24 and loss at 5q15-5q21 are linked with mutant p53. The 9q and 5q losses suggest the possibility of gene products involved in breast cancer progression. The analytical techniques are general and also are applicable to the analysis of array-based expression data. PMID:11438741
Review of FD-TD numerical modeling of electromagnetic wave scattering and radar cross section
NASA Technical Reports Server (NTRS)
Taflove, Allen; Umashankar, Korada R.
1989-01-01
Applications of the finite-difference time-domain (FD-TD) method for numerical modeling of electromagnetic wave interactions with structures are reviewed, concentrating on scattering and radar cross section (RCS). A number of two- and three-dimensional examples of FD-TD modeling of scattering and penetration are provided. The objects modeled range in nature from simple geometric shapes to extremely complex aerospace and biological systems. Rigorous analytical or experimental validations are provided for the canonical shapes, and it is shown that FD-TD predictive data for near fields and RCS are in excellent agreement with the benchmark data. It is concluded that with continuing advances in FD-TD modeling theory for target features relevant to the RCS problems and in vector and concurrent supercomputer technology, it is likely that FD-TD numerical modeling will occupy an important place in RCS technology in the 1990s and beyond.
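A minimal one-dimensional FDTD sketch (normalized units, perfectly conducting ends, a soft Gaussian source) of the leapfrogged Yee updates that underlie the method reviewed above; actual RCS modeling adds 2-D/3-D grids, material models, absorbing boundaries, and near-to-far-field transforms.

```python
# 1-D FDTD with leapfrogged E/H updates on a staggered grid.
import numpy as np

nz, nt = 400, 900
Ex = np.zeros(nz)            # electric field samples
Hy = np.zeros(nz - 1)        # magnetic field samples, staggered between Ex nodes
courant = 0.5                # c*dt/dz, below the 1-D stability limit of 1

for n in range(nt):
    Hy += courant * (Ex[1:] - Ex[:-1])                 # H update from the curl of E
    Ex[1:-1] += courant * (Hy[1:] - Hy[:-1])           # E update from the curl of H
    Ex[100] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)   # soft Gaussian pulse source
    # Ex[0] and Ex[-1] stay zero: perfectly conducting (reflecting) ends for simplicity.

print("peak |Ex| after", nt, "steps:", round(float(np.abs(Ex).max()), 3))
```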
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion of how the rigorous mathematical results on the existence of generators are bridged to their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
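The basic time-homogeneous check underlying such tests can be sketched in a few lines (an illustration only; the paper's contribution is the extension to time-inhomogeneous generators and the analytic treatment of circulant matrices): take the matrix logarithm of an empirical transition matrix and verify the generator conditions.

```python
# Time-homogeneous embedding check: is log(P) a valid Markov generator?
import numpy as np
from scipy.linalg import logm

P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])          # made-up discrete-time transition matrix

Q = logm(P).real                            # candidate generator with P = expm(Q)
row_sums_ok = np.allclose(Q.sum(axis=1), 0.0, atol=1e-8)          # rows must sum to zero
off_diag_ok = np.all(Q[~np.eye(3, dtype=bool)] >= -1e-12)         # off-diagonals must be >= 0
print(np.round(Q, 4))
print("valid continuous-time generator:", row_sums_ok and off_diag_ok)
```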
EFFECTS OF LASER RADIATION ON MATTER: Maximum depth of keyhole melting of metals by a laser beam
NASA Astrophysics Data System (ADS)
Pinsker, V. A.; Cherepanov, G. P.
1990-11-01
A calculation is reported of the maximum depth and diameter of a narrow crater formed in a stationary metal target exposed to high-power cw CO2 laser radiation. The energy needed for erosion of a unit volume is assumed to be constant and the energy losses experienced by the beam in the vapor-gas channel are ignored. The heat losses in the metal are allowed for by an analytic solution of the three-dimensional boundary-value heat-conduction problem of the temperature field in the vicinity of a thin but long crater with a constant temperature on its surface. An approximate solution of this problem by a method proposed earlier by one of the present authors was tested on a computer. The dimensions of the thin crater were found to be very different from those obtained earlier subject to a less rigorous allowance for the heat losses.
Comprehensive modeling of a liquid rocket combustion chamber
NASA Technical Reports Server (NTRS)
Liang, P.-Y.; Fisher, S.; Chang, Y. M.
1985-01-01
An analytical model for the simulation of detailed three-phase combustion flows inside a liquid rocket combustion chamber is presented. The three phases involved are: a multispecies gaseous phase, an incompressible liquid phase, and a particulate droplet phase. The gas and liquid phases are treated as continua and described in an Eulerian fashion. A two-phase solution capability for these continuum media is obtained through a marriage of the Implicit Continuous Eulerian (ICE) technique and the fractional Volume of Fluid (VOF) free surface description method. In contrast, the particulate phase is given a discrete treatment and described in a Lagrangian fashion. All three phases are hence treated rigorously. Semi-empirical physical models are used to describe all interphase coupling terms as well as the chemistry among gaseous components. Sample calculations using the model are given. The results show promising application to truly comprehensive modeling of complex liquid-fueled engine systems.
Bessel-Gauss beams as rigorous solutions of the Helmholtz equation.
April, Alexandre
2011-10-01
The study of the nonparaxial propagation of optical beams has received considerable attention. In particular, the so-called complex-source/sink model can be used to describe strongly focused beams near the beam waist, but this method has not yet been applied to the Bessel-Gauss (BG) beam. In this paper, the complex-source/sink solution for the nonparaxial BG beam is expressed as a superposition of nonparaxial elegant Laguerre-Gaussian beams. This provides a direct way to write the explicit expression for a tightly focused BG beam that is an exact solution of the Helmholtz equation. It reduces correctly to the paraxial BG beam, the nonparaxial Gaussian beam, and the Bessel beam in the appropriate limits. The analytical expression can be used to calculate the field of a BG beam near its waist, and it may be useful in investigating the features of BG beams under tight focusing conditions.
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer methods but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models, which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast so that it may be used either as a subroutine or called by a design optimization routine. Models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
Xiao, Li; Luo, Ray
2017-12-07
We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with the analytical solution. Next, we performed the relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium, with detailed surface features resembling those found on the solvent-excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries as sampled in the explicit water simulations.
Östlund, Ulrika; Kidd, Lisa; Wengström, Yvonne; Rowa-Dewar, Neneh
2011-03-01
It has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be a topic of much debate, and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses. This review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009. In total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified, and an example of developing theory from such data is provided. A trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings and help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory. Copyright © 2010 Elsevier Ltd. All rights reserved.
Increased scientific rigor will improve reliability of research and effectiveness of management
Sells, Sarah N.; Bassing, Sarah B.; Barker, Kristin J.; Forshee, Shannon C.; Keever, Allison; Goerz, James W.; Mitchell, Michael S.
2018-01-01
Rigorous science that produces reliable knowledge is critical to wildlife management because it increases accurate understanding of the natural world and informs management decisions effectively. Application of a rigorous scientific method based on hypothesis testing minimizes unreliable knowledge produced by research. To evaluate the prevalence of scientific rigor in wildlife research, we examined 24 issues of the Journal of Wildlife Management from August 2013 through July 2016. We found 43.9% of studies did not state or imply a priori hypotheses, which are necessary to produce reliable knowledge. We posit that this is due, at least in part, to a lack of common understanding of what rigorous science entails, how it produces more reliable knowledge than other forms of interpreting observations, and how research should be designed to maximize inferential strength and usefulness of application. Current primary literature does not provide succinct explanations of the logic behind a rigorous scientific method or readily applicable guidance for employing it, particularly in wildlife biology; we therefore synthesized an overview of the history, philosophy, and logic that define scientific rigor for biological studies. A rigorous scientific method includes 1) generating a research question from theory and prior observations, 2) developing hypotheses (i.e., plausible biological answers to the question), 3) formulating predictions (i.e., facts that must be true if the hypothesis is true), 4) designing and implementing research to collect data potentially consistent with predictions, 5) evaluating whether predictions are consistent with collected data, and 6) drawing inferences based on the evaluation. Explicitly testing a priori hypotheses reduces overall uncertainty by reducing the number of plausible biological explanations to only those that are logically well supported. Such research also draws inferences that are robust to idiosyncratic observations and unavoidable human biases. Offering only post hoc interpretations of statistical patterns (i.e., a posteriori hypotheses) adds to uncertainty because it increases the number of plausible biological explanations without determining which have the greatest support. Further, post hoc interpretations are strongly subject to human biases. Testing hypotheses maximizes the credibility of research findings, makes the strongest contributions to theory and management, and improves reproducibility of research. Management decisions based on rigorous research are most likely to result in effective conservation of wildlife resources.
Zhang, Yuxuan; Chandran, K.S. Ravi; Jagannathan, M.; ...
2016-12-05
Li-Mg alloys are promising as positive electrodes (anodes) for Li-ion batteries due to the high Li storage capacity and the relatively lower volume change during the lithiation/delithiation process. They also present a unique opportunity to image the Li distribution through the electrode thickness at various delithiation states. In this work, spatial distributions of Li in electrochemically delithiated Li-Mg alloy electrodes have been quantitatively determined using neutron tomography. Specifically, the Li concentration profiles along the thickness direction are determined. A rigorous analytical model to quantify the diffusion-controlled delithiation, accompanied by phase transition and boundary movement, has also been developed to explain the delithiation mechanism. The analytical modeling scheme successfully predicted the Li concentration profiles which agreed well with the experimental data. It is demonstrated that during discharge Li is removed by diffusion through the solid solution Li-Mg phases and this proceeds with β→α phase transition and the associated phase boundary movement through the thickness of the electrode. This is also accompanied by electrode thinning due to the change in molar volume during delithiation. In conclusion, following the approaches developed here, one can develop a rigorous and quantitative understanding of electrochemical delithiation in electrodes of electrochemical cells, similar to that in the present Li-Mg electrodes.
Schieschke, Nils; Di Remigio, Roberto; Frediani, Luca; Heuser, Johannes; Höfener, Sebastian
2017-07-15
We present the explicit derivation of an approach to the multiscale description of molecules in complex environments that combines frozen-density embedding (FDE) with continuum solvation models, in particular the conductor-like screening model (COSMO). FDE provides an explicit atomistic description of molecule-environment interactions at reduced computational cost, while the outer continuum layer accounts for the effect of long-range isotropic electrostatic interactions. Our treatment is based on a variational Lagrangian framework, enabling rigorous derivations of ground- and excited-state response properties. As an example of the flexibility of the theoretical framework, we derive and discuss FDE + COSMO analytical molecular gradients for excited states within the Tamm-Dancoff approximation (TDA) and for ground states within second-order Møller-Plesset perturbation theory (MP2) and the second-order approximate coupled cluster with singles and doubles (CC2). It is shown how this method can be used to describe vertical electronic excitation (VEE) energies and Stokes shifts for uracil in water and carbostyril in dimethyl sulfoxide (DMSO), respectively. In addition, VEEs for some simplified protein models are computed, illustrating the performance of this method when applied to larger systems. The interaction terms between the FDE subsystem densities and the continuum can influence excitation energies up to 0.3 eV and, thus, cannot be neglected for general applications. We find that the net influence of the continuum in presence of the first FDE shell on the excitation energy amounts to about 0.05 eV for the cases investigated. The present work is an important step toward rigorously derived ab initio multilayer and multiscale modeling approaches. © 2017 Wiley Periodicals, Inc.
Pecchia, Leandro; Martin, Jennifer L; Ragozzino, Angela; Vanzanella, Carmela; Scognamiglio, Arturo; Mirarchi, Luciano; Morgan, Stephen P
2013-01-05
The rigorous elicitation of user needs is a crucial step for both medical device design and purchasing. However, user needs elicitation is often based on qualitative methods whose findings can be difficult to integrate into medical decision-making. This paper describes the application of the analytic hierarchy process (AHP) to elicit user needs for a new CT scanner for use in a public hospital. AHP was used to design a hierarchy of 12 needs for a new CT scanner, grouped into 4 homogenous categories, and to prepare a paper questionnaire to investigate the relative priorities of these. The questionnaire was completed by 5 senior clinicians working in a variety of clinical specialisations and departments in the same Italian public hospital. Although safety and performance were considered the most important issues, user needs changed according to clinical scenario. For elective surgery, the five most important needs were: spatial resolution, processing software, radiation dose, patient monitoring, and contrast medium. For emergency, the top five most important needs were: patient monitoring, radiation dose, contrast medium control, speed run, spatial resolution. AHP effectively supported user needs elicitation, helping to develop an analytic and intelligible framework for decision-making. User needs varied according to working scenario (elective versus emergency medicine) more than clinical specialisation. This method should be considered by practitioners involved in decisions about new medical technology, whether that be during device design or before deciding whether to allocate budgets for new medical devices according to clinical functions or according to hospital department.
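As a concrete illustration of the priority calculation underlying an AHP exercise of this kind, the sketch below applies the standard principal-eigenvector method to a hypothetical 4x4 pairwise-comparison matrix. The matrix values, the four categories they stand for, and the consistency check are illustrative assumptions, not data or code from the study.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for four need categories
# (values are illustrative only, not data from the study).
# A[i, j] states how much more important need i is than need j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 2.0, 1/2],
    [1/5, 1/2, 1.0, 1/3],
    [1/2, 2.0, 3.0, 1.0],
])

# Principal-eigenvector method: priorities are the normalized eigenvector
# associated with the largest eigenvalue of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR) checks whether the judgments are acceptably consistent
# (CR < 0.1 is the usual threshold); RI = 0.90 is the random index for n = 4.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.90

print("priorities:", weights.round(3), "CR:", round(CR, 3))
```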
Erikson, U; Misimi, E
2008-03-01
The changes in skin and fillet color of anesthetized and exhausted Atlantic salmon were determined immediately after killing, during rigor mortis, and after ice storage for 7 d. Skin color (CIE L*, a*, b*, and related values) was determined by a Minolta Chroma Meter. Roche SalmoFan Lineal and Roche Color Card values were determined by a computer vision method and a sensory panel. Before color assessment, the stress levels of the 2 fish groups were characterized in terms of white muscle parameters (pH, rigor mortis, and core temperature). The results showed that perimortem handling stress initially significantly affected several color parameters of skin and fillets. Significant transient fillet color changes also occurred in the prerigor phase and during the development of rigor mortis. Our results suggested that fillet color was affected by postmortem glycolysis (pH drop, particularly in anesthetized fillets), then by onset and development of rigor mortis. The color change patterns during storage were different for the 2 groups of fish. The computer vision method was considered suitable for automated (online) quality control and grading of salmonid fillets according to color.
Exact solution of corner-modified banded block-Toeplitz eigensystems
NASA Astrophysics Data System (ADS)
Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza
2017-05-01
Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.
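The Majorana chain mentioned at the end of the abstract offers a simple way to see what a corner-modified banded block-Toeplitz matrix looks like in practice. The sketch below builds the open-chain Bogoliubov-de Gennes matrix from repeated 2x2 blocks and diagonalizes it densely as a brute-force numerical cross-check; it is not the paper's projector-based, size-independent algorithm, and the parameter names and sweet-spot values (t = delta, mu = 0) are illustrative assumptions.

```python
import numpy as np

# Brute-force spectrum of a corner-modified banded block-Toeplitz matrix:
# the Bogoliubov-de Gennes matrix of an open Kitaev chain (the block convention
# and parameters below are assumptions for illustration; this dense
# diagonalization is a cross-check, not the paper's projector-based algorithm).
N, t, mu, delta = 40, 1.0, 0.0, 1.0

onsite = np.array([[-mu, 0.0], [0.0, mu]])    # repeated diagonal block
hop = np.array([[-t, delta], [-delta, t]])    # repeated off-diagonal block

H = np.zeros((2 * N, 2 * N))
for i in range(N):
    H[2*i:2*i+2, 2*i:2*i+2] = onsite
    if i + 1 < N:
        H[2*i:2*i+2, 2*i+2:2*i+4] = hop
        H[2*i+2:2*i+4, 2*i:2*i+2] = hop.T     # Hermitian conjugate block

# Open boundary conditions are what make this matrix "corner-modified"
# relative to its translation-invariant (circulant) counterpart.
energies = np.linalg.eigvalsh(H)
print("two smallest |E| (Majorana end modes):", np.sort(np.abs(energies))[:2])
```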
Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J
2012-09-06
Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and choice of using singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.
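For readers unfamiliar with the quantity such assays estimate, the sketch below shows a generic efficiency-corrected relative telomere length (T/S) calculation in the style of the Pfaffl method. The Cq values and amplification efficiencies are made up, and this is not the specific humpback whale assay or analysis pipeline evaluated in the paper.

```python
# Efficiency-corrected relative telomere length (T/S ratio) in the style of the
# Pfaffl method. All Cq values and amplification efficiencies below are made-up
# illustrations, not data from the humpback whale assays.

def relative_ts_ratio(cq_tel_sample, cq_tel_cal, cq_scg_sample, cq_scg_cal,
                      eff_tel=2.0, eff_scg=2.0):
    """T/S of a sample relative to a calibrator, correcting each amplicon
    for its own amplification efficiency (2.0 = perfect doubling per cycle)."""
    telomere = eff_tel ** (cq_tel_cal - cq_tel_sample)      # telomere amplicon
    single_copy = eff_scg ** (cq_scg_cal - cq_scg_sample)   # single-copy gene amplicon
    return telomere / single_copy

# Example: assay efficiencies of 1.92 and 1.98 estimated from standard curves.
print(relative_ts_ratio(cq_tel_sample=14.8, cq_tel_cal=15.6,
                        cq_scg_sample=21.2, cq_scg_cal=21.0,
                        eff_tel=1.92, eff_scg=1.98))
```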
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Rachel M; Tfaily, Malak M
These data are provided in support of the Commentary, Advanced molecular techniques provide a rigorous method for characterizing organic matter quality in complex systems, Wilson and Tfaily (2018). Measurement results demonstrate that optical characterization of peatland dissolved organic matter (DOM) may not fully capture classically identified chemical characteristics and may, therefore, not be the best measure of organic matter quality.
Volume Holograms in Photopolymers: Comparison between Analytical and Rigorous Theories
Gallego, Sergi; Neipp, Cristian; Estepa, Luis A.; Ortuño, Manuel; Márquez, Andrés; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto
2012-01-01
There is no doubt that the concept of volume holography has led to a great amount of scientific research and technological applications. One of these applications is the use of volume holograms as optical memories, and in particular, the use of a photosensitive medium like a photopolymeric material to record information in all its volume. In this work we analyze the applicability of Kogelnik's Coupled Wave theory to the study of volume holograms recorded in photopolymers. Some of the theoretical models in the literature describing the mechanism of hologram formation in photopolymer materials use Kogelnik's theory to analyze the gratings recorded in photopolymeric materials. If Kogelnik's theory cannot be applied, it is necessary to use a more general Coupled Wave theory (CW) or the Rigorous Coupled Wave theory (RCW). The RCW does not incorporate any approximation and thus, since it is rigorous, permits judging the accuracy of the approximations included in Kogelnik's and CW theories. In this article, a comparison between the predictions of the three theories for phase transmission diffraction gratings is carried out. We have demonstrated the agreement between the predictions of the CW and RCW theories and the validity of Kogelnik's theory only for gratings with spatial frequencies higher than 500 lines/mm for the usual values of the refractive index modulations obtained in photopolymers.
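For reference, Kogelnik's closed-form result for a lossless phase transmission grating replayed at the Bragg angle is easy to evaluate directly. The sketch below does so for illustrative photopolymer-like parameters; the index modulation, thickness, wavelength, and angle are assumptions, not values from this work.

```python
import numpy as np

# Kogelnik's closed-form diffraction efficiency for a lossless phase transmission
# grating at Bragg incidence: eta = sin^2(pi * dn * d / (lambda * cos(theta))).
def kogelnik_efficiency(dn, thickness, wavelength, theta_bragg):
    nu = np.pi * dn * thickness / (wavelength * np.cos(theta_bragg))
    return np.sin(nu) ** 2

# Illustrative photopolymer-like parameters (assumptions, not the paper's data).
eta = kogelnik_efficiency(dn=2.5e-3, thickness=80e-6,
                          wavelength=633e-9, theta_bragg=np.deg2rad(17.0))
print(f"Predicted first-order efficiency: {eta:.2%}")
```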
Determination of methyl bromide in air samples by headspace gas chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodrow, J.E.; McChesney, M.M.; Seiber, J.N.
1988-03-01
Methyl bromide is extensively used in agriculture (4 × 10^6 kg for 1985 in California alone) as a fumigant to control nematodes, weeds, and fungi in soil and insect pests in harvested grains and nuts. Given its low boiling point (3.8 °C) and high vapor pressure (approximately 1400 Torr at 20 °C), methyl bromide will readily diffuse if not rigorously contained. Methods for determining methyl bromide and other halocarbons in air vary widely. A common practice is to trap the material from air on an adsorbent, such as polymeric resins, followed by thermal desorption either directly into the analytical instrumentation or after intermediary cryofocusing. While in some cases analytical detection limits were reasonable (parts per million range), many of the published methods were labor intensive and required special handling techniques that precluded high sample throughput. Described here is a method for the sampling and analysis of airborne methyl bromide that was designed to handle large numbers of samples by automating some critical steps of the analysis. The result was a method that allowed around-the-clock operation with a minimum of operator attention. Furthermore, the method was not specific to methyl bromide and could be used to determine other halocarbons in air.
Reference Materials: Critical Importance to the Infant Formula Industry.
Wargo, Wayne F
2017-09-01
Infant formula is one of the most regulated foods in the world. It has advanced in complexity over the years as a result of numerous research innovations. To ensure product safety and quality, analytical technologies have also had to advance to keep pace. Given the rigorous performance demands expected of these methods and the ever-growing array of complex matrixes, there is the potential for gaps to exist in current Official Methods(SM) and other recognized international methods for infant formula and adult nutritionals. Food safety concerns, particularly for infants, drive the need for extensive testing by manufacturers and regulators. The net effect is the potential for an increase in time- and resource-consuming regulatory disputes. In an effort to mitigate such costly activities, AOAC INTERNATIONAL, under the direction of the Infant Formula Council of America (a trade association of manufacturers and marketers of formulated nutritional products), agreed to establish voluntary consensus Standard Method Performance Requirements and, ultimately, to identify and publish globally recognized, fit-for-purpose standard methods. To accomplish this task, nutritional reference materials (RMs), representing all major commercially available nutritional formulations, were (and continue to be) a critical necessity. In this paper, various types of RMs will be defined, followed by review and discussion of their importance to the infant formula industry.
Two-port connecting-layer-based sandwiched grating by a polarization-independent design.
Li, Hongtao; Wang, Bo
2017-05-02
In this paper, a two-port connecting-layer-based sandwiched beam splitter grating with polarization-independent properties is reported and designed. Such a grating can separate the transmitted polarized light into two diffraction orders with equal energies, realizing a nearly 50/50 output with good uniformity. For the given wavelength of 800 nm and period of 780 nm, a simplified modal method can be used to design an optimal duty cycle, and an estimate of the grating depth can be calculated from it. In order to obtain the precise grating parameters, a rigorous coupled-wave analysis can be employed to optimize the design by seeking the precise grating depth and the thickness of the connecting layer. Based on the optimized design, a high-efficiency two-port output grating with wideband performance can be obtained. Most importantly, the diffraction efficiencies are calculated using two analytical methods, which are shown to agree well with each other. Therefore, the grating is significant as a practical photonic element in optical engineering.
Percy, Andrew J; Yang, Juncong; Hardie, Darryl B; Chambers, Andrew G; Tamura-Wells, Jessica; Borchers, Christoph H
2015-06-15
Spurred on by the growing demand for panels of validated disease biomarkers, increasing efforts have focused on advancing qualitative and quantitative tools for more highly multiplexed and sensitive analyses of a multitude of analytes in various human biofluids. In quantitative proteomics, evolving strategies involve the use of the targeted multiple reaction monitoring (MRM) mode of mass spectrometry (MS) with stable isotope-labeled standards (SIS) used for internal normalization. Using that preferred approach with non-invasive urine samples, we have systematically advanced and rigorously assessed the methodology toward the precise quantitation of the largest multiplexed panel of candidate protein biomarkers in human urine to date. The concentrations of the 136 proteins span >5 orders of magnitude (from 8.6 μg/mL to 25 pg/mL), with average CVs of 8.6% across process triplicates. Detailed here are our quantitative method, the analysis strategy, a feasibility application to prostate cancer samples, and a discussion of the utility of this method in translational studies. Copyright © 2015 Elsevier Inc. All rights reserved.
The contour-buildup algorithm to calculate the analytical molecular surface.
Totrov, M; Abagyan, R
1996-01-01
A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.
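For contrast with this analytical construction, the sketch below estimates a related probe-based surface, the solvent-accessible surface, by brute-force Shrake-Rupley point sampling over two overlapping spheres. It is not the contour-buildup algorithm; the coordinates, radii, and probe size are illustrative assumptions, and it is included only to make the rolling-probe idea concrete.

```python
import numpy as np

# Numerical estimate of a solvent-accessible surface via Shrake-Rupley point
# sampling. This is NOT the contour-buildup algorithm of the paper (which is
# analytical); it is a simple brute-force alternative shown for illustration.
def sphere_points(n=960):
    # Quasi-uniform points on the unit sphere (golden-spiral construction).
    i = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * i / n)
    theta = np.pi * (1 + 5 ** 0.5) * i
    return np.column_stack([np.sin(phi) * np.cos(theta),
                            np.sin(phi) * np.sin(theta),
                            np.cos(phi)])

def sasa(centers, radii, probe=1.4, n_points=960):
    pts = sphere_points(n_points)
    total = 0.0
    for i, (c, r) in enumerate(zip(centers, radii)):
        shell = c + (r + probe) * pts               # candidate surface points
        exposed = np.ones(n_points, dtype=bool)
        for j, (c2, r2) in enumerate(zip(centers, radii)):
            if j != i:
                exposed &= np.linalg.norm(shell - c2, axis=1) > (r2 + probe)
        total += exposed.mean() * 4 * np.pi * (r + probe) ** 2
    return total

centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # two overlapping "atoms"
radii = np.array([1.7, 1.2])
print(f"approximate SASA: {sasa(centers, radii):.1f} A^2")
```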
Serina, Florent
2018-06-09
While his relationship to Sigmund Freud's psychoanalysis has been widely acknowledged, Claude Lévi-Strauss's rapport with C.G. Jung's analytical psychology remains quite obscure. Although secondary commentary has been abundant, its approach has above all been intertextual, to the detriment of a rigorously historical reading. Even if certain arguments put forward by supporters of a so-called "influence" deserve to be taken into account, especially because they highlight Lévi-Strauss's ambiguities and paradoxes toward Jung, this paper provides evidence that a precise reading of the texts, informed by recent studies on the intellectual genesis of Lévi-Strauss's thought, leads to rejecting the thesis of an unstated debt owed by the French anthropologist to the Zurich psychologist. © 2018 Wiley Periodicals, Inc.
The Modern Design of Experiments for Configuration Aerodynamics: A Case Study
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2006-01-01
The effects of slowly varying and persistent covariates on the accuracy and precision of experimental results are reviewed, as is the rationale for run-order randomization as a quality assurance tactic employed in the Modern Design of Experiments (MDOE) to defend against such effects. Considerable analytical complexity is introduced by restrictions on randomization in configuration aerodynamics tests because they involve hard-to-change configuration variables that cannot be randomized conveniently. Tradeoffs are examined between quality and productivity associated with varying degrees of rigor in accounting for such randomization restrictions. Certain characteristics of a configuration aerodynamics test are considered that may justify a relaxed accounting for randomization restrictions to achieve a significant reduction in analytical complexity with a comparably negligible adverse impact on the validity of the experimental results.
Value of the distant future: Model-independent results
NASA Astrophysics Data System (ADS)
Katz, Yuri A.
2017-01-01
This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for the value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcomes of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive a non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
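The qualitative mechanism at work, that persistent uncertainty about future rates pulls the certainty-equivalent discount rate down toward its lowest plausible value, can be seen with the textbook expected-discount-factor calculation sketched below. The two scenario rates and their probabilities are made-up illustrations, and this is not the paper's non-Markovian formula.

```python
import numpy as np

# Illustration of why persistent interest-rate uncertainty lowers long-run
# discount rates: the certainty-equivalent rate implied by the expected discount
# factor declines toward the lowest possible rate as the horizon grows.
rates = np.array([0.01, 0.05])   # possible permanent real rates (illustrative)
probs = np.array([0.5, 0.5])

for t in [1, 10, 50, 100, 400]:
    expected_factor = np.sum(probs * np.exp(-rates * t))
    certainty_equiv_rate = -np.log(expected_factor) / t
    print(f"t = {t:4d} years  certainty-equivalent rate = {certainty_equiv_rate:.4f}")
```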
Iterative categorization (IC): a systematic technique for analysing qualitative data.
Neale, Joanne
2016-06-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
Wearable Networked Sensing for Human Mobility and Activity Analytics: A Systems Study.
Dong, Bo; Biswas, Subir
2012-01-01
This paper presents implementation details, system characterization, and the performance of a wearable sensor network that was designed for human activity analysis. Specific machine learning mechanisms are implemented for recognizing a target set of activities with both out-of-body and on-body processing arrangements. Impacts of energy consumption by the on-body sensors are analyzed in terms of activity detection accuracy for out-of-body processing. Impacts of limited processing abilities in the on-body scenario are also characterized in terms of detection accuracy, by varying the background processing load in the sensor units. Through a rigorous systems study, it is shown that an efficient human activity analytics system can be designed and operated even under energy and processing constraints of tiny on-body wearable sensors.
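To make the analytics pipeline concrete, the sketch below shows the generic pattern such systems follow: fixed-length windows of a 3-axis accelerometer stream are reduced to simple statistical features and passed to a lightweight classifier. The window length, features, classifier choice, and synthetic data are assumptions for illustration, not the specific machine learning mechanisms or sensor data of this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal activity-recognition pipeline sketch on synthetic accelerometer windows.
def window_features(window):
    # window: (n_samples, 3) accelerometer block -> 7 summary features
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           [np.linalg.norm(window, axis=1).mean()]])

rng = np.random.default_rng(0)
X, y = [], []
for label, scale in [(0, 0.05), (1, 0.5)]:   # 0 = "still", 1 = "walking" (synthetic)
    for _ in range(200):
        window = rng.normal(0.0, scale, size=(128, 3))
        X.append(window_features(window))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```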
Ossikovski, Razvigor; Arteaga, Oriol; Vizet, Jérémy; Garcia-Caurel, Enric
2017-08-01
We show, both analytically and experimentally, that under common experimental conditions the interference pattern produced in a classic Young's double-slit experiment is indistinguishable from that generated by means of a doubly refracting uniaxial crystal whose optic axis makes a skew angle with the light propagation direction. The equivalence between diffraction and crystal optics interference experiments, taken for granted by Arago and Fresnel in their pioneering research on the interference of polarized light beams, is thus rigorously proven.
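For readers who want the form of the pattern being compared, the sketch below evaluates the ideal far-field two-beam fringe intensity of a double slit, I(x) proportional to cos^2(pi d x / (lambda L)). The slit separation, wavelength, and screen distance are illustrative values, not the experimental conditions of the paper.

```python
import numpy as np

# Ideal two-beam double-slit fringe intensity on a distant screen.
wavelength = 633e-9   # m (illustrative)
d = 0.25e-3           # slit separation, m (illustrative)
L = 1.0               # slit-to-screen distance, m (illustrative)

x = np.linspace(-5e-3, 5e-3, 9)                  # positions on the screen, m
intensity = np.cos(np.pi * d * x / (wavelength * L)) ** 2
for xi, I in zip(x, intensity):
    print(f"x = {xi*1e3:+.2f} mm  I/I0 = {I:.3f}")
```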
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Invariant Tori in the Secular Motions of the Three-body Planetary Systems
NASA Astrophysics Data System (ADS)
Locatelli, Ugo; Giorgilli, Antonio
We consider the problem of the applicability of the KAM theorem to a realistic problem of three bodies. In the framework of the dynamics averaged over the fast angles for the Sun-Jupiter-Saturn system, we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulations of series and analytical estimates. The proof is made rigorous by using interval arithmetic in order to control the numerical errors.
On making cuts for magnetic scalar potentials in multiply connected regions
NASA Astrophysics Data System (ADS)
Kotiuga, P. R.
1987-04-01
The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1981), Chap. 1, Article 20] and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. This problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The technique used in the proof exposes the intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's (Ph.D. Thesis, McGill University, Montreal, 1984) heuristic interpretation of cuts and duality theorems via intersection matrices.
NASA Astrophysics Data System (ADS)
Lepper, Kenneth Errol
Scope and method of study. Part I: In its simplest expression, a luminescence age is the natural absorbed radiation dose (De) divided by the in-situ dose rate. The experimental techniques of Optically Stimulated Luminescence (OSL) dating have evolved to the point where hundreds of De values, and therefore depositional ages, can be quickly and conveniently determined for a single sediment sample. The first major objective of this research was to develop an objective method for analyzing dose distribution data and selecting an age-representative dose (Dp). The analytical method was developed based on dose data sets collected from 3 eolian and 3 fluvial sediment samples from Central Oklahoma. Findings and conclusions. Part I: An objective method of presenting the dose distribution data, a mathematically rigorous means of determining Dp, and a statistically meaningful definition of the uncertainty in Dp have been proposed. The concept of experimental error deconvolution was introduced. In addition, a set of distribution shape parameters has been defined to facilitate comparison among samples. These analytical techniques hold the potential to greatly enhance the accuracy and utility of OSL dating for young fluvial sediments. Scope and method of study. Part II: The second major objective of this research was to propose the application of luminescence dating to sediments on Mars. A set of fundamental luminescence dating properties was evaluated for a martian surface-materials analog and a polar deposit contextual analog. Findings and conclusions. Part II: The luminescence signals measured from the analogs were found to have a wide dynamic dose response range with no unusual or prohibitive short-term instabilities and were readily reset by exposure to sunlight. These properties form a stable base for continued investigations toward the development of luminescence dating instruments and procedures for Mars.
The impact of rigorous mathematical thinking as learning method toward geometry understanding
NASA Astrophysics Data System (ADS)
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-05-01
To reach higher-order thinking skills, conceptual understanding must first be mastered. Rigorous Mathematical Thinking (RMT) is a unique realization of the cognitive conceptual construction approach based on Feuerstein's Mediated Learning Experience (MLE) theory and Vygotsky's sociocultural theory. This was a quasi-experimental study comparing an experimental class taught with RMT as the learning method and a control class taught with Direct Learning (DL), the conventional learning activity. The study examined whether the two learning methods had different effects on the geometry conceptual understanding of junior high school students. The data were analyzed using an independent t-test, which showed a significant difference in mean geometry conceptual understanding between the experimental and control classes. Further, semi-structured interviews revealed that students taught with RMT had a deeper conceptual understanding than students taught in the conventional way. These results indicate that Rigorous Mathematical Thinking (RMT) as a learning method has a positive impact on geometry conceptual understanding.
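The group comparison reported here rests on a standard independent two-sample t-test. A minimal sketch is given below using Welch's unequal-variance form; the scores are fabricated, purely illustrative numbers, not the study's data.

```python
from scipy import stats

# Independent two-sample t-test comparing conceptual-understanding scores of an
# RMT-taught class and a DL-taught class. The scores below are fabricated for
# illustration only; they are not the study's data.
rmt_scores = [82, 75, 90, 68, 85, 79, 88, 73, 91, 77]
dl_scores  = [70, 65, 72, 60, 78, 66, 74, 58, 69, 71]

t_stat, p_value = stats.ttest_ind(rmt_scores, dl_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```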
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, such as compressor impellers, stages, and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate, and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
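For orientation, the familiar ideal-gas shortcut relating the polytropic efficiency to the measured suction and discharge conditions is sketched below. It is only the textbook approximation, not the real-gas procedure proposed in the paper, and the example state point and kappa value are assumptions.

```python
import math

# Ideal-gas approximation of the polytropic efficiency from measured suction and
# discharge conditions: eta_p = ((kappa - 1) / kappa) * ln(P2/P1) / ln(T2/T1).
def polytropic_efficiency_ideal(p1, t1, p2, t2, kappa=1.4):
    # p in any consistent pressure unit, t in kelvin, kappa = cp/cv
    return ((kappa - 1.0) / kappa) * math.log(p2 / p1) / math.log(t2 / t1)

# Illustrative air-compression point (values are assumptions, not the paper's data).
print(polytropic_efficiency_ideal(p1=1.0e5, t1=293.15, p2=4.0e5, t2=470.0))
```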
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
NASA Astrophysics Data System (ADS)
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare, and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons by organizing and comparing results from biotechnologies to address applications in biomedicine.
Transitioning from Targeted to Comprehensive Mass Spectrometry Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Jaffe, Jacob D.; Feeney, Caitlin M.; Patel, Jinal; Lu, Xiaodong; Mani, D. R.
2016-11-01
Targeted proteomic assays are becoming increasingly popular because of their robust quantitative applications enabled by internal standardization, and they can be routinely executed on high performance mass spectrometry instrumentation. However, these assays are typically limited to 100s of analytes per experiment. Considerable time and effort are often expended in obtaining and preparing samples prior to targeted analyses. It would be highly desirable to detect and quantify 1000s of analytes in such samples using comprehensive mass spectrometry techniques (e.g., SWATH and DIA) while retaining a high degree of quantitative rigor for analytes with matched internal standards. Experimentally, it is facile to port a targeted assay to a comprehensive data acquisition technique. However, data analysis challenges arise from this strategy concerning agreement of results from the targeted and comprehensive approaches. Here, we present the use of genetic algorithms to overcome these challenges in order to configure hybrid targeted/comprehensive MS assays. The genetic algorithms are used to select precursor-to-fragment transitions that maximize the agreement in quantification between the targeted and the comprehensive methods. We find that the algorithm we used provided across-the-board improvement in the quantitative agreement between the targeted assay data and the hybrid comprehensive/targeted assay that we developed, as measured by parameters of linear models fitted to the results. We also found that the algorithm could perform at least as well as an independently-trained mass spectrometrist in accomplishing this task. We hope that this approach will be a useful tool in the development of quantitative approaches for comprehensive proteomics techniques.
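A minimal sketch of the kind of optimization described here is given below: a toy genetic algorithm selects the subset of fragment-ion transitions whose summed comprehensive (DIA-like) signal best correlates with a targeted reference signal. The synthetic data, the fitness definition, and all GA settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy genetic algorithm: pick a subset of transitions whose summed "comprehensive"
# signal best tracks the targeted (MRM-like) quantification. Everything below is
# synthetic and illustrative.
rng = np.random.default_rng(1)
n_samples, n_transitions = 24, 12
targeted = rng.lognormal(mean=3.0, sigma=1.0, size=n_samples)        # reference signal
quality = rng.uniform(0.0, 1.0, size=n_transitions)                  # faithfulness of each transition
dia = np.array([q * targeted + (1 - q) * rng.lognormal(3.0, 1.0, n_samples)
                for q in quality])                                    # per-transition signal

def fitness(mask):
    if mask.sum() == 0:
        return -1.0
    summed = dia[mask].sum(axis=0)
    return np.corrcoef(np.log(summed), np.log(targeted))[0, 1]        # agreement metric

pop = rng.integers(0, 2, size=(40, n_transitions)).astype(bool)
for generation in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                           # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_transitions)                          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_transitions) < 0.05                       # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected transitions:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```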
Systemic Planning: An Annotated Bibliography and Literature Guide. Exchange Bibliography No. 91.
ERIC Educational Resources Information Center
Catanese, Anthony James
Systemic planning is an operational approach to using scientific rigor and qualitative judgment in a complementary manner. It integrates rigorous techniques and methods from systems analysis, cybernetics, decision theory, and work programing. The annotated reference sources in this bibliography include those works that have been most influential…
Estimation of the breaking of rigor mortis by myotonometry.
Vain, A; Kauppila, R; Vuori, E
1996-05-31
Myotonometry was used to detect the breaking of rigor mortis. The myotonometer is a new instrument that measures the decaying oscillations of a muscle after a brief mechanical impact. The method gives two numerical parameters for rigor mortis, namely the period and decrement of the oscillations, both of which depend on the time elapsed after death. When rigor mortis was broken by muscle lengthening, both the oscillation period and the decrement decreased, whereas shortening the muscle caused the opposite changes. Fourteen hours after breaking, the stiffness characteristics (oscillation periods) of the right and left m. biceps brachii had converged. However, the values for the decrement of the muscle, reflecting the dissipation of mechanical energy, maintained their differences.
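The two myotonometric parameters are straightforward to extract from a recorded decaying oscillation: the period from the spacing of successive peaks and the logarithmic decrement from the ratio of their amplitudes. The sketch below does this on a synthetic damped sinusoid standing in for a muscle response; the frequency, damping, and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

# Extract oscillation period and logarithmic decrement from a decaying oscillation.
fs = 2000.0                                    # sampling rate, Hz (illustrative)
t = np.arange(0, 0.5, 1 / fs)
signal = np.exp(-6.0 * t) * np.cos(2 * np.pi * 15.0 * t)   # synthetic muscle response

peaks, _ = find_peaks(signal)
period = np.mean(np.diff(t[peaks]))                                    # oscillation period, s
decrement = np.mean(np.log(signal[peaks][:-1] / signal[peaks][1:]))    # logarithmic decrement

print(f"period = {period*1e3:.1f} ms, decrement = {decrement:.2f}")
```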
Enabling quaternion derivatives: the generalized HR calculus
Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.
2015-01-01
Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555
Quality-control materials in the USDA National Food and Nutrient Analysis Program (NFNAP).
Phillips, Katherine M; Patterson, Kristine Y; Rasor, Amy S; Exler, Jacob; Haytowitz, David B; Holden, Joanne M; Pehrsson, Pamela R
2006-03-01
The US Department of Agriculture (USDA) Nutrient Data Laboratory (NDL) develops and maintains the USDA National Nutrient Databank System (NDBS). Data are released from the NDBS for scientific and public use through the USDA National Nutrient Database for Standard Reference (SR) ( http://www.ars.usda.gov/ba/bhnrc/ndl ). In 1997 the NDL initiated the National Food and Nutrient Analysis Program (NFNAP) to update and expand its food-composition data. The program included: 1) nationwide probability-based sampling of foods; 2) central processing and archiving of food samples; 3) analysis of food components at commercial, government, and university laboratories; 4) incorporation of new analytical data into the NDBS; and 5) dissemination of these data to the scientific community. A key feature and strength of the NFNAP was a rigorous quality-control program that enabled independent verification of the accuracy and precision of analytical results. Custom-made food-control composites and/or commercially available certified reference materials were sent to the laboratories, blinded, with the samples. Data for these materials were essential to ongoing monitoring of analytical work, to identify and resolve suspected analytical problems, to ensure the accuracy and precision of results for the NFNAP food samples.
Realizing Scientific Methods for Cyber Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, Thomas E.; Manz, David O.; Edgar, Thomas W.
There is little doubt among cyber security researchers about the lack of scientific rigor that underlies much of the literature. The issues are manifold and are well documented. Further complicating the problem are insufficient scientific methods to address these issues. Cyber security melds man and machine: we inherit the challenges of computer science, sociology, psychology, and many other fields and create new ones where these fields interface. In this paper we detail a partial list of challenges imposed by rigorous science and survey how other sciences have tackled them, in the hope of applying a similar approach to cyber security science. This paper is by no means comprehensive: its purpose is to foster discussion in the community on how we can improve rigor in cyber security science.
Chan, Leighton; Heinemann, Allen W; Roberts, Jason
2014-01-01
Note from the AJOT Editor-in-Chief: Since 2010, the American Journal of Occupational Therapy (AJOT) has adopted reporting standards based on the Consolidated Standards of Reporting Trials (CONSORT) Statement and American Psychological Association (APA) guidelines in an effort to publish transparent clinical research that can be easily evaluated for methodological and analytical rigor (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008; Moher, Schulz, & Altman, 2001). AJOT has now joined 28 other major rehabilitation and disability journals in a collaborative initiative to enhance clinical research reporting standards through adoption of the EQUATOR Network reporting guidelines, described below. Authors will now be required to use these guidelines in the preparation of manuscripts that will be submitted to AJOT. Reviewers will also use these guidelines to evaluate the quality and rigor of all AJOT submissions. By adopting these standards we hope to further enhance the quality and clinical applicability of articles to our readers. Copyright © 2014 by the American Occupational Therapy Association, Inc.
Nodal surfaces and interdimensional degeneracies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loos, Pierre-François, E-mail: pf.loos@anu.edu.au; Bressanini, Dario, E-mail: dario.bressanini@uninsubria.it
2015-06-07
The aim of this paper is to shed light on the topology and properties of the nodes (i.e., the zeros of the wave function) in electronic systems. Using the "electrons on a sphere" model, we study the nodes of two-, three-, and four-electron systems in various ferromagnetic configurations (sp, p², sd, pd, p³, sp², and sp³). In some particular cases (sp, p², sd, pd, and p³), we rigorously prove that the non-interacting wave function has the same nodes as the exact (yet unknown) wave function. The number of atomic and molecular systems for which the exact nodes are known analytically is very limited, and we show here that this peculiar feature can be attributed to interdimensional degeneracies. Although we have not been able to prove it rigorously, we conjecture that the nodes of the non-interacting wave function for the sp³ configuration are exact.
Including Magnetostriction in Micromagnetic Models
NASA Astrophysics Data System (ADS)
Conbhuí, Pádraig Ó.; Williams, Wyn; Fabian, Karl; Nagy, Lesleis
2016-04-01
The magnetic anomalies that identify crustal spreading are predominantly recorded by basalts formed at the mid-ocean ridges, whose magnetic signals are dominated by iron-titanium oxides (Fe3-xTixO4), so-called "titanomagnetites", of which the Fe2.4Ti0.6O4 (TM60) phase is the most common. With sufficient quantities of titanium present, these minerals exhibit strong magnetostriction. To date, models of these grains in the pseudo-single domain (PSD) range have failed to accurately account for this effect. In particular, a popular analytic treatment provided by Kittel (1949), which describes the magnetostrictive energy as an effective increase of the anisotropy constant, can produce unphysical strains for non-uniform magnetizations. I will present a rigorous approach based on work by Brown (1966) and by Kroner (1958) for including magnetostriction in micromagnetic codes, suitable for modelling hysteresis loops and finding remanent states in the PSD regime. Preliminary results suggest that the more rigorously defined micromagnetic models exhibit higher coercivities and extended single-domain ranges when compared to more simplistic approaches.
Stability of Viscous St. Venant Roll Waves: From Onset to Infinite Froude Number Limit
NASA Astrophysics Data System (ADS)
Barker, Blake; Johnson, Mathew A.; Noble, Pascal; Rodrigues, L. Miguel; Zumbrun, Kevin
2017-02-01
We study the spectral stability of roll wave solutions of the viscous St. Venant equations modeling inclined shallow water flow, both at onset in the small Froude number or "weakly unstable" limit F → 2^+ and for general values of the Froude number F, including the limit F → +∞. In the former, F → 2^+, limit, the shallow water equations are formally approximated by a Korteweg-de Vries/Kuramoto-Sivashinsky (KdV-KS) equation that is a singular perturbation of the standard Korteweg-de Vries (KdV) equation modeling horizontal shallow water flow. Our main analytical result is to rigorously validate this formal limit, showing that stability as F → 2^+ is equivalent to stability of the corresponding KdV-KS waves in the KdV limit. Together with recent results obtained for KdV-KS by Johnson-Noble-Rodrigues-Zumbrun and Barker, this gives not only the first rigorous verification of stability for any single viscous St. Venant roll wave, but a complete classification of stability in the weakly unstable limit. In the remainder of the paper, we investigate numerically and analytically the evolution of the stability diagram as the Froude number increases to infinity. Notably, we find a transition at around F = 2.3 from weakly unstable to different, large-F behavior, with stability determined by simple power-law relations. The latter stability criteria are potentially useful in hydraulic engineering applications, for which typically 2.5 ≤ F ≤ 6.0.
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Bedner, Mary
2017-06-01
Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
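To make the GUM Supplement 1 (MC) approach mentioned above concrete, the following is a minimal sketch of Monte Carlo uncertainty propagation through a hypothetical single-point calibration measurement equation; the measurand, input values, and uncertainties are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # number of Monte Carlo draws

# Hypothetical measurement equation for a chromatographic assay:
# c_sample = (A_sample / A_standard) * c_standard
# Inputs are modeled as independent normal distributions (illustrative values only).
A_sample = rng.normal(1.052e5, 0.8e3, N)    # peak area of the sample
A_standard = rng.normal(1.010e5, 0.7e3, N)  # peak area of the calibration standard
c_standard = rng.normal(25.00, 0.05, N)     # standard concentration, ug/mL

c_sample = (A_sample / A_standard) * c_standard

# Report the MC estimate, standard uncertainty, and a 95 % coverage interval.
print(f"c_sample   = {c_sample.mean():.3f} ug/mL")
print(f"u(c_sample) = {c_sample.std(ddof=1):.3f} ug/mL")
lo, hi = np.percentile(c_sample, [2.5, 97.5])
print(f"95 % interval: [{lo:.3f}, {hi:.3f}] ug/mL")
```

A full bottom-up/top-down analysis of the kind described above would additionally include random effects for repeated samples and calibrants (e.g., via a Bayesian hierarchical model); this sketch only shows the propagation step.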
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
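The "regular" versus "severe" patient example described above can be written as a small linear program. The sketch below is a hypothetical instance with invented benefit, time, and cost coefficients, solved with a standard LP routine; it illustrates the structure of such a model, not the report's actual numbers.

```python
from scipy.optimize import linprog

# Choose how many "regular" (x1) and "severe" (x2) patients to treat to maximize
# total health benefit subject to clinician-time and budget constraints.
benefit = [-2.0, -5.0]            # benefit per patient (negated because linprog minimizes)
A_ub = [[1.0, 4.0],               # clinician hours required per patient
        [100.0, 200.0]]           # cost per patient
b_ub = [40.0, 3000.0]             # available hours, available budget

res = linprog(c=benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = res.x
print(f"treat ~{x1:.1f} regular and ~{x2:.1f} severe patients; total benefit ~ {-res.fun:.1f}")
```

With these illustrative coefficients both constraints bind and the optimum mixes the two patient types, which is the kind of trade-off the graphical example is meant to convey.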
Analytical formulation of lunar cratering asymmetries
NASA Astrophysics Data System (ADS)
Wang, Nan; Zhou, Ji-Lin
2016-10-01
Context. The cratering asymmetry of a bombarded satellite is related to both its orbit and impactors. The inner solar system impactor populations, that is, the main-belt asteroids (MBAs) and the near-Earth objects (NEOs), have dominated during the late heavy bombardment (LHB) and ever since, respectively. Aims: We formulate the lunar cratering distribution and verify the cratering asymmetries generated by the MBAs as well as the NEOs. Methods: Based on a planar model that excludes the terrestrial and lunar gravitations on the impactors and assuming that the impactor encounter speed with Earth v_enc is higher than the lunar orbital speed v_M, we rigorously integrated the lunar cratering distribution and derived its approximation to the first order of v_M/v_enc. Numerical simulations of lunar bombardment by the MBAs during the LHB were performed with an Earth-Moon distance a_M = 20-60 Earth radii in five cases. Results: The analytical model directly proves the existence of a leading/trailing asymmetry and the absence of a near/far asymmetry. The approximate form of the leading/trailing asymmetry is (1 + A_1 cos β), which decreases as the apex distance β increases. The numerical simulations show evidence of a pole/equator asymmetry as well as the leading/trailing asymmetry, and the former is empirically described as (1 + A_2 cos 2ϕ), which decreases as the latitude modulus |ϕ| increases. The amplitudes A_1 and A_2 are reliable measurements of asymmetries. Our analysis explicitly indicates the quantitative relations between the cratering distribution and the bombardment conditions (impactor properties and the lunar orbital status), such as A_1 ∝ v_M/v_enc, resulting in a method for reproducing the bombardment conditions through measuring the asymmetry. Mutual confirmation between the analytical model and the numerical simulations is found in terms of the cratering distribution and its variation with a_M. Estimates of A_1 for crater density distributions generated by the MBAs and the NEOs are 0.101-0.159 and 0.117, respectively.
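To show how the leading/trailing amplitude translates into a crater-density contrast, the sketch below evaluates the approximate distribution (1 + A_1 cos β) for an assumed amplitude; the value of A_1 is hypothetical, chosen only to be of the order reported above.

```python
import numpy as np

A1 = 0.12  # assumed leading/trailing amplitude (illustrative)

def relative_density(beta_deg, A1):
    """Relative crater density as a function of apex distance beta (degrees)."""
    return 1.0 + A1 * np.cos(np.radians(beta_deg))

for beta in (0, 45, 90, 135, 180):
    print(f"beta = {beta:3d} deg  ->  relative density = {relative_density(beta, A1):.3f}")

# Apex-to-antapex contrast implied by the (1 + A1 cos beta) form:
print("apex/antapex ratio:", (1 + A1) / (1 - A1))
```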
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L
Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool, and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
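As a generic illustration of the metamorphic-testing idea (not the authors' tooling), the sketch below integrates a basic SIR compartmental model and checks two metamorphic relations: the total population is conserved, and raising the transmission rate should not lower the epidemic peak. The model form, parameter values, and relations are assumptions chosen for illustration.

```python
import numpy as np

def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.01):
    """Forward-Euler integration of the closed SIR model; returns (S, I, R) arrays."""
    steps = int(days / dt)
    S, I, R = np.empty(steps), np.empty(steps), np.empty(steps)
    S[0], I[0], R[0] = s0, i0, r0
    N = s0 + i0 + r0
    for t in range(1, steps):
        new_inf = beta * S[t-1] * I[t-1] / N * dt
        new_rec = gamma * I[t-1] * dt
        S[t] = S[t-1] - new_inf
        I[t] = I[t-1] + new_inf - new_rec
        R[t] = R[t-1] + new_rec
    return S, I, R

S, I, R = simulate_sir(beta=0.3, gamma=0.1, s0=999, i0=1, r0=0, days=200)
S2, I2, R2 = simulate_sir(beta=0.6, gamma=0.1, s0=999, i0=1, r0=0, days=200)

# Metamorphic relation 1: the total population stays constant over time.
assert np.allclose(S + I + R, 1000)
# Metamorphic relation 2: a higher transmission rate must not reduce the epidemic peak.
assert I2.max() >= I.max()
print("metamorphic checks passed; peaks:", round(I.max(), 1), round(I2.max(), 1))
```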
SDN solutions for switching dedicated long-haul connections: Measurements and comparative analysis
Rao, Nageswara S. V.
2016-01-01
We consider a scenario of two sites connected over a dedicated, long-haul connection that must quickly fail over in response to degradations in host-to-host application performance. The traditional layer-2/3 hot-standby fail-over solutions do not adequately address the variety of application degradations, and more recent single-controller Software Defined Networking (SDN) solutions are not effective for long-haul connections. We present two methods for such a path fail-over using OpenFlow-enabled switches: (a) a light-weight method that utilizes host scripts to monitor application performance and the dpctl API for switching, and (b) a generic method that uses two OpenDaylight (ODL) controllers and REST interfaces. For both methods, the restoration dynamics of applications contain significant statistical variations due to the complexities of controllers, northbound interfaces, and switches; these, together with the wide variety of vendor implementations, complicate the choice among such solutions. We develop the impulse-response method based on regression functions of performance parameters to provide a rigorous and objective comparison of different solutions. We describe testing results of the two proposed methods, using TCP throughput and connection RTT as the main parameters, over a testbed consisting of HP and Cisco switches connected over long-haul connections emulated in hardware by ANUE devices. Lastly, the combination of analytical and experimental results demonstrates that the dpctl method responds seconds faster than the ODL method on average, even though both methods eventually restore the original TCP throughput.
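In the spirit of the impulse-response comparison described above (though much simpler than the paper's regression framework), one can fit a recovery curve to throughput samples collected after a fail-over event and compare the fitted time constants of two solutions. The sketch below uses synthetic throughput traces and an exponential-recovery regression; the data, model form, and parameter values are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, T_final, tau):
    """Exponential restoration of TCP throughput after a path fail-over at t = 0."""
    return T_final * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 121)  # seconds after fail-over

# Synthetic throughput traces (Gb/s) for two hypothetical fail-over methods.
fast = recovery(t, 9.4, 4.0) + rng.normal(0, 0.2, t.size)   # faster-responding method
slow = recovery(t, 9.4, 12.0) + rng.normal(0, 0.2, t.size)  # slower-responding method

for name, y in [("method A", fast), ("method B", slow)]:
    (T_fit, tau_fit), _ = curve_fit(recovery, t, y, p0=[9.0, 5.0])
    print(f"{name}: steady throughput ~ {T_fit:.2f} Gb/s, restoration time constant ~ {tau_fit:.1f} s")
```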
TreeTrimmer: a method for phylogenetic dataset size reduction.
Maruyama, Shinichiro; Eveleigh, Robert J M; Archibald, John M
2013-04-12
With rapid advances in genome sequencing and bioinformatics, it is now possible to generate phylogenetic trees containing thousands of operational taxonomic units (OTUs) from a wide range of organisms. However, use of rigorous tree-building methods on such large datasets is prohibitive and manual 'pruning' of sequence alignments is time consuming and raises concerns over reproducibility. There is a need for bioinformatic tools with which to objectively carry out such pruning procedures. Here we present 'TreeTrimmer', a bioinformatics procedure that removes unnecessary redundancy in large phylogenetic datasets, alleviating the size effect on more rigorous downstream analyses. The method identifies and removes user-defined 'redundant' sequences, e.g., orthologous sequences from closely related organisms and 'recently' evolved lineage-specific paralogs. Representative OTUs are retained for more rigorous re-analysis. TreeTrimmer reduces the OTU density of phylogenetic trees without sacrificing taxonomic diversity while retaining the original tree topology, thereby speeding up downstream computer-intensive analyses, e.g., Bayesian and maximum likelihood tree reconstructions, in a reproducible fashion.
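The following is a minimal sketch, not the TreeTrimmer implementation, of the kind of pruning rule described above: within each user-defined group (e.g., OTUs from closely related organisms), keep only a fixed number of representatives and discard the rest. The OTU names and grouping rule are hypothetical.

```python
from collections import defaultdict

def trim_redundant(otus, group_of, keep_per_group=1):
    """Keep at most `keep_per_group` representative OTUs per group, preserving input order."""
    kept, counts = [], defaultdict(int)
    for otu in otus:
        g = group_of(otu)
        if counts[g] < keep_per_group:
            kept.append(otu)
            counts[g] += 1
    return kept

# Hypothetical OTU labels of the form "Genus_species|seqID"; group by genus.
otus = ["Arabidopsis_thaliana|a1", "Arabidopsis_lyrata|a2", "Oryza_sativa|o1",
        "Oryza_glaberrima|o2", "Chlamydomonas_reinhardtii|c1"]
trimmed = trim_redundant(otus, group_of=lambda s: s.split("_")[0])
print(trimmed)  # one representative per genus
```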
Weuve, Jennifer; Proust-Lima, Cécile; Power, Melinda C.; Gross, Alden L.; Hofer, Scott M.; Thiébaut, Rodolphe; Chêne, Geneviève; Glymour, M. Maria; Dufouil, Carole
2015-01-01
Clinical and population research on dementia and related neurologic conditions, including Alzheimer’s disease, faces several unique methodological challenges. Progress to identify preventive and therapeutic strategies rests on valid and rigorous analytic approaches, but the research literature reflects little consensus on “best practices.” We present findings from a large scientific working group on research methods for clinical and population studies of dementia, which identified five categories of methodological challenges as follows: (1) attrition/sample selection, including selective survival; (2) measurement, including uncertainty in diagnostic criteria, measurement error in neuropsychological assessments, and practice or retest effects; (3) specification of longitudinal models when participants are followed for months, years, or even decades; (4) time-varying measurements; and (5) high-dimensional data. We explain why each challenge is important in dementia research and how it could compromise the translation of research findings into effective prevention or care strategies. We advance a checklist of potential sources of bias that should be routinely addressed when reporting dementia research. PMID:26397878
Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei; ...
2017-03-02
A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.
Statistical properties of multi-theta polymer chains
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2018-04-01
We study statistical properties of polymer chains with complex structures whose chemical connectivities are expressed by graphs. The multi-theta curve, in which two branch points are connected by m subchains, is one of the simplest graphs having closed paths, i.e. loops. We denote it by θ_m; for m = 2 it reduces to a ring. We derive analytically the pair distribution function and the scattering function for θ_m-shaped polymer chains consisting of m Gaussian random walks of n steps. Surprisingly, it is shown rigorously that the mean-square radius of gyration for the Gaussian θ_m-shaped polymer chain does not depend on the number m of subchains if each subchain has the same fixed number of steps. For m = 3 we show the Kratky plot for the theta-shaped polymer chain consisting of hard cylindrical segments, computed by the Monte Carlo method including reflection at trivalent vertices.
Determining if disease management saves money: an introduction to meta-analysis.
Linden, Ariel; Adams, John L
2007-06-01
Disease management (DM) programmes have long been promoted as a major medical cost-saving mechanism, even though the scant research that exists on the topic has provided conflicting results. In a 2004 literature review, the Congressional Budget Office stated that 'there is insufficient evidence to conclude that disease management programs can generally reduce the overall cost of health care services'. To address this question more accurately, a meta-analysis was warranted. Meta-analysis is the quantitative technique used to pool the results of many studies on the same topic and summarize them statistically. This method is also quite suitable for individual DM firms to assess whether their programmes are effective at the aggregate level. This paper describes the elements of a rigorous meta-analytic process and discusses potential biases. A hypothetical DM organization is then evaluated with a specific emphasis on medical cost-savings, simulating a case in which different populations are served, evaluation methodologies are employed, and diseases are managed.
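To make the pooling step concrete, here is a minimal fixed-effect (inverse-variance) meta-analysis sketch for per-member-per-month cost differences from several hypothetical DM programme evaluations; the effect sizes and standard errors are invented for illustration, and a full analysis would also assess heterogeneity and consider random-effects models, as the paper discusses.

```python
import math

# Hypothetical study results: (cost difference in $ per member per month, standard error).
studies = [(-12.0, 6.0), (-4.5, 3.0), (2.0, 5.0), (-8.0, 4.0)]

weights = [1.0 / se**2 for _, se in studies]                      # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.2f} $/member/month")
print(f"95% CI = [{pooled - 1.96*se_pooled:.2f}, {pooled + 1.96*se_pooled:.2f}]")
print(f"z = {pooled / se_pooled:.2f}")
```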
Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.
Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T
2013-12-06
The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.
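As a toy illustration of the simulation idea (far simpler than Mspire-Simulator's machine-learned models), the sketch below generates one LC-MS feature as a Gaussian elution profile times a Gaussian m/z profile with counting and baseline noise over a small m/z-by-retention-time grid; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

rt = np.linspace(0, 60, 300)        # retention time, s
mz = np.linspace(500.0, 500.2, 80)  # m/z window

# Hypothetical feature: apex at rt = 30 s, monoisotopic m/z = 500.08.
elution = np.exp(-0.5 * ((rt - 30.0) / 3.0) ** 2)          # chromatographic peak shape
mass_peak = np.exp(-0.5 * ((mz - 500.08) / 0.004) ** 2)    # profile-mode m/z peak shape
feature = 1e6 * np.outer(elution, mass_peak)               # noiseless intensity surface

noisy = rng.poisson(feature).astype(float)                 # counting (shot) noise
noisy = np.clip(noisy + rng.normal(0, 50.0, noisy.shape), 0, None)  # baseline/electronic noise

print("simulated feature grid:", noisy.shape, "max intensity:", noisy.max())
```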
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei
A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.
Quality of herbal medicines: challenges and solutions.
Zhang, Junhua; Wider, Barbara; Shang, Hongcai; Li, Xuemei; Ernst, Edzard
2012-01-01
The popularity of herbal medicines has risen worldwide. This increase in usage renders safety issues important. Many adverse events of herbal medicines can be attributed to the poor quality of the raw materials or the finished products. Different types of herbal medicines are associated with different problems. Quality issues of herbal medicines can be classified into two categories: external and internal. In this review, external issues including contamination (e.g. toxic metals, pesticide residues and microbes), adulteration and misidentification are detailed. Complexity and non-uniformity of the ingredients in herbal medicines are the internal issues affecting the quality of herbal medicines. Solutions to the raised problems are discussed. The rigorous implementation of Good Agricultural and Collection Practices (GACP) and Good Manufacturing Practices (GMP) would undoubtedly reduce the risk of external issues. Through the use of modern analytical methods and pharmaceutical techniques, previously unsolved internal issues have become solvable. Standard herbal products can be manufactured from standard herbal extracts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Studies on the coupling transformer to improve the performance of microwave ion source.
Misra, Anuraag; Pandit, V S
2014-06-01
A 2.45 GHz microwave ion source has been developed and installed at the Variable Energy Cyclotron Centre to produce high intensity proton beam. It is operational and has already produced more than 12 mA of proton beam with just 350 W of microwave power. In order to optimize the coupling of microwave power to the plasma, a maximally flat matching transformer has been used. In this paper, we first describe an analytical method to design the matching transformer and then present the results of rigorous simulation performed using ANSYS HFSS code to understand the effect of different parameters on the transformed impedance and reflection and transmission coefficients. Based on the simulation results, we have chosen two different coupling transformers which are double ridged waveguides with ridge widths of 24 mm and 48 mm. We have fabricated these transformers and performed experiments to study the influence of these transformers on the coupling of microwave to plasma and extracted beam current from the ion source.
Studies on the coupling transformer to improve the performance of microwave ion source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Misra, Anuraag, E-mail: anuraag@vecc.gov.in; Pandit, V. S., E-mail: pandit@vecc.gov.in, E-mail: vspandit12@gmail.com
A 2.45 GHz microwave ion source has been developed and installed at the Variable Energy Cyclotron Centre to produce high intensity proton beam. It is operational and has already produced more than 12 mA of proton beam with just 350 W of microwave power. In order to optimize the coupling of microwave power to the plasma, a maximally flat matching transformer has been used. In this paper, we first describe an analytical method to design the matching transformer and then present the results of rigorous simulation performed using ANSYS HFSS code to understand the effect of different parameters on the transformed impedance and reflection and transmission coefficients. Based on the simulation results, we have chosen two different coupling transformers which are double ridged waveguides with ridge widths of 24 mm and 48 mm. We have fabricated these transformers and performed experiments to study the influence of these transformers on the coupling of microwave to plasma and extracted beam current from the ion source.
Respect for cultural diversity and the empirical turn in bioethics: a plea for caution.
Mbugua, Karori
2012-01-01
In the last two decades, there have been numerous calls for a culturally sensitive bioethics. At the same time, bioethicists have become increasingly involved in empirical research, which is a sign of dissatisfaction with the analytic methods of traditional bioethics. In this article, I will argue that although these developments have broadened and enriched the field of bioethics, they can easily be construed to be an endorsement of ethical relativism, especially by those not well grounded in academic moral philosophy. I maintain that bioethicists must resist the temptation of moving too quickly from cultural relativism to ethical relativism and from empirical findings to normative conclusions. Indeed, anyone who reasons in this way is guilty of the naturalistic fallacy. I conclude by saying that properly conceptualized, empirical research and sensitivity to cultural diversity should give rise to objective rational discourse and criticism and not indiscriminate tolerance of every possible moral practice. Bioethics must remain a normative discipline that is characterized by rigorous argumentation.
O’Brien, Katherine L.; Deloria-Knoll, Maria; Murdoch, David R.; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Baggett, Henry C.; Brooks, W. Abdullah; Howie, Stephen R. C.; Kotloff, Karen L.; Madhi, Shabir A.; Maloney, Susan A.; Sow, Samba; Thea, Donald M.; Scott, J. Anthony
2012-01-01
The Pneumonia Etiology Research for Child Health (PERCH) project is a 7-country, standardized, comprehensive evaluation of the etiologic agents causing severe pneumonia in children from developing countries. During previous etiology studies, between one-quarter and one-third of patients failed to yield an obvious etiology; PERCH will employ and evaluate previously unavailable innovative, more sensitive diagnostic techniques. Innovative and rigorous epidemiologic and analytic methods will be used to establish the causal association between presence of potential pathogens and pneumonia. By strategic selection of study sites that are broadly representative of regions with the greatest burden of childhood pneumonia, PERCH aims to provide data that reflect the epidemiologic situation in developing countries in 2015, using pneumococcal and Haemophilus influenzae type b vaccines. PERCH will also address differences in host, environmental, and/or geographic factors that might determine pneumonia etiology and, by preserving specimens, will generate a resource for future research and pathogen discovery. PMID:22403238
Levine, Orin S; O'Brien, Katherine L; Deloria-Knoll, Maria; Murdoch, David R; Feikin, Daniel R; DeLuca, Andrea N; Driscoll, Amanda J; Baggett, Henry C; Brooks, W Abdullah; Howie, Stephen R C; Kotloff, Karen L; Madhi, Shabir A; Maloney, Susan A; Sow, Samba; Thea, Donald M; Scott, J Anthony
2012-04-01
The Pneumonia Etiology Research for Child Health (PERCH) project is a 7-country, standardized, comprehensive evaluation of the etiologic agents causing severe pneumonia in children from developing countries. During previous etiology studies, between one-quarter and one-third of patients failed to yield an obvious etiology; PERCH will employ and evaluate previously unavailable innovative, more sensitive diagnostic techniques. Innovative and rigorous epidemiologic and analytic methods will be used to establish the causal association between presence of potential pathogens and pneumonia. By strategic selection of study sites that are broadly representative of regions with the greatest burden of childhood pneumonia, PERCH aims to provide data that reflect the epidemiologic situation in developing countries in 2015, using pneumococcal and Haemophilus influenzae type b vaccines. PERCH will also address differences in host, environmental, and/or geographic factors that might determine pneumonia etiology and, by preserving specimens, will generate a resource for future research and pathogen discovery.
Tuomivaara, Sami T; Yaoi, Katsuro; O'Neill, Malcolm A; York, William S
2015-01-30
Xyloglucans are structurally complex plant cell wall polysaccharides that are involved in cell growth and expansion, energy metabolism, and signaling. Determining the structure-function relationships of xyloglucans would benefit from the availability of a comprehensive and structurally diverse collection of rigorously characterized xyloglucan oligosaccharides. Here, we present a workflow for the semi-preparative scale generation and purification of neutral and acidic xyloglucan oligosaccharides using a combination of enzymatic and chemical treatments and size-exclusion chromatography. Twenty-six of these oligosaccharides were purified to near homogeneity and their structures validated using a combination of matrix-assisted laser desorption/ionization mass spectrometry, high-performance anion exchange chromatography, and 1H nuclear magnetic resonance spectroscopy. Mass spectrometry and analytical chromatography were compared as methods for xyloglucan oligosaccharide quantification. 1H chemical shifts were assigned using two-dimensional correlation spectroscopy. A comprehensive update of the nomenclature describing xyloglucan side-chain structures is provided for reference. Copyright © 2014 Elsevier Ltd. All rights reserved.
Respect for cultural diversity and the empirical turn in bioethics: a plea for caution
Mbugua, Karori
2012-01-01
In the last two decades, there have been numerous calls for a culturally sensitive bioethics. At the same time, bioethicists have become increasingly involved in empirical research, which is a sign of dissatisfaction with the analytic methods of traditional bioethics. In this article, I will argue that although these developments have broadened and enriched the field of bioethics, they can easily be construed to be an endorsement of ethical relativism, especially by those not well grounded in academic moral philosophy. I maintain that bioethicists must resist the temptation of moving too quickly from cultural relativism to ethical relativism and from empirical findings to normative conclusions. Indeed, anyone who reasons in this way is guilty of the naturalistic fallacy. I conclude by saying that properly conceptualized, empirical research and sensitivity to cultural diversity should give rise to objective rational discourse and criticism and not indiscriminate tolerance of every possible moral practice. Bioethics must remain a normative discipline that is characterized by rigorous argumentation. PMID:23908754
Stability switches of arbitrary high-order consensus in multiagent networks with time delays.
Yang, Bo
2013-01-01
High-order consensus seeking, in which individual high-order dynamic agents share a consistent view of the objectives and the world in a distributed manner, finds its potential broad applications in the field of cooperative control. This paper presents stability switches analysis of arbitrary high-order consensus in multiagent networks with time delays. By employing a frequency domain method, we explicitly derive analytical equations that clarify a rigorous connection between the stability of general high-order consensus and the system parameters such as the network topology, communication time-delays, and feedback gains. Particularly, our results provide a general and a fairly precise notion of how increasing communication time-delay causes the stability switches of consensus. Furthermore, under communication constraints, the stability and robustness problems of consensus algorithms up to third order are discussed in details to illustrate our central results. Numerical examples and simulation results for fourth-order consensus are provided to demonstrate the effectiveness of our theoretical results.
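As a concrete low-order point of reference (a classical special case, not this paper's general high-order result), frequency-domain analysis of first-order consensus with a uniform constant delay gives a critical delay of π/(2 λ_max(L)), where λ_max(L) is the largest Laplacian eigenvalue of the undirected communication graph. The sketch below computes this bound for a hypothetical small network.

```python
import numpy as np

# Hypothetical undirected communication graph on 4 agents (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
lam_max = np.linalg.eigvalsh(L).max()   # largest Laplacian eigenvalue

tau_crit = np.pi / (2.0 * lam_max)      # critical uniform delay for first-order consensus
print(f"lambda_max = {lam_max:.3f}, critical delay ~ {tau_crit:.3f} (time units)")
```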
Glycoconjugate Vaccines: The Regulatory Framework.
Jones, Christopher
2015-01-01
Most vaccines, including the currently available glycoconjugate vaccines, are administered to healthy infants, to prevent future disease. The safety of a prospective vaccine is a key prerequisite for approval. Undesired side effects would not only have the potential to damage the individual infant but also lead to a loss of confidence in the respective vaccine-or vaccines in general-on a population level. Thus, regulatory requirements, particularly with regard to safety, are extremely rigorous. This chapter highlights regulatory aspects on carbohydrate-based vaccines with an emphasis on analytical approaches to ensure the consistent quality of successive manufacturing lots.
Spontaneous oscillations in microfluidic networks
NASA Astrophysics Data System (ADS)
Case, Daniel; Angilella, Jean-Regis; Motter, Adilson
2017-11-01
Precisely controlling flows within microfluidic systems is often difficult, which typically results in systems being heavily reliant on numerous external pumps and computers. Here, I present a simple microfluidic network that exhibits flow rate switching, bistability, and spontaneous oscillations controlled by a single pressure. That is, by solely changing the driving pressure, it is possible to switch between an oscillating and a steady flow state. Such functionality does not rely on external hardware and may even serve as an on-chip memory or timing mechanism. I use an analytic model and rigorous fluid dynamics simulations to show these results.
Effective grating theory for resonance domain surface-relief diffraction gratings.
Golub, Michael A; Friesem, Asher A
2005-06-01
An effective grating model, which generalizes effective-medium theory to the case of resonance domain surface-relief gratings, is presented. In addition to the zero order, it takes into account the first diffraction order, which obeys the Bragg condition. Modeling the surface-relief grating as an effective grating with two diffraction orders provides closed-form analytical relationships between efficiency and grating parameters. The aspect ratio, the grating period, and the required incidence angle that would lead to high diffraction efficiencies are predicted for TE and TM polarization and verified by rigorous numerical calculations.
Diffusion in Deterministic Interacting Lattice Systems
NASA Astrophysics Data System (ADS)
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function, we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
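For readers unfamiliar with the Green-Kubo route mentioned above, here is a generic sketch that estimates a transport coefficient by integrating a current time-autocorrelation function sampled from a trajectory; the synthetic AR(1) "current" signal and sampling parameters are assumptions, not the paper's lattice dynamics, and the exact value for this surrogate is printed for comparison.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stationary "current" signal: an AR(1) process standing in for j(t).
dt, n_steps, phi, sigma = 0.01, 100_000, 0.98, 1.0
j = np.zeros(n_steps)
for t in range(1, n_steps):
    j[t] = phi * j[t - 1] + sigma * rng.normal()

def normalized_autocorr(x, max_lag):
    """Normalized time-autocorrelation C(k) for lags k = 0 .. max_lag-1."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / ((len(x) - k) * var)
                     for k in range(max_lag)])

C = normalized_autocorr(j, max_lag=2000)

# Green-Kubo: the transport coefficient is the time integral of <j(0) j(t)>.
D = np.var(j) * np.sum(C) * dt
print(f"Green-Kubo estimate  ~ {D:.2f}")
print(f"exact value for this AR(1) surrogate ~ {sigma**2 * dt / ((1 - phi)**2 * (1 + phi)):.2f}")
```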
Light focusing using epsilon-near-zero metamaterials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Weiren, E-mail: weiren.zhu@monash.edu; Premaratne, Malin; Si, Li-Ming, E-mail: lms@bit.edu.cn
2013-11-15
We present a strategy for focusing light using epsilon-near-zero metamaterials with an embedded dielectric cylinder. The focusing mechanism is analytically investigated, and its accuracy is substantiated by rigorous full-wave simulations. It is found that the focusing intensity depends strongly on the embedded medium and its size, and that the magnetic field amplitude of the focused beam can reach as high as 98.2 times the incident field. Owing to its versatility, the proposed light focusing system is sure to find applications in fields such as bio-sensing and nonlinear optics.
Supervised learning of probability distributions by neural networks
NASA Technical Reports Server (NTRS)
Baum, Eric B.; Wilczek, Frank
1988-01-01
Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
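The modification described above amounts to training on a log-likelihood (cross-entropy) objective instead of squared error when outputs are interpreted as probabilities. The sketch below contrasts the two gradients for a single sigmoid output unit; it is a generic illustration with made-up inputs and weights, not the original authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example with target probability t and pre-activation z = w . x.
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3])
t = 1.0

z = w @ x
y = sigmoid(z)

# Squared-error objective E = 0.5 * (y - t)^2  ->  dE/dw = (y - t) * y * (1 - y) * x
grad_sq = (y - t) * y * (1 - y) * x
# Negative log-likelihood L = -[t log y + (1 - t) log(1 - y)]  ->  dL/dw = (y - t) * x
grad_ll = (y - t) * x

print("output probability:", round(float(y), 4))
print("squared-error gradient:  ", grad_sq)
print("log-likelihood gradient: ", grad_ll)
```

The log-likelihood gradient does not carry the y(1 - y) factor, so learning does not stall when the unit saturates at a confidently wrong output, which is one practical reason the probabilistic formulation behaves better.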
Droplet Combustion in a Slow Convective Flow
NASA Technical Reports Server (NTRS)
Nayagam, V.; Hicks, M. C.; Ackerman, M.; Haggard, J. B., Jr.; Williams, F. A.
2003-01-01
The influences of slow convective flow on droplet combustion, particularly in the low Reynolds number regime, have received very little attention in the past. Most studies in the literature are semi-empirical in nature and they were motivated by spray combustion applications in the moderate to high Reynolds number regime. None of the limited number of fundamental theoretical studies applicable to low Reynolds numbers have been verified by rigorous experimental data. Moreover, many unsteady phenomena associated with fluid-dynamic unsteadiness, such as impulsive starting or stopping of a burning droplet, or flow acceleration/deceleration effects, have not been investigated despite their importance in practical applications. In this study we investigate the effects of slow convection on droplet burning dynamics both experimentally and theoretically. The experimental portion of the study involves both ground-based experiments in the drop towers and future flight experiments on board the International Space Station. Heptane and methanol are used as test fuels, and this choice complements the quiescent-environment studies of the Droplet Combustion Experiment (DCE). An analytical model that employs the method of matched asymptotic expansions and uses the ratio of the convective velocity far from the droplet to the Stefan velocity at its surface as the small parameter for expansion has also been developed as a part of this investigation. Results from the ground-based experiments and comparison with the analytical model are presented in this report.
Arnich, Nathalie; Maignien, Thomas; Biré, Ronel
2018-01-01
The neurotoxin β-N-methylamino-l-alanine (BMAA), a non-protein amino acid produced by terrestrial and aquatic cyanobacteria and by micro-algae, has been suggested to play a role as an environmental factor in the neurodegenerative disease Amyotrophic Lateral Sclerosis-Parkinsonism-Dementia complex (ALS-PDC). The ubiquitous presence of BMAA in aquatic environments and in organisms along the food chain potentially makes it a public health concern. However, the BMAA-associated human health risk remains difficult to rigorously assess due to analytical challenges associated with the detection and quantification of BMAA and its natural isomers, 2,4-diaminobutyric acid (DAB), β-amino-N-methyl-alanine (BAMA) and N-(2-aminoethyl) glycine (AEG). This systematic review, reporting the current knowledge on the presence of BMAA and its isomers in aquatic environments and human food sources, was based on a selection and a score numbering of the scientific literature according to various qualitative and quantitative criteria concerning the chemical analytical methods used. Results from the best-graded studies show that marine bivalves are to date the matrix containing the highest amount of BMAA, far more than most fish muscles, but with an exception for shark cartilage. This review discusses the available data in terms of their use for human health risk assessment and identifies knowledge gaps requiring further investigations. PMID:29443939
Liang, Ruoyu; Song, Shuai; Shi, Yajing; Shi, Yajuan; Lu, Yonglong; Zheng, Xiaoqi; Xu, Xiangbo; Wang, Yurong; Han, Xuesong
2017-12-15
The redundancy or deficiency of selenium in soils can cause adverse effects on crops and even threaten human health. It is therefore necessary to assess selenium resources with a rigorous scientific appraisal. Previous studies of selenium resource assessment were usually carried out using a single-index evaluation. A multi-index evaluation method (the analytic hierarchy process) was used in this study to establish a comprehensive assessment system based on consideration of selenium content, soil nutrients, and soil environmental quality. The criteria for the comprehensive assessment system were classified by summing critical values from the standards with weights, and a Geographical Information System was used to display the regional distribution of the assessment results. Boshan, a representative region for developing selenium-rich agriculture, was taken as a case area and classified into Zones I-V, which suggest priority areas for developing selenium-rich agriculture. Most parts of the north and central regions of Boshan were relatively suitable for the development of selenium-rich agriculture. Soils in the southern part were contaminated by Cd, PAHs, HCHs and DDTs, in which farming was prohibited. This study is expected to provide a basis for developing selenium-rich agriculture and an example for comprehensive evaluation of relevant resources in a region. Copyright © 2017 Elsevier B.V. All rights reserved.
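Because the analytic hierarchy process is only named above, the following sketch shows the standard AHP step of deriving criterion weights from a pairwise comparison matrix via its principal eigenvector, with the usual consistency check; the comparison values for selenium content, soil nutrients, and soil environmental quality are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty scale) for three criteria:
# selenium content, soil nutrients, soil environmental quality.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # criterion weights from the principal eigenvector

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)    # consistency index
RI = 0.58                               # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " consistency ratio:", round(CI / RI, 3))
```

A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.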
NASA Astrophysics Data System (ADS)
Nguyen, Thuong T.; Székely, Eszter; Imbalzano, Giulio; Behler, Jörg; Csányi, Gábor; Ceriotti, Michele; Götz, Andreas W.; Paesani, Francesco
2018-06-01
The accurate representation of multidimensional potential energy surfaces is a necessary requirement for realistic computer simulations of molecular systems. The continued increase in computer power accompanied by advances in correlated electronic structure methods nowadays enables routine calculations of accurate interaction energies for small systems, which can then be used as references for the development of analytical potential energy functions (PEFs) rigorously derived from many-body (MB) expansions. Building on the accuracy of the MB-pol many-body PEF, we investigate here the performance of permutationally invariant polynomials (PIPs), neural networks, and Gaussian approximation potentials (GAPs) in representing water two-body and three-body interaction energies, denoting the resulting potentials PIP-MB-pol, Behler-Parrinello neural network-MB-pol, and GAP-MB-pol, respectively. Our analysis shows that all three analytical representations exhibit similar levels of accuracy in reproducing both two-body and three-body reference data as well as interaction energies of small water clusters obtained from calculations carried out at the coupled cluster level of theory, the current gold standard for chemical accuracy. These results demonstrate the synergy between interatomic potentials formulated in terms of a many-body expansion, such as MB-pol, that are physically sound and transferable, and machine-learning techniques that provide a flexible framework to approximate the short-range interaction energy terms.
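For orientation, the many-body expansion referred to above decomposes the total energy of a cluster of N monomers into a hierarchy of n-body contributions; in generic form (not specific to any one potential's parametrization):

$$
E(1,\dots,N) \;=\; \sum_{i} E^{(1)}_{i} \;+\; \sum_{i<j} E^{(2)}_{ij} \;+\; \sum_{i<j<k} E^{(3)}_{ijk} \;+\; \cdots
$$

The two-body and three-body terms E^{(2)} and E^{(3)} are the quantities that the fitted PIP, neural-network, and GAP representations discussed above aim to reproduce.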
Garbarino, John R.; Hoffman, Gerald L.
1999-01-01
A hydrochloric acid in-bottle digestion procedure is used to partially digest whole-water samples prior to determining recoverable elements by various analytical methods. The use of hydrochloric acid is problematic for some methods of analysis because of spectral interference. The in-bottle digestion procedure has been modified to eliminate such interference by using nitric acid instead of hydrochloric acid in the digestion. Implications of this modification are evaluated by comparing results for a series of synthetic whole-water samples. Results are also compared with those obtained by using the U.S. Environmental Protection Agency (1994) (USEPA) Method 200.2 total-recoverable digestion procedure. Percentage yields obtained using the nitric acid in-bottle digestion procedure are within 10 percent of the hydrochloric acid in-bottle yields for 25 of the 26 elements determined in two of the three synthetic whole-water samples tested. Differences in percentage yields for the third synthetic whole-water sample were greater than 10 percent for 16 of the 26 elements determined. The USEPA method was the most rigorous for solubilizing elements from particulate matter in all three synthetic whole-water samples. Nevertheless, the variability in the percentage yield using the USEPA digestion procedure was generally greater than that of the in-bottle digestion procedure, presumably because of the difficulty in controlling the digestion conditions accurately.
Numerical assessment of low-frequency dosimetry from sampled magnetic fields
NASA Astrophysics Data System (ADS)
Freschi, Fabio; Giaccone, Luca; Cirimele, Vincenzo; Canova, Aldo
2018-01-01
Low-frequency dosimetry is commonly assessed by evaluating the electric field in the human body using the scalar potential finite difference method. This method is effective only when the sources of the magnetic field are completely known and the magnetic vector potential can be analytically computed. The aim of the paper is to present a rigorous method to characterize the source term when only the magnetic flux density is available at discrete points, e.g. in case of field measurements. The method is based on the solution of the discrete magnetic curl equation. The system is restricted to the independent set of magnetic fluxes and circulations of magnetic vector potential using the topological information of the computational mesh. The solenoidality of the magnetic flux density is preserved using a divergence-free interpolator based on vector radial basis functions. The analysis of a benchmark problem shows that the complexity of the proposed algorithm is linearly dependent on the number of elements with a controllable accuracy. The method proposed in this paper also proves to be useful and effective when applied to a real world scenario, where the magnetic flux density is measured in proximity of a power transformer. An 8 million voxel body model is then used for the numerical dosimetric analysis. The complete assessment is completed in less than 5 min, which is more than acceptable for these problems.
Numerical assessment of low-frequency dosimetry from sampled magnetic fields.
Freschi, Fabio; Giaccone, Luca; Cirimele, Vincenzo; Canova, Aldo
2017-12-29
Low-frequency dosimetry is commonly assessed by evaluating the electric field in the human body using the scalar potential finite difference method. This method is effective only when the sources of the magnetic field are completely known and the magnetic vector potential can be analytically computed. The aim of the paper is to present a rigorous method to characterize the source term when only the magnetic flux density is available at discrete points, e.g. in case of field measurements. The method is based on the solution of the discrete magnetic curl equation. The system is restricted to the independent set of magnetic fluxes and circulations of magnetic vector potential using the topological information of the computational mesh. The solenoidality of the magnetic flux density is preserved using a divergence-free interpolator based on vector radial basis functions. The analysis of a benchmark problem shows that the complexity of the proposed algorithm is linearly dependent on the number of elements with a controllable accuracy. The method proposed in this paper also proves to be useful and effective when applied to a real world scenario, where the magnetic flux density is measured in proximity of a power transformer. An 8 million voxel body model is then used for the numerical dosimetric analysis. The complete assessment is completed in less than 5 min, which is more than acceptable for these problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laubach, S.E.; Marrett, R.; Rossen, W.
The research for this project provides new technology to understand and successfully characterize, predict, and simulate reservoir-scale fractures. Such fractures have worldwide importance because of their influence on successful extraction of resources. The scope of this project includes creation and testing of new methods to measure, interpret, and simulate reservoir fractures that overcome the challenge of inadequate sampling. The key to these methods is the use of microstructures as guides to the attributes of the large fractures that control reservoir behavior. One accomplishment of the project research is a demonstration that these microstructures can be reliably and inexpensively sampled. Specific goals of this project were to: create and test new methods of measuring attributes of reservoir-scale fractures, particularly as fluid conduits, and test the methods on samples from reservoirs; extrapolate structural attributes to the reservoir scale through rigorous mathematical techniques and help build accurate and useful 3-D models of the interwell region; and design new ways to incorporate geological and geophysical information into reservoir simulation and verify the accuracy by comparison with production data. New analytical methods developed in the project are leading to a more realistic characterization of fractured reservoir rocks. Testing diagnostic and predictive approaches was an integral part of the research, and several tests were successfully completed.
Broekhuis, Femke; Gopalaswamy, Arjun M.
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614
Broekhuis, Femke; Gopalaswamy, Arjun M
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
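To make the path-sampling idea concrete, the sketch below estimates a log marginal likelihood by thermodynamic integration for a toy conjugate Gaussian model, where the power posterior at each temperature can be sampled directly rather than by MCMC; the model, data, and temperature schedule are illustrative assumptions, and the estimate is compared against the closed-form value available for this special case.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy conjugate model: y_i ~ N(theta, sigma^2) with sigma known, prior theta ~ N(0, tau^2).
sigma, tau = 1.0, 2.0
y = rng.normal(1.5, sigma, size=20)
n, ybar = y.size, y.mean()

def log_likelihood(theta):
    """Vectorized log p(y | theta) for an array of theta samples."""
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - np.sum((y[None, :] - theta[:, None])**2, axis=1) / (2 * sigma**2))

# Thermodynamic integration: log Z = integral over t in [0, 1] of E_{p_t}[log p(y | theta)],
# where p_t(theta) is proportional to p(y | theta)^t p(theta) (the power posterior).
temps = np.linspace(0.0, 1.0, 21) ** 3            # schedule concentrated near t = 0
means = []
for t in temps:
    prec = t * n / sigma**2 + 1.0 / tau**2        # power-posterior precision (conjugate case)
    mu = (t * n * ybar / sigma**2) / prec         # power-posterior mean
    theta = rng.normal(mu, 1.0 / np.sqrt(prec), size=5000)  # direct sampling at this temperature
    means.append(log_likelihood(theta).mean())
means = np.array(means)
logZ_ti = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(temps))  # trapezoid over temperatures

# Exact log marginal likelihood for this conjugate model, for comparison.
S = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
_, logdet = np.linalg.slogdet(S)
logZ_exact = -0.5 * n * np.log(2 * np.pi) - 0.5 * logdet - 0.5 * y @ np.linalg.solve(S, y)

print(f"thermodynamic integration: {logZ_ti:.3f}   exact: {logZ_exact:.3f}")
```

In realistic environmental models the power posteriors are not available in closed form, so each temperature is sampled by MCMC as described above; the integration step over the temperature schedule is unchanged.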
A new approach to analytic, non-perturbative and gauge-invariant QCD
NASA Astrophysics Data System (ADS)
Fried, H. M.; Grandou, T.; Sheu, Y.-M.
2012-11-01
Following a previous calculation of quark scattering in eikonal approximation, this paper presents a new, analytic and rigorous approach to the calculation of QCD phenomena. In this formulation a basic distinction between the conventional "idealistic" description of QCD and a more "realistic" description is brought into focus by a non-perturbative, gauge-invariant evaluation of the Schwinger solution for the QCD generating functional in terms of the exact Fradkin representations of Green's functional G(x,y|A) and the vacuum functional L[A]. Because quarks exist asymptotically only in bound states, their transverse coordinates can never be measured with arbitrary precision; the non-perturbative neglect of this statement leads to obstructions that are easily corrected by invoking in the basic Lagrangian a probability amplitude which describes such transverse imprecision. The second result of this non-perturbative analysis is the appearance of a new and simplifying output called "Effective Locality", in which the interactions between quarks by the exchange of a "gluon bundle"-which "bundle" contains an infinite number of gluons, including cubic and quartic gluon interactions-display an exact locality property that reduces the several functional integrals of the formulation down to a set of ordinary integrals. It should be emphasized that "non-perturbative" here refers to the effective summation of all gluons between a pair of quark lines-which may be the same quark line, as in a self-energy graph-but does not (yet) include a summation over all closed-quark loops which are tied by gluon-bundle exchange to the rest of the "Bundle Diagram". As an example of the power of these methods we offer as a first analytic calculation the quark-antiquark binding potential of a pion, and the corresponding three-quark binding potential of a nucleon, obtained in a simple way from relevant eikonal scattering approximations. A second calculation, analytic, non-perturbative and gauge-invariant, of a nucleon-nucleon binding potential to form a model deuteron, will appear separately.
NASA Astrophysics Data System (ADS)
Franz, Heather B.; Trainer, Melissa G.; Wong, Michael H.; Manning, Heidi L. K.; Stern, Jennifer C.; Mahaffy, Paul R.; Atreya, Sushil K.; Benna, Mehdi; Conrad, Pamela G.; Harpold, Dan N.; Leshin, Laurie A.; Malespin, Charles A.; McKay, Christopher P.; Nolan, J. Thomas; Raaen, Eric
2014-06-01
The Sample Analysis at Mars (SAM) instrument suite is the largest scientific payload on the Mars Science Laboratory (MSL) Curiosity rover, which landed in Mars' Gale Crater in August 2012. As a miniature geochemical laboratory, SAM is well-equipped to address multiple aspects of MSL's primary science goal, characterizing the potential past or present habitability of Gale Crater. Atmospheric measurements support this goal through compositional investigations relevant to martian climate evolution. SAM instruments include a quadrupole mass spectrometer, a tunable laser spectrometer, and a gas chromatograph that are used to analyze martian atmospheric gases as well as volatiles released by pyrolysis of solid surface materials (Mahaffy et al., 2012). This report presents analytical methods for retrieving the chemical and isotopic composition of Mars' atmosphere from measurements obtained with SAM's quadrupole mass spectrometer. It provides empirical calibration constants for computing volume mixing ratios of the most abundant atmospheric species and analytical functions to correct for instrument artifacts and to characterize measurement uncertainties. Finally, we discuss differences in volume mixing ratios of the martian atmosphere as determined by SAM (Mahaffy et al., 2013) and Viking (Owen et al., 1977; Oyama and Berdahl, 1977) from an analytical perspective. Although the focus of this paper is atmospheric observations, much of the material concerning corrections for instrumental effects also applies to reduction of data acquired with SAM from analysis of solid samples. The Sample Analysis at Mars (SAM) instrument measures the composition of the martian atmosphere. Rigorous calibration of SAM's mass spectrometer was performed with relevant gas mixtures. Calibration included derivation of a new model to correct for electron multiplier effects. Volume mixing ratios for Ar and N2 obtained with SAM differ from those obtained with Viking. Differences between SAM and Viking volume mixing ratios are under investigation.
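As a rough illustration of how calibration constants enter such retrievals, the sketch below normalizes raw peak signals by assumed relative-sensitivity factors to obtain volume mixing ratios; the species, signal values, and constants are invented placeholders rather than SAM calibration data, and the normalization scheme is a generic mass-spectrometry approach rather than the paper's specific functions.

```python
# Illustrative normalization of mass-spectrometer peak signals by assumed
# relative-sensitivity (calibration) constants to obtain volume mixing ratios.
signals = {"CO2": 1.000, "N2": 0.030, "Ar": 0.025, "O2": 0.002}      # arbitrary counts
rel_sensitivity = {"CO2": 1.00, "N2": 1.10, "Ar": 1.20, "O2": 0.95}  # hypothetical constants

corrected = {sp: signals[sp] / rel_sensitivity[sp] for sp in signals}
total = sum(corrected.values())
vmr = {sp: value / total for sp, value in corrected.items()}

for sp, x in sorted(vmr.items(), key=lambda kv: -kv[1]):
    print(f"{sp:>3}: {x:.4f}")
```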
Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam
2017-09-01
Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
Health Monitoring of a Rotating Disk Using a Combined Analytical-Experimental Approach
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Woike, Mark R.; Lekki, John D.; Baaklini, George Y.
2009-01-01
Rotating disks undergo rigorous mechanical loading conditions that make them subject to a variety of failure mechanisms leading to structural deformities and cracking. During operation, periodic loading fluctuations and other related factors cause fractures and hidden internal cracks that can only be detected via noninvasive types of health monitoring and/or nondestructive evaluation. These evaluations go further to inspect material discontinuities and other irregularities that have grown to become critical defects that can lead to failure. Hence, the objective of this work is to conduct a collective analytical and experimental study to present a well-rounded structural assessment of a rotating disk by means of a health monitoring approach and to appraise the capabilities of an in-house rotor spin system. The analyses utilized the finite element method to analyze the disk with and without an induced crack at different loading levels, with rotational speeds ranging from 3000 to 10,000 rpm. A parallel experiment was conducted to spin the disk at the desired speeds in an attempt to correlate the experimental findings with the analytical results. The testing involved conducting spin experiments which covered the rotor in both damaged and undamaged (i.e., notched and unnotched) states. Damaged disks had artificially induced through-thickness flaws in the web region ranging from 2.54 to 5.08 cm (1 to 2 in.) in length. This study aims to identify defects greater than 1.27 cm (0.5 in.) by applying available means of structural health monitoring and nondestructive evaluation, and to document the failure mechanisms experienced by the rotor system under typical turbine engine operating conditions.
Goldie, Sue
2006-11-01
Cervical cancer remains a leading cause of cancer death among women living in low-resource settings. In the last 3 decades, cytologic screening has - in theory - been available, and yet more than 6 million women have died of this preventable disease. The necessary resources, infrastructure, and technological expertise, together with the need for repeated screenings at regular intervals, make cytologic screening difficult to implement in poor countries. As noncytologic approaches for the detection of HPV, simple visual screening methods for anogenital lesions caused by HPV, and the availability of an HPV-16/18 vaccine will enhance the linkage between screening and treatment, multiple factors will need to be considered when designing new, or modifying existing, prevention strategies. Country-specific decisions regarding the best strategy for cervical cancer control will need to rely on data from many sources, take into account complex epidemiologic, economic, social, political, and cultural factors, and be made despite uncertainty and incomplete information. A rigorous decision-analytic approach using computer-based modeling methods enables linkage of the knowledge gained from empirical studies to real-world situations. This chapter provides an introduction to these methods, reviews lessons learned from cost-effectiveness analyses of cervical cancer screening in developed and developing countries, and emphasizes important qualitative themes to consider in designing cervical cancer prevention policies.
Phase transitions in community detection: A solvable toy model
NASA Astrophysics Data System (ADS)
Ver Steeg, Greg; Moore, Cristopher; Galstyan, Aram; Allahverdyan, Armen
2014-05-01
Recently, it was shown that there is a phase transition in the community detection problem. This transition was first computed using the cavity method, and has been proved rigorously in the case of q = 2 groups. However, analytic calculations using the cavity method are challenging since they require us to understand probability distributions of messages. We study analogous transitions in the so-called “zero-temperature inference” model, where this distribution is supported only on the most likely messages. Furthermore, whenever several messages are equally likely, we break the tie by choosing among them with equal probability, corresponding to an infinitesimal random external field. While the resulting analysis overestimates the thresholds, it reproduces some of the qualitative features of the system. It predicts a first-order detectability transition whenever q > 2 (as opposed to q > 4 according to the finite-temperature cavity method). It also has a regime analogous to the “hard but detectable” phase, where the community structure can be recovered, but only when the initial messages are sufficiently accurate. Finally, we study a semisupervised setting where we are given the correct labels for a fraction ρ of the nodes. For q > 2, we find a regime where the accuracy jumps discontinuously at a critical value of ρ.
Low velocity impact analysis of composite laminated plates
NASA Astrophysics Data System (ADS)
Zheng, Daihua
2007-12-01
In the past few decades polymer composites have been utilized more in structures where high strength and light weight are major concerns, e.g., aircraft, high-speed boats, and sporting goods. It is well known that they are susceptible to damage resulting from lateral impact by foreign objects, such as dropped tools, hail, and debris thrown up from the runway. The impact response of the structures depends not only on the material properties but also on the dynamic behavior of the impacted structure. Although commercial software is capable of analyzing such impact processes, it often requires extensive expertise and rigorous training for design and analysis. Analytical models are useful as they allow parametric studies and provide a foundation for validating the numerical results from large-scale commercial software. Therefore, it is necessary to develop analytical or semi-analytical models to better understand the behavior of composite structures under impact and the associated failure process. In this study, several analytical models are proposed in order to analyze the impact response of composite laminated plates. Based on Meyer's power law, a semi-analytical model is obtained for the small-mass impact response of infinite composite laminates by the method of asymptotic expansion. The original nonlinear second-order ordinary differential equation is transformed into two linear ordinary differential equations. This is achieved by neglecting high-order terms in the asymptotic expansion. As a result, the semi-analytical solution of the overall impact response can be applied to contact laws with varying coefficients. Then an analytical model accounting for permanent deformation based on an elasto-plastic contact law is proposed to obtain closed-form solutions of the wave-controlled impact responses of composite laminates. The analytical model is also used to predict the threshold velocity for delamination onset by combining it with an existing quasi-static delamination criterion. The predictions are compared with experimental data and explicit finite element LS-DYNA simulations. The comparisons show reasonable agreement. Furthermore, an analytical model is developed to evaluate the combined effects of prestresses and permanent deformation based on the linearized elasto-plastic contact law and the Laplace transform technique. It is demonstrated that prestresses do not have noticeable effects on the time history of contact force and strains, but they have significant consequences for the plate central displacement. For an impacted composite laminate with prestresses, the contact force increases with increasing impactor mass, laminate thickness, and interlaminar shear strength. The combined analytical and numerical investigations provide validated models for elastic and elasto-plastic impact analysis of composite structures and shed light on the design of impact-resistant composite systems.
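For reference, the contact laws named above are commonly written in the following generic forms; the symbols and coefficients are illustrative, not the dissertation's notation.

```latex
% Generic contact-law forms: Meyer-type power law relating contact force F to
% indentation alpha, and a linearized elasto-plastic law beyond the yield
% indentation alpha_y. All coefficients are placeholders.
\begin{align}
  F &= k\,\alpha^{n}, \qquad n = \tfrac{3}{2}\ \text{(Hertzian case)},\\
  F &\approx F_{y} + k_{p}\,\bigl(\alpha - \alpha_{y}\bigr), \qquad \alpha > \alpha_{y}.
\end{align}
```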
Misimi, E; Erikson, U; Digre, H; Skavhaug, A; Mathiassen, J R
2008-03-01
The present study describes the possibilities for using computer vision-based methods for the detection and monitoring of transient 2D and 3D changes in the geometry of a given product. The rigor contractions of unstressed and stressed fillets of Atlantic salmon (Salmo salar) and Atlantic cod (Gadus morhua) were used as a model system. Gradual changes in fillet shape and size (area, length, width, and roundness) were recorded for 7 and 3 d, respectively. Also, changes in fillet area and height (cross-section profiles) were tracked using a laser beam and a 3D digital camera. Another goal was to compare the rigor developments of the 2 species of farmed fish and to determine whether perimortem stress affected the appearance of the fillets. Some significant changes in fillet size and shape were found (length, width, area, roundness, height) between unstressed and stressed fish during the course of rigor mortis as well as after ice storage (postrigor). However, the observed irreversible stress-related changes were small and would hardly mean anything for postrigor fish processors or consumers. The cod were less stressed (as defined by muscle biochemistry) than the salmon after the 2 species had been subjected to similar stress bouts. Consequently, the difference between the rigor courses of unstressed and stressed fish was more extreme in the case of salmon. However, the maximal whole fish rigor strength was judged to be about the same for both species. Moreover, the reductions in fillet area and length, as well as the increases in width, were basically of similar magnitude for both species. In fact, the increases in fillet roundness and cross-section height were larger for the cod. We conclude that the computer vision method can be used effectively for automated monitoring of changes in 2D and 3D shape and size of fish fillets during rigor mortis and ice storage. In addition, it can be used for grading of fillets according to uniformity in size and shape, as well as measurement of fillet yield in terms of thickness. The methods are accurate, rapid, nondestructive, and contact-free and can therefore be regarded as suitable for industrial purposes.
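A brief sketch of how 2D descriptors of the kind mentioned above (area, length, width, roundness) might be computed from a segmented fillet image; the OpenCV pipeline, threshold, and file name are assumptions for illustration and are not the authors' vision system.

```python
import cv2
import numpy as np

# Hypothetical pipeline: shape descriptors of a fillet from a binary segmentation
# mask (OpenCV 4 API assumed; the file name is a placeholder).
mask = cv2.imread("fillet_mask.png", cv2.IMREAD_GRAYSCALE)
assert mask is not None, "segmentation mask not found"
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
fillet = max(contours, key=cv2.contourArea)                  # largest blob = fillet

area = cv2.contourArea(fillet)
perimeter = cv2.arcLength(fillet, True)
x, y, w, h = cv2.boundingRect(fillet)
length, width = max(w, h), min(w, h)                         # longest box side taken as length
roundness = 4.0 * np.pi * area / perimeter ** 2              # 1.0 for a perfect circle

print(f"area={area:.0f} px^2  length={length} px  width={width} px  roundness={roundness:.3f}")
```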
Methods for Evaluating Natural Experiments in Obesity: A Systematic Review.
Bennett, Wendy L; Wilson, Renee F; Zhang, Allen; Tseng, Eva; Knapp, Emily A; Kharrazi, Hadi; Stuart, Elizabeth A; Shogbesan, Oluwaseun; Bass, Eric B; Cheskin, Lawrence J
2018-06-05
Given the obesity pandemic, rigorous methodological approaches, including natural experiments, are needed. To identify studies that report effects of programs, policies, or built environment changes on obesity prevention and control and to describe their methods. PubMed, CINAHL, PsycINFO, and EconLit (January 2000 to August 2017). Natural experiments and experimental studies evaluating a program, policy, or built environment change in U.S. or non-U.S. populations by using measures of obesity or obesity-related health behaviors. 2 reviewers serially extracted data on study design, population characteristics, data sources and linkages, measures, and analytic methods and independently evaluated risk of bias. 294 studies (188 U.S., 106 non-U.S.) were identified, including 156 natural experiments (53%), 118 experimental studies (40%), and 20 (7%) with unclear study design. Studies used 106 (71 U.S., 35 non-U.S.) data systems; 37% of the U.S. data systems were linked to another data source. For outcomes, 112 studies reported childhood weight and 32 adult weight; 152 had physical activity and 148 had dietary measures. For analysis, natural experiments most commonly used cross-sectional comparisons of exposed and unexposed groups (n = 55 [35%]). Most natural experiments had a high risk of bias, and 63% had weak handling of withdrawals and dropouts. Limitations of the review included outcomes restricted to obesity measures and health behaviors, inconsistent or unclear descriptions of natural experiment designs, and imperfect methods for assessing risk of bias in natural experiments. Many methodologically diverse natural experiments and experimental studies were identified that reported effects of U.S. and non-U.S. programs, policies, or built environment changes on obesity prevention and control. The findings reinforce the need for methodological and analytic advances that would strengthen evaluations of obesity prevention and control initiatives. National Institutes of Health, Office of Disease Prevention, and Agency for Healthcare Research and Quality. (PROSPERO: CRD42017055750).
Della Pelle, Flavio; González, María Cristina; Sergi, Manuel; Del Carlo, Michele; Compagnone, Dario; Escarpa, Alberto
2015-07-07
In this work, a rapid and simple gold nanoparticle (AuNP)-based colorimetric assay is combined with a new type of AuNP synthesis in organic medium that requires no sample extraction. The extraction-free synthesis approach strategically uses dimethyl sulfoxide (DMSO), which acts as an organic solvent for simultaneous solubilization of the sample analytes and stabilization of the AuNPs. Moreover, DMSO works as a cryogenic protector, preventing solidification at the temperatures used to stop the synthesis. In addition, the sample's endogenous fatty acids are exploited as AuNP stabilizers, avoiding the use of common surfactant stabilizers, which, in an organic/aqueous medium, give rise to the formation of undesirable emulsions. This is controlled by adding an analyte-free fat sample (sample blank). The method was exhaustively applied for the determination of total polyphenols in two selected kinds of fat-rich liquid and solid samples with high antioxidant activity and economic impact: olive oil (n = 28) and chocolate (n = 16) samples. The absorbance of the fatty samples is easily followed via the localized surface plasmon resonance (LSPR) absorption band at 540 nm, and quantitation is referred to gallic acid equivalents. A rigorous evaluation of the method was performed by comparison with the well and traditionally established Folin-Ciocalteu (FC) method, obtaining an excellent correlation for olive oil samples (R = 0.990, n = 28) and for chocolate samples (R = 0.905, n = 16). Additionally, the proposed approach was found to be selective (vs. other endogenous sample tocopherols and pigments), fast (15-20 min), cheap, and simple (it does not require expensive/complex equipment), requiring only a very small amount of sample (30 μL) and significantly lower solvent consumption (250 μL in a 500 μL total reaction volume) compared to classical methods.
Rigorous coupled wave analysis of acousto-optics with relativistic considerations.
Xia, Guoqiang; Zheng, Weijian; Lei, Zhenggang; Zhang, Ruolan
2015-09-01
A relativistic analysis of acousto-optics is presented, and a rigorous coupled wave analysis is generalized for the diffraction of the acousto-optical effect. An acoustic wave generates a grating with temporally and spatially modulated permittivity, hindering direct application of the rigorous coupled wave analysis to the acousto-optical effect. In a reference frame which moves with the acoustic wave, the grating is static, the medium moves, and the coupled wave equations for the static grating may be derived. Floquet's theorem is then applied to cast these equations into an eigenproblem. Using a Lorentz transformation, the electromagnetic fields in the grating region are transformed to the lab frame where the medium is at rest, and relativistic Doppler frequency shifts are introduced into the various diffraction orders. In the lab frame, the boundary conditions are considered and the diffraction efficiencies of the various orders are determined. This method is rigorous and general, and the plane waves in the resulting expansion satisfy the dispersion relation of the medium and are propagation modes. Properties of the various Bragg diffractions are results, rather than preconditions, of this method. Simulations of an acousto-optical tunable filter made of paratellurite (TeO2) are given as examples.
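For orientation, the frequency bookkeeping recovered by such an analysis reduces, in the non-relativistic limit, to the familiar acousto-optic relation below; this is the standard textbook result rather than an expression quoted from the paper.

```latex
% The m-th diffracted order is frequency shifted by m multiples of the acoustic
% frequency Omega (sign set by the relative propagation directions).
\[
  \omega_{m} \;=\; \omega_{0} + m\,\Omega , \qquad m = 0,\ \pm 1,\ \pm 2,\ \dots
\]
```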
Nonlinear asymmetric tearing mode evolution in cylindrical geometry
Teng, Qian; Ferraro, N.; Gates, David A.; ...
2016-10-27
The growth of a tearing mode is described by reduced MHD equations. For a cylindrical equilibrium, tearing mode growth is governed by the modified Rutherford equation, i.e., the nonlinear Δ'(w). For a low-beta plasma without external heating, Δ'(w) can be approximately described by two terms, Δ'_ql(w) and Δ'_A(w). In this work, we present a simple method to calculate the quasilinear stability index Δ'_ql rigorously, for poloidal mode number m ≥ 2. Δ'_ql is derived by solving the outer equation through the Frobenius method. Δ'_ql is composed of four terms proportional to: a constant Δ'_0, w, w ln w, and w². Δ'_A is proportional to the asymmetry of the island, which is roughly proportional to w. The sum of Δ'_ql and Δ'_A is consistent with the more accurate expression calculated perturbatively. The reduced MHD equations are also solved numerically with the 3D MHD code M3D-C1. The analytical expression of the perturbed helical flux and the saturated island width agree with the simulation results. Lastly, it is also confirmed by the simulation that Δ'_A has to be considered in calculating island saturation.
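Schematically, the island evolution described above can be written as follows; the coefficients are generic placeholders rather than the paper's derived expressions.

```latex
% Schematic island-width evolution; a_1, a_2, a_3 and the proportionality in
% Delta'_A are generic placeholders.
\begin{align}
  \frac{dw}{dt} &\propto \Delta'(w) \;\simeq\; \Delta'_{\mathrm{ql}}(w) + \Delta'_{A}(w),\\
  \Delta'_{\mathrm{ql}}(w) &= \Delta'_{0} + a_{1}\,w + a_{2}\,w\ln w + a_{3}\,w^{2},
  \qquad \Delta'_{A}(w) \propto w .
\end{align}
```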
Some Properties of Generalized Connections in Quantum Gravity
NASA Astrophysics Data System (ADS)
Velhinho, J. M.
2002-12-01
Theories of connections play an important role in fundamental interactions, including Yang-Mills theories and gravity in the Ashtekar formulation. Typically in such cases, the classical configuration space $\mathcal{A}/\mathcal{G}$ of connections modulo gauge transformations is an infinite-dimensional non-linear space of great complexity. Having in mind a rigorous quantization procedure, methods of functional calculus in an extension of $\mathcal{A}/\mathcal{G}$ have been developed. For a compact gauge group G, the compact space $\overline{\mathcal{A}/\mathcal{G}}$ ($\supset \mathcal{A}/\mathcal{G}$) introduced by Ashtekar and Isham using C*-algebraic methods is a natural candidate to replace $\mathcal{A}/\mathcal{G}$ in the quantum context [1], allowing the construction of diffeomorphism-invariant measures [2-4]. Equally important is the space of generalized connections $\bar{\mathcal{A}}$ introduced in a similar way by Baez [5]. $\bar{\mathcal{A}}$ is particularly useful for the definition of vector fields in $\overline{\mathcal{A}/\mathcal{G}}$, fundamental in the construction of quantum observables [6]. These works crucially depend on the use of (generalized) Wilson variables associated to certain types of curves. We will consider the case of piecewise analytic curves [1,2,5], although most of the arguments apply equally to the piecewise smooth case [7,8]...
Thermal conduction in particle packs via finite elements
NASA Astrophysics Data System (ADS)
Lechman, Jeremy B.; Yarrington, Cole; Erikson, William; Noble, David R.
2013-06-01
Conductive transport in heterogeneous materials composed of discrete particles is a fundamental problem for a number of applications. While analytical results and rigorous bounds on effective conductivity in mono-sized particle dispersions are well established in the literature, the methods used to arrive at these results often fail when the average size of particle clusters becomes large (i.e., near the percolation transition where particle contact networks dominate the bulk conductivity). Our aim is to develop general, efficient numerical methods that would allow us to explore this behavior and compare to a recent microstructural description of conduction in this regime. To this end, we present a finite element analysis approach to modeling heat transfer in granular media with the goal of predicting effective bulk thermal conductivities of particle-based heterogeneous composites. Our approach is verified against theoretical predictions for random isotropic dispersions of mono-disperse particles at various volume fractions up to close packing. Finally, we present results for the probability distribution of the effective conductivity in particle dispersions generated by Brownian dynamics, and suggest how this might be useful in developing stochastic models of effective properties based on the dynamical process involved in creating heterogeneous dispersions.
Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng
2015-01-01
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
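A compact sketch of the percentile-bootstrap test of an indirect effect a×b on simulated data; the sample size, effect sizes, and number of resamples are illustrative choices, not the article's simulation design (the article's own R syntax is in its online supplemental materials).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                              # small sample, in the 20-80 range discussed above
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)                    # mediator
y = 0.4 * m + rng.normal(size=n)                    # outcome

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                      # slope of M on X
    X = np.column_stack([np.ones(len(x)), x, m])    # intercept, X, M
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]     # partial slope of Y on M, controlling for X
    return a * b

boot = []
idx = np.arange(n)
for _ in range(5000):
    s = rng.choice(idx, size=n, replace=True)       # resample cases with replacement
    boot.append(indirect_effect(x[s], m[s], y[s]))

lo, hi = np.percentile(boot, [2.5, 97.5])           # percentile bootstrap confidence interval
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The indirect effect is declared significant when the percentile interval excludes zero, which is the decision rule whose small-sample behavior the study examines.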
Estimation of the time since death--reconsidering the re-establishment of rigor mortis.
Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter
2013-01-01
In forensic medicine, the data background for the phenomenon of re-establishment of rigor mortis after mechanical loosening is poorly defined; the phenomenon is used in establishing time since death in forensic casework and is thought to occur up to 8 h post-mortem. Nevertheless, the method is widely described in textbooks on forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased individuals at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. Therefore, the maximum time span for the re-establishment of rigor mortis appears to be 2.5-fold longer than previously thought. These findings have a major impact on the estimation of time since death in forensic casework.
Comment on atomic independent-particle models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doda, D.D.; Gravey, R.H.; Green, A.E.S.
1975-08-01
The Hartree-Fock-Slater (HFS) independent-particle model in the form developed by Herman and Skillman (HS) and the Green, Sellin, and Zachor (GSZ) analytic independent-particle model are being used for many types of applications of atomic theory to avoid cumbersome, albeit more rigorous, many-body calculations. The single-electron eigenvalues obtained with these models are examined, and it is found that the GSZ model is capable of yielding energy eigenvalues for valence electrons which are substantially closer to experimental values than are the results of HS-HFS calculations. With the aid of an analytic representation of the equivalent HS-HFS screening function, the difficulty with this model is identified as a weakness of the potential in the neighborhood of the valence shell. Accurate representations of valence states are important in most atomic applications of the independent-particle model.
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
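For reference, the Kesten-McKay law mentioned above gives the limiting adjacency spectral density of a random d-regular graph:

```latex
% Kesten-McKay density for the adjacency spectrum of a random d-regular graph,
% supported on |lambda| <= 2*sqrt(d-1).
\[
  \rho(\lambda) \;=\; \frac{d\,\sqrt{4(d-1)-\lambda^{2}}}{2\pi\,\bigl(d^{2}-\lambda^{2}\bigr)},
  \qquad |\lambda| \le 2\sqrt{d-1}.
\]
```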
Kellett, Stephen; Simmonds-Buckley, Mel; Totterdell, Peter
2017-08-18
The evidence base for treatment of hypersexuality disorder (HD) has few studies with appropriate methodological rigor. This study therefore conducted a single case experiment of cognitive analytic therapy (CAT) for HD using an A/B design with extended follow-up. Cruising, pornography usage, masturbation frequency and associated cognitions and emotions were measured daily in a 231-day time series. Following a three-week assessment baseline (A: 21 days), treatment was delivered via outpatient sessions (B: 147 days), with the follow-up period lasting 63 days. Results show that cruising and pornography usage extinguished. The total sexual outlet score no longer met caseness, and the primary nomothetic hypersexuality outcome measure met recovery criteria. Reduced pornography consumption was mediated by reduced obsessionality and greater interpersonal connectivity. The utility of the CAT model for intimacy problems shows promise. Directions for future HD outcome research are also provided.
Exact semiclassical expansions for one-dimensional quantum oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delabaere, E.; Dillinger, H.; Pham, F.
1997-12-01
A set of rules is given for dealing with WKB expansions in the one-dimensional analytic case, whereby such expansions are not considered as approximations but as exact encodings of wave functions, thus allowing for analytic continuation with respect to whichever parameters the potential function depends on, with an exact control of small exponential effects. These rules, which include also the case when there are double turning points, are illustrated on various examples, and applied to the study of bound state or resonance spectra. In the case of simple oscillators, it is thus shown that the Rayleigh–Schrödinger series is Borel resummable, yielding the exact energy levels. In the case of the symmetrical anharmonic oscillator, one gets a simple and rigorous justification of the Zinn-Justin quantization condition, and of its solution in terms of "multi-instanton expansions." © 1997 American Institute of Physics.
Moura, Lidia Mvr; Westover, M Brandon; Kwasnik, David; Cole, Andrew J; Hsu, John
2017-01-01
The elderly population faces an increasing number of cases of chronic neurological conditions, such as epilepsy and Alzheimer's disease. Because the elderly with epilepsy are commonly excluded from randomized controlled clinical trials, there are few rigorous studies to guide clinical practice. When the elderly are eligible for trials, they either rarely participate or frequently have poor adherence to therapy, thus limiting both generalizability and validity. In contrast, large observational data sets are increasingly available, but are susceptible to bias when using common analytic approaches. Recent developments in causal inference-analytic approaches also introduce the possibility of emulating randomized controlled trials to yield valid estimates. We provide a practical example of the application of the principles of causal inference to a large observational data set of patients with epilepsy. This review also provides a framework for comparative-effectiveness research in chronic neurological conditions.
The evolution of stable magnetic fields in stars: an analytical approach
NASA Astrophysics Data System (ADS)
Mestel, Leon; Moss, David
2010-07-01
The absence of a rigorous proof of the existence of dynamically stable, large-scale magnetic fields in radiative stars has been for many years a missing element in the fossil field theory for the magnetic Ap/Bp stars. Recent numerical simulations, by Braithwaite & Spruit and Braithwaite & Nordlund, have largely filled this gap, demonstrating convincingly that coherent global scale fields can survive for times of the order of the main-sequence lifetimes of A stars. These dynamically stable configurations take the form of magnetic tori, with linked poloidal and toroidal fields, that slowly rise towards the stellar surface. This paper studies a simple analytical model of such a torus, designed to elucidate the physical processes that govern its evolution. It is found that one-dimensional numerical calculations reproduce some key features of the numerical simulations, with radiative heat transfer, Archimedes' principle, Lorentz force and Ohmic decay all playing significant roles.
NASA Astrophysics Data System (ADS)
Piburn, J.; Stewart, R.; Myers, A.; Sorokine, A.; Axley, E.; Anderson, D.; Burdette, J.; Biddle, C.; Hohl, A.; Eberle, R.; Kaufman, J.; Morton, A.
2017-10-01
Spatiotemporal (ST) analytics applied to major data sources such as the World Bank and World Health Organization has shown tremendous value in shedding light on the evolution of cultural, health, economic, and geopolitical landscapes on a global level. WSTAMP engages this opportunity by situating analysts, data, and analytics together within a visually rich and computationally rigorous online analysis environment. Since introducing WSTAMP at the First International Workshop on Spatiotemporal Computing, several transformative advances have occurred. Collaboration with human computer interaction experts led to a complete interface redesign that deeply immerses the analyst within an ST context, significantly increases visual and textual content, provides navigational crosswalks for attribute discovery, substantially reduces mouse and keyboard actions, and supports user data uploads. Secondly, the database has been expanded to include over 16,000 attributes, 50 years of time, and 200+ nation states and redesigned to support non-annual, non-national, city, and interaction data. Finally, two new analytics are implemented for analyzing large portfolios of multi-attribute data and measuring the behavioral stability of regions along different dimensions. These advances required substantial new approaches in design, algorithmic innovations, and increased computational efficiency. We report on these advances and describe how others may freely access the tool.
NASA Astrophysics Data System (ADS)
Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea
2012-08-01
Following the idea of "cloaking by a surface" [A. Alù, Phys. Rev. B 80, 245115 (2009); P. Y. Chen and A. Alù, Phys. Rev. B 84, 205110 (2011)], we present a rigorous analytical model applicable to mantle cloaking of cylindrical objects using 1D and 2D sub-wavelength conformal frequency selective surface (FSS) elements. The model is based on Lorenz-Mie scattering theory which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. The FSS arrays considered in this work are composed of 1D horizontal and vertical metallic strips and 2D printed (patches, Jerusalem crosses, and cross dipoles) and slotted structures (meshes, slot-Jerusalem crosses, and slot-cross dipoles). It is shown that the analytical grid-impedance expressions derived for the planar arrays of sub-wavelength elements may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks. By properly tailoring the surface reactance of the cloak, the total scattering from the cylinder can be significantly reduced, thus rendering the object invisible over the range of frequencies of interest (i.e., at microwaves and far-infrared). The results obtained using our analytical model for mantle cloaks are validated against full-wave numerical simulations.
A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.
2017-12-01
A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-K distribution method, and can easily integrate with multiple scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component requires far fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to their counterparts computed with numerically rigorous methods.
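A toy sketch of the correlated-k idea referred to above, in which band transmissivity is computed from a short quadrature over absorption-coefficient bins rather than a line-by-line sum; the k values and weights are placeholders, not the model's parameterization.

```python
import numpy as np

# Band transmissivity T(u) ~ sum_i w_i * exp(-k_i * u): a short quadrature over
# binned absorption coefficients replaces the line-by-line integral.
k_vals = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])    # representative absorption coefficients
weights = np.array([0.40, 0.30, 0.20, 0.08, 0.02])  # quadrature weights (sum to 1)

def band_transmissivity(absorber_amount):
    return float(np.sum(weights * np.exp(-k_vals * absorber_amount)))

for u in (0.1, 1.0, 10.0):
    print(f"u = {u:5.1f}   T = {band_transmissivity(u):.4f}")
```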
A Randomized Study of How Physicians Interpret Research Funding Disclosures
Kesselheim, Aaron S.; Robertson, Christopher T.; Myers, Jessica A.; Rose, Susannah L.; Gillet, Victoria; Ross, Kathryn M.; Glynn, Robert J.; Joffe, Steven; Avorn, Jerry
2012-01-01
BACKGROUND The effects of clinical-trial funding on the interpretation of trial results are poorly understood. We examined how such support affects physicians’ reactions to trials with a high, medium, or low level of methodologic rigor. METHODS We presented 503 board-certified internists with abstracts that we designed describing clinical trials of three hypothetical drugs. The trials had high, medium, or low methodologic rigor, and each report included one of three support disclosures: funding from a pharmaceutical company, NIH funding, or none. For both factors studied (rigor and funding), one of the three possible variations was randomly selected for inclusion in the abstracts. Follow-up questions assessed the physicians’ impressions of the trials’ rigor, their confidence in the results, and their willingness to prescribe the drugs. RESULTS The 269 respondents (53.5% response rate) perceived the level of study rigor accurately. Physicians reported that they would be less willing to prescribe drugs tested in low-rigor trials than those tested in medium-rigor trials (odds ratio, 0.64; 95% confidence interval [CI], 0.46 to 0.89; P = 0.008) and would be more willing to prescribe drugs tested in high-rigor trials than those tested in medium-rigor trials (odds ratio, 3.07; 95% CI, 2.18 to 4.32; P<0.001). Disclosure of industry funding, as compared with no disclosure of funding, led physicians to downgrade the rigor of a trial (odds ratio, 0.63; 95% CI, 0.46 to 0.87; P = 0.006), their confidence in the results (odds ratio, 0.71; 95% CI, 0.51 to 0.98; P = 0.04), and their willingness to prescribe the hypothetical drugs (odds ratio, 0.68; 95% CI, 0.49 to 0.94; P = 0.02). Physicians were half as willing to prescribe drugs studied in industry-funded trials as they were to prescribe drugs studied in NIH-funded trials (odds ratio, 0.52; 95% CI, 0.37 to 0.71; P<0.001). These effects were consistent across all levels of methodologic rigor. CONCLUSIONS Physicians discriminate among trials of varying degrees of rigor, but industry sponsorship negatively influences their perception of methodologic quality and reduces their willingness to believe and act on trial findings, independently of the trial’s quality. These effects may influence the translation of clinical research into practice. PMID:22992075
How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.
Gray, Kurt
2017-09-01
Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests psychological tension between them: The more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
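A small sketch of the kind of central-moments screening described above, applied to a simulated heavy-tailed expression matrix; the data, dimensions, and thresholds are illustrative and unrelated to the cancer genomes analyzed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
expr = rng.standard_t(df=3, size=(1000, 40))     # 1000 "genes" x 40 "samples", heavy-tailed

skew = stats.skew(expr, axis=1)                  # third central moment (standardized)
kurt = stats.kurtosis(expr, axis=1)              # excess kurtosis; 0 for a normal distribution
pvals = np.array([stats.shapiro(row)[1] for row in expr])

print(f"median skewness: {np.median(skew):.2f}")
print(f"median excess kurtosis: {np.median(kurt):.2f}")
print(f"genes rejecting normality (Shapiro-Wilk, p < 0.05): {np.mean(pvals < 0.05):.0%}")
```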
Upper and lower bounds for the speed of pulled fronts with a cut-off
NASA Astrophysics Data System (ADS)
Benguria, R. D.; Depassier, M. C.; Loss, M.
2008-02-01
We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type a simple analytic upper bound is given. The lower bounds however depend on details of the reaction term. For a small cut-off parameter the two leading order terms in the asymptotic expansion of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide and permit a simple estimation of the speed of the front.
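For orientation, the Brunet-Derrida formula referred to above takes the following form for the classic FKPP normalization with pulled-front speed 2; this is the standard result for that case, not an expression copied from the paper.

```latex
% Brunet-Derrida small-cutoff correction for the FKPP normalization with
% pulled-front speed v* = 2 and cutoff parameter epsilon.
\[
  v(\varepsilon) \;\simeq\; 2 - \frac{\pi^{2}}{(\ln \varepsilon)^{2}},
  \qquad \varepsilon \to 0^{+}.
\]
```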
Measuring the costs and benefits of conservation programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Einhorn, M.A.
1985-07-25
A step-by-step analysis of the effects of utility-sponsored conservation promoting programs begins by identifying several factors which will reduce a program's effectiveness. The framework for measuring cost savings and designing a conservation program needs to consider the size of appliance subsidies, what form incentives should take, and how will customer behavior change as a result of incentives. Continual reevaluation is necessary to determine whether to change the size of rebates or whether to continue the program. Analytical tools for making these determinations are improving as conceptual breakthroughs in econometrics permit more rigorous analysis. 5 figures.
THE OPTICS OF REFRACTIVE SUBSTRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Michael D.; Narayan, Ramesh, E-mail: mjohnson@cfa.harvard.edu
2016-08-01
Newly recognized effects of refractive scattering in the ionized interstellar medium have broad implications for very long baseline interferometry (VLBI) at extreme angular resolutions. Building upon work by Blandford and Narayan, we present a simplified, geometrical optics framework, which enables rapid, semi-analytic estimates of refractive scattering effects. We show that these estimates exactly reproduce previous results based on a more rigorous statistical formulation. We then derive new expressions for the scattering-induced fluctuations of VLBI observables such as closure phase, and we demonstrate how to calculate the fluctuations for arbitrary quantities of interest using a Monte Carlo technique.
NASA Astrophysics Data System (ADS)
Popa, Alexandru
1998-08-01
Recently we demonstrated in a mathematical paper the following property: the energy which results from the Schrödinger equation can be rigorously calculated by line integrals of analytical functions if the Hamilton-Jacobi equation, written for the same system, is satisfied in the space of coordinates by a periodical trajectory. We now present an accurate analysis model of conservative discrete systems that is based on this property. The theory is checked for a large number of atomic systems. The experimental data, which are ionization energies, are taken from well-known books.
Analytical solutions with Generalized Impedance Boundary Conditions (GIBC)
NASA Technical Reports Server (NTRS)
Syed, H. H.; Volakis, John L.
1991-01-01
Rigorous uniform geometrical theory of diffraction (UTD) diffraction coefficients are presented for a coated convex cylinder simulated with generalized impedance boundary conditions. In particular, ray solutions are obtained which remain valid in the transition region and reduce uniformly to those in the deep lit and shadow regions. These involve new transition functions in place of the usual Fock-type integrals, characteristic of the impedance cylinder. A uniform asymptotic solution is also presented for observations in the close vicinity of the cylinder. The diffraction coefficients for the convex cylinder are obtained via a generalization of the corresponding ones for the circular cylinder.
The Independent Technical Analysis Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duberstein, Corey A.; Ham, Kenneth D.; Dauble, Dennis D.
2007-04-13
The Bonneville Power Administration (BPA) contracted with the Pacific Northwest National Laboratory (PNNL) to provide technical analytical support for system-wide fish passage information (BPA Project No. 2006-010-00). The goal of this project was to produce rigorous technical analysis products using independent analysts and anonymous peer reviewers. In the past, regional parties have interacted with a single entity, the Fish Passage Center, to access the data, analyses, and coordination related to fish passage. This project provided an independent technical source for non-routine fish passage analyses while allowing routine support functions to be performed by other well-qualified entities.
The path integral on the pseudosphere
NASA Astrophysics Data System (ADS)
Grosche, C.; Steiner, F.
1988-02-01
A rigorous path integral treatment for the d-dimensional pseudosphere Λ^(d-1), a Riemannian manifold of constant negative curvature, is presented. The path integral formulation is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined on midpoints. The time-dependent and energy-dependent Feynman kernels take different forms in the even- and odd-dimensional cases, respectively. The special case of the three-dimensional pseudosphere, which is analytically equivalent to the Poincaré upper half plane, the Poincaré disc, and the hyperbolic strip, is discussed in detail, including the energy spectrum and the normalised wave-functions.
Pham-Tuan, Hai; Kaskavelis, Lefteris; Daykin, Clare A; Janssen, Hans-Gerd
2003-06-15
"Metabonomics" has in the past decade demonstrated enormous potential in furthering the understanding of, for example, disease processes, toxicological mechanisms, and biomarker discovery. The same principles can also provide a systematic and comprehensive approach to the study of food ingredient impact on consumer health. However, "metabonomic" methodology requires the development of rapid, advanced analytical tools to comprehensively profile biofluid metabolites within consumers. Until now, NMR spectroscopy has been used for this purpose almost exclusively. Chromatographic techniques and in particular HPLC, have not been exploited accordingly. The main drawbacks of chromatography are the long analysis time, instabilities in the sample fingerprint and the rigorous sample preparation required. This contribution addresses these problems in the quest to develop generic methods for high-throughput profiling using HPLC. After a careful optimization process, stable fingerprints of biofluid samples can be obtained using standard HPLC equipment. A method using a short monolithic column and a rapid gradient with a high flow-rate has been developed that allowed rapid and detailed profiling of larger numbers of urine samples. The method can be easily translated into a slow, shallow-gradient high-resolution method for identification of interesting peaks by LC-MS/NMR. A similar approach has been applied for cell culture media samples. Due to the much higher protein content of such samples non-porous polymer-based small particle columns yielded the best results. The study clearly shows that HPLC can be used in metabonomic fingerprinting studies.
Design and analysis issues for economic analysis alongside clinical trials.
Marshall, Deborah A; Hux, Margaret
2009-07-01
Clinical trials can offer a valuable and efficient opportunity to collect the health resource use and outcomes data for economic evaluation. However, economic and clinical studies differ fundamentally in the question they seek to answer. The design and analysis of trial-based cost-effectiveness studies require special consideration, which are reviewed in this article. Traditional randomized controlled trials, using an experimental design with a controlled protocol, are designed to measure safety and efficacy for product registration. Cost-effectiveness analysis seeks to measure effectiveness in the context of routine clinical practice, and requires collection of health care resources to allow estimation of cost over an equal timeframe for each treatment alternative. In assessing suitability of a trial for economic data collection, the comparator treatment and other protocol factors need to reflect current clinical practice and the trial follow-up must be sufficiently long to capture important costs and effects. The broadest available population and a measure of effectiveness reflecting important benefits for patients are preferred for economic analyses. Special analytical issues include dealing with missing and censored cost data, assessing uncertainty of the incremental cost-effectiveness ratio, and accounting for the underlying heterogeneity in patient subgroups. Careful consideration also needs to be given to data from multinational studies since practice patterns can differ across countries. Although clinical trials can be an efficient opportunity to collect data for economic evaluation, careful consideration of the suitability of the study design, and appropriate analytical methods must be applied to obtain rigorous results.
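For reference, the incremental cost-effectiveness ratio discussed above is defined as the incremental cost per unit of incremental effect between a treatment and its comparator:

```latex
% Incremental cost-effectiveness ratio between treatment (1) and comparator (0).
\[
  \mathrm{ICER} \;=\; \frac{C_{1} - C_{0}}{E_{1} - E_{0}} \;=\; \frac{\Delta C}{\Delta E}.
\]
```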
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we do point out the potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first-order Favre mean variables and the Reynolds-averaged density variance, ⟨ρ²⟩. We show that the normalized density variance, ⟨ρ²⟩, reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, $\widetilde{c''^{2}}$, as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second-order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρv⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second-order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c²⟩, its Favre analog $\widetilde{c''^{2}}$, and various second moments including ⟨ρv⟩. For moment closure models that evolve ⟨ρv⟩ and not ⟨ρ²⟩, we provide a novel expression for ⟨ρ²⟩ in terms of a rational function of ⟨ρv⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived analytic results relating several other second- and third-order moments and see coupling between odd and even order moments, demonstrating a natural and inherent skewness of the mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in the progress variable formulation).
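For readers unfamiliar with the notation, the Favre (density-weighted) variables used above are defined in the standard way; the generic symbol a stands for any transported quantity.

```latex
% Favre (density-weighted) average, fluctuation, and variance of a quantity a.
\[
  \tilde{a} \;=\; \frac{\langle \rho\, a \rangle}{\langle \rho \rangle},
  \qquad a = \tilde{a} + a'' ,
  \qquad \widetilde{a''^{2}} \;=\; \frac{\langle \rho\, a''^{2} \rangle}{\langle \rho \rangle}.
\]
```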
Rigorous high-precision enclosures of fixed points and their invariant manifolds
NASA Astrophysics Data System (ADS)
Wittig, Alexander N.
The well-established concept of Taylor Models is introduced; these offer highly accurate C0 enclosures of functional dependencies by combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval data type are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals, and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
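The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned above are known as error-free transformations; the classic building block is Knuth's TwoSum, sketched here as a generic illustration (this is not the COSY INFINITY implementation).

```python
def two_sum(a: float, b: float):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bp = s - a                      # the part of b actually absorbed into s
    e = (a - (s - bp)) + (b - bp)   # exact rounding error of the addition
    return s, e

# Example: accumulate a sum together with its exact rounding error
values = [1.0, 1e-16, 1e-16, -1.0]
total, err = 0.0, 0.0
for v in values:
    total, e = two_sum(total, v)
    err += e
print(total, err, total + err)      # the compensated result recovers the tiny residue
```

Chained error-free transformations like this one are what allow a high-precision value to be represented as an unevaluated sum of ordinary doubles.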
Sonoelasticity to monitor mechanical changes during rigor and ageing.
Ayadi, A; Culioli, J; Abouelkaram, S
2007-06-01
We propose the use of sonoelasticity as a non-destructive method to monitor changes in the resistance of muscle fibres, unaffected by connective tissue. Vibrations were applied at low frequency to induce oscillations in soft tissues, and an ultrasound transducer was used to detect the motions. The experiments were carried out on the M. biceps femoris muscles of three beef cattle. In addition to the sonoelasticity measurements, the changes in meat during rigor and ageing were followed by measurements of both the mechanical resistance of myofibres and pH. The variations of mechanical resistance and pH were compared to those of the sonoelastic variables (velocity and attenuation) at two frequencies. Velocity and attenuation were each highly correlated with pH and with the stress at 20% deformation. We concluded that sonoelasticity is a non-destructive method that can be used to monitor mechanical changes in muscle fibres during rigor mortis and ageing.
Myers, Douglas J; Nyce, James M; Dekker, Sidney W A
2014-07-01
The concept of culture is now widely used by those who conduct research on safety and work-related injury outcomes. We argue that as the term has been applied by an increasingly diverse set of disciplines, its scope has broadened beyond how it was defined and intended for use by sociologists and anthropologists. As a result, this more inclusive concept has lost some of its precision and analytic power. We suggest that the utility of this "new" understanding of culture could be improved if researchers more clearly delineated the ideological - the socially constructed abstract systems of meaning, norms, beliefs and values (which we refer to as culture) - from concrete behaviors, social relations and other properties of workplaces (e.g., organizational structures) and of society itself. This may help researchers investigate how culture and social structures can affect safety and injury outcomes with increased analytic rigor. In addition, maintaining an analytical distinction between culture and other social factors can help intervention efforts better understand the target of the intervention and therefore may improve chances of both scientific and instrumental success. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ElNaggar, Mariam S; Barbier, Charlotte N; Van Berkel, Gary J
A coaxial geometry liquid microjunction surface sampling probe (LMJ-SSP) enables direct extraction of analytes from surfaces for subsequent analysis by techniques like mass spectrometry. Solution dynamics at the probe-to-sample surface interface in the LMJ-SSP has been suspected to influence sampling efficiency and dispersion but has not been rigorously investigated. The effect on flow dynamics and analyte transport to the mass spectrometer caused by coaxial retraction of the inner and outer capillaries from each other and the surface during sampling with a LMJ-SSP was investigated using computational fluid dynamics and experimentation. A transparent LMJ-SSP was constructed to provide the means for visual observation of the dynamics of the surface sampling process. Visual observation, computational fluid dynamics (CFD) analysis, and experimental results revealed that inner capillary axial retraction from the flush position relative to the outer capillary transitioned the probe from a continuous sampling and injection mode, through an intermediate regime, to a sample plug formation mode caused by eddy currents at the sampling end of the probe. The potential for analytical implementation of these newly discovered probe operational modes is discussed.
Line-source excitation of realistic conformal metasurface cloaks
NASA Astrophysics Data System (ADS)
Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea
2012-11-01
Following our recently introduced analytical tools to model and design conformal mantle cloaks based on metasurfaces [Padooru et al., J. Appl. Phys. 112, 034907 (2012)], we investigate their performance and physical properties when excited by an electric line source placed in their close proximity. We consider metasurfaces formed by 2-D arrays of slotted (meshes and Jerusalem cross slots) and printed (patches and Jerusalem crosses) sub-wavelength elements. The electromagnetic scattering analysis is carried out using a rigorous analytical model, which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. It is shown that the homogenized grid-impedance expressions, originally derived for planar arrays of sub-wavelength elements under plane-wave excitation, may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks illuminated by near-field sources. Our closed-form analytical results are in good agreement with full-wave numerical simulations, up to sub-wavelength distances from the metasurface, confirming that mantle cloaks may be very effective in suppressing the scattering of moderately sized objects, independent of the type of excitation and point of observation. We also discuss the dual functionality of these metasurfaces to boost radiation efficiency and directivity from confined near-field sources.
NASA Astrophysics Data System (ADS)
Crockett, Denise King
The purpose of the study is to show how science is defined and technology is selected in an Amish Mennonite (fundamentalist Christian) community and its school. Additionally, by examining this community, information is collected on how a fundamentalist school's treatment of and experience with science and technology compare to what has occurred over time in public schools in the United States. An ethnographic approach was used to recreate the shared beliefs, practices, artifacts, folk knowledge, and behaviors of this community. The ethnographic methodology allowed analytical descriptions and reconstructions of whole cultural scenes and groups of the community. Analysis of data followed an analytic induction method. The data collected included participant observation, documentation, photographs, formal interviews, informal interviews, audiotaping, journal entries, and artifacts. Findings indicate that science is wholly subsumed by Amish Mennonite religion. Using the transmission model, the Amish Mennonites teach science as a list of facts from the King James version of the Holy Bible. This method of teaching promotes community values and beliefs, and stands in sharp contrast to the approach taken in United States public schools. Technology is seen as a tool for making the community prosper. For this community to sustain itself, economic stability must be maintained. Their economic stability is dependent on the outside community purchasing their goods and services; producing these goods and services requires use of appropriate technologies. In United States public schools, science is encouraged to be taught as a way of knowing that implies a critical view about how the world works. In addition, public schools promote new and innovative technologies. Thus, they become fertile soil for developing new concepts about implementing scientific ideas and using technology. For the Amish Mennonites, rigorous standards, such as the scientific method as addressed in the public school, do not exist. In contrast, critical analysis of any new technology is always used in this community.
Brancu, Mira; Mann-Wrobel, Monica; Beckham, Jean C; Wagner, H Ryan; Elliott, Alyssa; Robbins, Allison T; Wong, Madrianne; Berchuck, Ania E; Runnals, Jennifer J
2016-03-01
Subthreshold posttraumatic stress disorder (PTSD) is a chronic condition that is often ignored, the cumulative effects of which can negatively impact an individual's quality of life and overall health care costs. However, subthreshold PTSD prevalence rates and impairment remain unclear due to variations in research methodology. This study examined the existing literature in order to recommend approaches to standardize subthreshold PTSD assessment. We conducted (a) a meta-analysis of subthreshold PTSD prevalence rates and (b) a comparison of the functional impairment associated with the 3 most commonly studied subthreshold PTSD definitions. Meta-analytic results revealed that the average prevalence rate of subthreshold PTSD across studies was 14.7%, with a lower rate (12.6%) among the most methodologically rigorous studies and a higher rate (15.6%) across less rigorous studies. There were significant methodological differences among reviewed studies with regard to definition, measurement, and population. Different definitions led to prevalence rates ranging between 13.7% and 16.4%. Variability in prevalence rates was most related to population and sample composition, with trauma type and community (vs. epidemiological) samples significantly impacting heterogeneity. Qualitative information gathered from studies presenting functional correlates supported current evidence that psychological and behavioral parameters were worse among subthreshold PTSD groups compared with no-PTSD groups, but not as severe as the impairment in PTSD groups. Several studies also reported significantly increased risk of suicidality and hopelessness as well as higher health care utilization rates among those with subthreshold PTSD (compared with trauma-exposed no-PTSD samples). Based on these findings, we propose recommendations for developing a standard approach to the evaluation of subthreshold PTSD. (c) 2016 APA, all rights reserved.
The long-time dynamics of two hydrodynamically-coupled swimming cells.
Michelin, Sébastien; Lauga, Eric
2010-05-01
Swimming microorganisms such as bacteria or spermatozoa are typically found in dense suspensions and exhibit collective modes of locomotion qualitatively different from those displayed by isolated cells. In the dilute limit where fluid-mediated interactions can be treated rigorously, the long-time hydrodynamics of a collection of cells result from interactions with many other cells, and as such typically elude an analytical approach. Here, we consider the only case where such a problem can be treated rigorously and analytically, namely when the cells have spatially confined trajectories, such as the spermatozoa of some marine invertebrates. We consider two spherical cells swimming, when isolated, with arbitrary circular trajectories, and derive the long-time kinematics of their relative locomotion. We show that in the dilute limit, where the cells are much further apart than both their size and the size of their circular motion, a separation of time scales occurs between a fast (intrinsic) swimming time and a slow time over which hydrodynamic interactions lead to changes in the relative position and orientation of the swimmers. We perform a multiple-scale analysis and derive the effective two-dimensional dynamical system describing the long-time behavior of the pair of cells. We show that the system displays one type of equilibrium and two types of rotational equilibrium, all of which are found to be unstable. A detailed mathematical analysis of the dynamical system further allows us to show that only two cell-cell behaviors are possible in the limit t → ∞: either the cells are attracted to each other (possibly monotonically), or they are repelled (possibly monotonically as well), which we confirm with numerical computations. Our analysis therefore shows that, even in the dilute limit, hydrodynamic interactions lead to new modes of cell-cell locomotion.
Putting climate impact estimates to work: the empirical approach of the American Climate Prospectus
NASA Astrophysics Data System (ADS)
Jina, A.; Hsiang, S. M.; Kopp, R. E., III; Rasmussen, D.; Rising, J.
2014-12-01
The American Climate Prospectus (ACP), the technical analysis underlying the Risky Business project, quantitatively assesses climate risks posed to the United States' economy in a number of sectors [1]. Four of these - crop yield, crime, labor productivity, and mortality - draw upon research which identifies social impacts using contemporary variability in climate. We first identify a group of rigorous studies that use climate variability to identify responses to temperature and precipitation while controlling for unobserved differences between locations. To incorporate multiple studies from a single sector, we employ a meta-analytical approach that draws on Bayesian methods commonly used in medical research and previously implemented in [2]. We generate a series of aggregate response functions for each sector using this meta-analytical method. We combine response functions with downscaled physical climate projections to estimate climate impacts out to the end of the century, incorporating uncertainty from statistical estimates, weather, climate models, and different emissions scenarios. Incorporating multiple studies in a single estimation framework allows us to directly compare impacts across the economy. We find that increased mortality has the largest effect on the US economy, followed by costs associated with decreased labor productivity. Agricultural losses and increases in crime contribute lesser but nonetheless substantial costs, and agriculture, notably, shows many areas benefiting from projected climate changes. The ACP also presents results throughout the 21st century. The dynamics of each of the impact categories differ, with, for example, mortality showing little change until the end of the century, but crime showing a monotonic increase from the present day. The ACP approach can expand to include new findings in current sectors, new sectors, and new geographical areas of interest. It represents an analytical framework that can incorporate empirical studies into a broad characterization of climate impacts across an economy, ensuring that each individual study can contribute to guiding policy priorities on climate change. References: [1] T. Houser et al. (2014), American Climate Prospectus, www.climateprospectus.org. [2] Hsiang, Burke, and Miguel (2013), Science.
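As a rough illustration of the pooling step described above, the sketch below performs a simple inverse-variance random-effects meta-analysis (DerSimonian-Laird) on invented per-study dose-response estimates. It is a stand-in for, not a reproduction of, the Bayesian hierarchical approach used in the ACP; all effect sizes and standard errors are made up.

```python
import numpy as np

# Illustrative per-study estimates (e.g., impact per degree C) and standard errors;
# numbers are invented, not from the American Climate Prospectus.
beta = np.array([0.8, 1.2, 0.5, 1.0])
se   = np.array([0.3, 0.4, 0.2, 0.5])

w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights
beta_fe = np.sum(w * beta) / np.sum(w)
Q = np.sum(w * (beta - beta_fe) ** 2)             # Cochran's heterogeneity statistic
k = len(beta)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
beta_re = np.sum(w_re * beta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {beta_re:.2f} +/- {se_re:.2f} (between-study tau^2 = {tau2:.3f})")
```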
Wu, Zheyang; Zhao, Hongyu
2012-01-01
For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.
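A minimal, purely illustrative example of the kind of analytical power calculation discussed is the power of a single-marker (marginal search) test under a genome-wide Bonferroni threshold, approximated here through a noncentral chi-square distribution. The framework in the article goes well beyond this, covering multi-marker searches, linkage disequilibrium, and correlated score statistics; the effect-size parameterization below is an assumption for illustration only.

```python
from scipy import stats

def marginal_power(n, maf, beta, alpha=5e-8):
    """Approximate power of a single-marker additive-coding score/Wald test.

    n: sample size, maf: minor allele frequency, beta: per-allele effect on a
    standardized trait, alpha: Bonferroni-style genome-wide threshold.
    Illustrative approximation, not the paper's exact framework.
    """
    var_g = 2 * maf * (1 - maf)              # variance of the additive genotype coding
    ncp = n * var_g * beta**2                # noncentrality of the 1-df chi-square
    crit = stats.chi2.isf(alpha, df=1)
    return stats.ncx2.sf(crit, df=1, nc=ncp)

print(marginal_power(n=5_000, maf=0.3, beta=0.05))
print(marginal_power(n=50_000, maf=0.3, beta=0.05))
```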
NASA Astrophysics Data System (ADS)
Boss, Alan P.
2009-03-01
The disk instability mechanism for giant planet formation is based on the formation of clumps in a marginally gravitationally unstable protoplanetary disk, which must lose thermal energy through a combination of convection and radiative cooling if they are to survive and contract to become giant protoplanets. While there is good observational support for forming at least some giant planets by disk instability, the mechanism has become theoretically contentious, with different three-dimensional radiative hydrodynamics codes often yielding different results. Rigorous code testing is required to make further progress. Here we present two new analytical solutions for radiative transfer in spherical coordinates, suitable for testing the code employed in all of the Boss disk instability calculations. The testing shows that the Boss code radiative transfer routines do an excellent job of relaxing to and maintaining the analytical results for the radial temperature and radiative flux profiles for a spherical cloud with high or moderate optical depths, including the transition from optically thick to optically thin regions. These radial test results are independent of whether the Eddington approximation, diffusion approximation, or flux-limited diffusion approximation routines are employed. The Boss code does an equally excellent job of relaxing to and maintaining the analytical results for the vertical (θ) temperature and radiative flux profiles for a disk with a height proportional to the radial distance. These tests strongly support the disk instability mechanism for forming giant planets.
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very small angular separations (star-to-planet flux ratios of order 10^10 at separations of a few tenths of an arcsecond). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. The analytical model can be applied both to static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte-Carlo simulations that are otherwise required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.
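An analytical model of this kind ultimately expresses the mean contrast as a quadratic form in the segment-level aberration coefficients, which is what makes fast evaluation and direct inversion for tolerancing possible. The toy sketch below only illustrates that idea: the matrix M, the contrast floor C0, and all numbers are hypothetical placeholders, not the pair-based model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_seg = 6                                     # hypothetical number of segment modes

# Hypothetical symmetric positive semi-definite "contrast matrix"
A = rng.normal(size=(n_seg, n_seg))
M = A @ A.T * 1e-8                            # contrast per (radian of aberration)^2
C0 = 1e-10                                    # hypothetical coronagraph contrast floor

def contrast(a):
    """Mean contrast as a quadratic form in segment aberration coefficients a (rad)."""
    return C0 + a @ M @ a

a = rng.normal(scale=1e-2, size=n_seg)        # one random draw of segment phasing errors
print("contrast for this aberration draw:", contrast(a))

# Inverting the model: uniform per-segment tolerance for a target contrast,
# assuming independent segments each allowed an equal share of the budget.
C_target = 1e-9
tol = np.sqrt((C_target - C0) / (n_seg * np.diag(M)))
print("per-segment rms tolerances (rad):", tol)
```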
Asoubar, Daniel; Wyrowski, Frank
2015-07-27
The computer-aided design of high-quality mono-mode, continuous-wave solid-state lasers requires fast, flexible and accurate simulation algorithms. Therefore, in this work a model for the calculation of the dominant transverse mode structure is introduced. It is based on a generalization of the scalar Fox and Li algorithm to a fully vectorial light representation. To provide a flexible modeling concept for different resonator geometries containing various optical elements, rigorous and approximate solutions of Maxwell's equations are combined in different subdomains of the resonator. This approach allows the simulation of a wide range of passive intracavity components as well as active media. For the numerically efficient simulation of nonlinear gain, thermal lensing and stress-induced birefringence effects in solid-state active crystals, a semi-analytical vectorial beam propagation method is discussed in detail. As a numerical example, the beam quality and output power of a flash-lamp-pumped Nd:YAG laser are improved. To that end we compensate the influence of stress-induced birefringence and thermal lensing by an aspherical mirror and a 90° quartz polarization rotator.
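At its numerical core, the Fox and Li approach finds the dominant transverse mode as the dominant eigenvector of the cavity round-trip operator by repeated propagation. The sketch below shows that core as a generic power iteration on a placeholder round-trip matrix; in a real resonator model the matrix-vector product would be replaced by propagation through apertures, gain, and mirrors, and the vectorial, semi-analytical treatment of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
# Placeholder discretized round-trip operator (in practice: free-space propagation,
# apertures, gain, mirror reflections); here simply a random non-Hermitian matrix.
K = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

u = rng.normal(size=N) + 0j                   # initial field guess
for _ in range(500):                          # Fox-Li style round trips
    u = K @ u
    u /= np.linalg.norm(u)                    # renormalize after each round trip

gamma = u.conj() @ (K @ u)                    # round-trip eigenvalue (Rayleigh quotient)
print("dominant round-trip eigenvalue |gamma| =", abs(gamma))
print("check against full eigendecomposition  =", max(abs(np.linalg.eigvals(K))))
```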
Maximizing kinetic energy transfer in one-dimensional many-body collisions
NASA Astrophysics Data System (ADS)
Ricardo, Bernard; Lee, Paul
2015-03-01
The main problem discussed in this paper involves a simple one-dimensional two-body collision, which can be extended into a chain of one-dimensional many-body collisions. The result is quite interesting, as it provides a thorough mathematical understanding that will help in designing a chain system for maximum energy transfer for a range of collision types. In this paper, we show that there is a way to improve the kinetic energy transfer between two masses, and that the idea can be applied recursively. However, this method only works for a certain range of collision types, indicated by a range of coefficients of restitution. Although the concepts of momentum, elastic and inelastic collisions, and Newton's laws are taught in junior college physics, especially in Singapore schools, students at this level are not expected to be able to solve this problem quantitatively, as it requires rigorous mathematics, including calculus. Nevertheless, this paper provides clear analytical steps that address some common misconceptions in students' thinking about one-dimensional collisions.
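For a single one-dimensional collision with coefficient of restitution e, momentum conservation plus the restitution law give the post-collision velocities in closed form. The sketch below uses these standard textbook formulas to compare direct transfer from a light projectile to a heavy target with transfer through an intermediate mass, which is the effect the paper analyzes; the geometric-mean intermediate mass shown is the optimum for the elastic case, and the specific masses are arbitrary examples.

```python
def collide(m1, u1, m2, u2, e):
    """1D collision with coefficient of restitution e; returns final velocities."""
    v1 = (m1 * u1 + m2 * u2 + e * m2 * (u2 - u1)) / (m1 + m2)
    v2 = (m1 * u1 + m2 * u2 + e * m1 * (u1 - u2)) / (m1 + m2)
    return v1, v2

def ke(m, v):
    return 0.5 * m * v * v

m1, m3, u, e = 1.0, 9.0, 1.0, 1.0          # elastic case: light projectile, heavy target

# Direct collision m1 -> m3 (target initially at rest)
_, v3 = collide(m1, u, m3, 0.0, e)
direct = ke(m3, v3) / ke(m1, u)

# Transfer through an intermediate mass m2 = sqrt(m1 * m3)
m2 = (m1 * m3) ** 0.5
_, v2 = collide(m1, u, m2, 0.0, e)
_, v3 = collide(m2, v2, m3, 0.0, e)
chained = ke(m3, v3) / ke(m1, u)

print(f"fraction of KE delivered: direct {direct:.3f}, via intermediate {chained:.3f}")
```

Running this gives 0.36 for the direct collision and about 0.56 through the intermediate mass, illustrating why inserting bodies can raise the delivered kinetic energy for sufficiently elastic collisions.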
Weuve, Jennifer; Proust-Lima, Cécile; Power, Melinda C; Gross, Alden L; Hofer, Scott M; Thiébaut, Rodolphe; Chêne, Geneviève; Glymour, M Maria; Dufouil, Carole
2015-09-01
Clinical and population research on dementia and related neurologic conditions, including Alzheimer's disease, faces several unique methodological challenges. Progress to identify preventive and therapeutic strategies rests on valid and rigorous analytic approaches, but the research literature reflects little consensus on "best practices." We present findings from a large scientific working group on research methods for clinical and population studies of dementia, which identified five categories of methodological challenges as follows: (1) attrition/sample selection, including selective survival; (2) measurement, including uncertainty in diagnostic criteria, measurement error in neuropsychological assessments, and practice or retest effects; (3) specification of longitudinal models when participants are followed for months, years, or even decades; (4) time-varying measurements; and (5) high-dimensional data. We explain why each challenge is important in dementia research and how it could compromise the translation of research findings into effective prevention or care strategies. We advance a checklist of potential sources of bias that should be routinely addressed when reporting dementia research. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Overview of Aro Program on Network Science for Human Decision Making
NASA Astrophysics Data System (ADS)
West, Bruce J.
This program brings together researchers from disparate disciplines to work on a complex research problem that defies confinement within any single discipline. Consequently, not only are new and rewarding solutions sought and obtained for a problem of importance to society and the Army, that is, the human dimension of complex networks, but, in addition, collaborations are established that would not otherwise have formed given the traditional disciplinary compartmentalization of research. This program develops the basic research foundation of a science of networks supporting the linkage between the physical and human (cognitive and social) domains as they relate to human decision making. The strategy is to extend the recent methods of non-equilibrium statistical physics to non-stationary, renewal stochastic processes that appear to be characteristic of the interactions among nodes in complex networks. We also pursue understanding of the phenomenon of synchronization, whose mathematical formulation has recently provided insight into how complex networks reach accommodation and cooperation. The theoretical analyses of complex networks, although mathematically rigorous, often elude analytic solutions and require computer simulation and computation to analyze the underlying dynamic process.
The Visual Representation of 3D Object Orientation in Parietal Cortex
Cowan, Noah J.; Angelaki, Dora E.
2013-01-01
An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830
Translational medicine in the Age of Big Data.
Tatonetti, Nicholas P
2017-10-12
The ability to collect, store and analyze massive amounts of molecular and clinical data is fundamentally transforming the scientific method and its application in translational medicine. Collecting observations has always been a prerequisite for discovery, and great leaps in scientific understanding are accompanied by an expansion of this ability. Particle physics, astronomy and climate science, for example, have all greatly benefited from the development of new technologies enabling the collection of larger and more diverse data. Unlike medicine, however, each of these fields also has a mature theoretical framework against which new data can be evaluated and incorporated; to put it another way, there are no 'first principles' from which a healthy human could be analytically derived. The worry, and it is a valid concern, is that, without a strong theoretical underpinning, the inundation of data will cause medical research to devolve into a haphazard enterprise without discipline or rigor. The Age of Big Data harbors tremendous opportunity for biomedical advances, but will also be treacherous and demanding on future scientists. © The Author 2017. Published by Oxford University Press.
Facile fabrication of CNT-based chemical sensor operating at room temperature
NASA Astrophysics Data System (ADS)
Sheng, Jiadong; Zeng, Xian; Zhu, Qi; Yang, Zhaohui; Zhang, Xiaohua
2017-12-01
This paper describes a simple, low-cost and effective route to fabricate CNT-based chemical sensors that operate at room temperature. Firstly, the incorporation of silk fibroin in vertically aligned CNT arrays (CNTA) obtained through a thermal chemical vapor deposition (CVD) method makes it feasible to remove the CNT arrays directly from their substrates without any rigorous acid or sonication treatment. Through a simple one-step in situ polymerization of anilines, the functionalization of the CNT arrays with polyaniline (PANI) significantly improves the sensing performance of CNT-based chemical sensors in detecting ammonia (NH3) and hydrogen chloride (HCl) vapors. Chemically modified CNT arrays also show responses to organic vapors such as menthol, ethyl acetate and acetone. Although the detection limits of the chemically modified CNT-based sensors are of the same order of magnitude as those reported in previous studies, these sensors offer the advantages of simplicity, low cost and energy efficiency in the preparation and fabrication of devices. Additionally, a linear relationship between the relative sensitivity and the concentration of analyte makes precise estimation of the concentrations of trace chemical vapors possible.
NASA Astrophysics Data System (ADS)
Lee, Hyomin; Jung, Yeonsu; Park, Sungmin; Kim, Ho-Young; Kim, Sung Jae
2016-11-01
Generally, an ion depletion region near a permselective medium is induced by the predominant ion flux through the medium. External electric fields or hydraulic pressure have been reported as the driving forces. Among these, imbibition through the nanoporous medium was chosen here as the mechanism to spontaneously generate the ion depletion region. The water-absorbing process leads to the predominant ion flux, so that spontaneous formation of the ion depletion zone is expected even in the absence of any driving force other than the inherent capillary action. In this presentation, we derive analytical solutions for this spontaneous phenomenon using a perturbation method and asymptotic analysis. Using the analysis, we find that there is also a spontaneous accumulation regime, depending on the mobility of the dissolved electrolytic species. Therefore, rigorous analysis of the spontaneous ion depletion and accumulation phenomena provides a key perspective for the control of ion transport in nanofluidic systems such as desalinators, preconcentrators, and energy harvesting devices. Samsung Research Funding Center of Samsung Electronics (SRFC-MA1301-02) and BK21 plus program of Creative Research Engineer Development IT, Seoul National University.
5D-Tracking of a nanorod in a focused laser beam--a theoretical concept.
Griesshammer, Markus; Rohrbach, Alexander
2014-03-10
Back-focal plane (BFP) interferometry is a very fast and precise method to track the 3D position of a sphere within a focused laser beam using a simple quadrant photo diode (QPD). Here we present a concept of how to track and recover the 5D state of a cylindrical nanorod (3D position and 2 tilt angles) in a laser focus by analyzing the interference of unscattered light and light scattered at the cylinder. The analytical theoretical approach is based on Rayleigh-Gans scattering together with a local field approximation for an infinitely thin cylinder. The approximated BFP intensities compare well with those from a more rigorous numerical approach. It turns out that a displacement of the cylinder results in a modulation of the BFP intensity pattern, whereas a tilt of the cylinder results in a shift of this pattern. We therefore propose the concept of a local QPD in the BFP of a detection lens, where the QPD center is shifted by the angular coordinates of the cylinder tilt.
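In back-focal-plane interferometry, the lateral position signals come from normalized differences of the four quadrant sums of the BFP intensity, so a shift of the whole pattern (a tilt, in the concept above) and a modulation of the pattern (a displacement) map onto the detector signals differently. The sketch below shows only the generic quadrant-signal computation on a synthetic intensity pattern; the pattern itself is an arbitrary Gaussian placeholder, not the Rayleigh-Gans result of the paper.

```python
import numpy as np

def qpd_signals(I):
    """Normalized quadrant-photodiode signals from a 2D intensity image I."""
    ny, nx = I.shape
    total = I.sum()
    left, right = I[:, : nx // 2].sum(), I[:, nx // 2 :].sum()
    top, bottom = I[: ny // 2].sum(), I[ny // 2 :].sum()
    return (right - left) / total, (top - bottom) / total

# Synthetic BFP pattern: a Gaussian spot whose center offset mimics a pattern shift
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)
for shift in (0.0, 0.1, 0.2):
    I = np.exp(-((X - shift) ** 2 + Y**2) / 0.1)
    sx, sy = qpd_signals(I)
    print(f"pattern shift {shift:+.2f} -> QPD signals ({sx:+.3f}, {sy:+.3f})")
```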
Bamberger, Michael; Tarsilla, Michele; Hesse-Biber, Sharlene
2016-04-01
Many widely used impact evaluation designs, including randomized control trials (RCTs) and quasi-experimental designs (QEDs), frequently fail to detect what are often quite serious unintended consequences of development programs. This seems surprising, as experienced planners and evaluators are well aware that unintended consequences frequently occur. Most evaluation designs are intended to determine whether there is credible evidence (statistical, theory-based or narrative) that programs have achieved their intended objectives, and the logic of many evaluation designs, even those that are considered the most "rigorous," does not permit the identification of outcomes that were not specified in the program design. We take the example of RCTs as they are considered by many to be the most rigorous evaluation designs. We present a number of cases to illustrate how infusing RCTs with a mixed-methods approach (sometimes called an "RCT+" design) can strengthen the credibility of these designs and can also capture important unintended consequences. We provide a Mixed Methods Evaluation Framework that identifies 9 ways in which unintended consequences can occur, and we apply this framework to two of the case studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Goldie, Sue
2006-11-01
Cervical cancer remains a leading cause of cancer death among women living in low-resource settings. In the last 3 decades, cytologic screening has, in theory, been available, and yet more than 6 million women have died of this preventable disease. The necessary resources, infrastructure, and technological expertise, together with the need for repeated screenings at regular intervals, make cytologic screening difficult to implement in poor countries. As noncytologic approaches for the detection of HPV, simple visual screening methods for anogenital lesions caused by HPV, and the availability of an HPV-16/18 vaccine will enhance the linkage between screening and treatment, multiple factors will need to be considered when designing new, or modifying existing, prevention strategies. Country-specific decisions regarding the best strategy for cervical cancer control will need to rely on data from many sources, take into account complex epidemiologic, economic, social, political, and cultural factors, and be made despite uncertainty and incomplete information. A rigorous decision-analytic approach using computer-based modeling methods enables linkage of the knowledge gained from empirical studies to real-world situations. This chapter provides an introduction to these methods, reviews lessons learned from cost-effectiveness analyses of cervical cancer screening in developed and developing countries, and emphasizes important qualitative themes to consider in designing cervical cancer prevention policies. © 2006 International Federation of Gynecology and Obstetrics.
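Decision-analytic models of the kind described are often implemented as Markov cohort simulations, in which a population moves between health states with fixed or time-varying transition probabilities while accumulating discounted costs and health outcomes each cycle. The minimal sketch below is generic: the states, probabilities, costs, and quality weights are invented for illustration and bear no relation to the chapter's models.

```python
import numpy as np

# Hypothetical 3-state model: Well, Cancer, Dead (annual cycles)
P = np.array([                 # transition probabilities per cycle (rows sum to 1)
    [0.98, 0.015, 0.005],      # from Well
    [0.00, 0.85, 0.15],        # from Cancer
    [0.00, 0.00, 1.00],        # Dead is absorbing
])
cost = np.array([10.0, 2000.0, 0.0])    # cost per cycle in each state (hypothetical)
qaly = np.array([1.0, 0.6, 0.0])        # quality-adjusted life-years per cycle

state = np.array([1.0, 0.0, 0.0])       # cohort starts in "Well"
disc, total_cost, total_qaly = 0.97, 0.0, 0.0
for t in range(40):                     # 40 annual cycles
    total_cost += disc**t * state @ cost
    total_qaly += disc**t * state @ qaly
    state = state @ P                   # advance the cohort one cycle
print(f"discounted cost {total_cost:.0f}, discounted QALYs {total_qaly:.2f}")
```

Comparing two such models, one per strategy, yields the incremental cost per QALY gained that underpins the cost-effectiveness comparisons mentioned above.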
Cordero, Chiara; Canale, Francesca; Del Rio, Daniele; Bicchi, Carlo
2009-11-01
The present study focuses on the flavan-3-ols that characterize the antioxidant properties of fermented tea (Camellia sinensis). These bioactive compounds, the object of nutritional claims in commercial products, should be quantified with rigorous analytical procedures whose accuracy and precision have been established with a known level of confidence. An HPLC-UV/DAD method, able to detect and quantify flavan-3-ols in infusions and ready-to-drink teas, was developed for routine analysis and validated by characterizing several performance parameters. The accuracy assessment was run through a series of LC-MS/MS analyses. Epigallocatechin, (+)-catechin, (-)-epigallocatechin gallate, (-)-epicatechin, (-)-gallocatechin gallate, (-)-epicatechin gallate, and (-)-catechin gallate were chosen as markers of the polyphenolic fraction. Quantitative results showed that samples obtained from tea leaf infusion were richer in polyphenolic antioxidants than those obtained through other industrial processes. The influence of shelf-life and packaging material on the flavan-3-ol content was also considered; the markers decreased with an exponential trend as a function of time within the shelf-life, while the packaging materials were shown to influence the composition of the flavan-3-ol fraction differently over time. The method presented here provides quantitative results with a known level of confidence and is suitable for routine quality control of iced teas whose antioxidant properties are the object of nutritional claims.
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we propose two methods to refine the rigorous sensor model: 1) refining the exterior orientation parameters (EOPs) by correcting the attitude angle bias, and 2) refining the interior orientation model by calibrating the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEMs (Digital Elevation Models) and DOMs (Digital Ortho Maps) are automatically generated.
Cavitt, L C; Sams, A R
2003-07-01
Studies were conducted to develop a non-destructive method for monitoring the rate of rigor mortis development in poultry and to evaluate the effectiveness of electrical stimulation (ES). In the first study, 36 male broilers in each of two trials were processed at 7 wk of age. After being bled, half of the birds received electrical stimulation (400 to 450 V, 400 to 450 mA, for seven pulses of 2 s on and 1 s off), and the other half were designated as controls. At 0.25 and 1.5 h postmortem (PM), carcasses were evaluated for the angles of the shoulder, elbow, and wing tip and the distance between the elbows. Breast fillets were harvested at 1.5 h PM (after chilling) from all carcasses. Fillet samples were excised and frozen for later measurement of pH and R-value, and the remainder of each fillet was held on ice until 24 h postmortem. Shear value and pH means were significantly lower, but R-value means were higher (P < 0.05), for the ES fillets compared to the controls, suggesting acceleration of rigor mortis by ES. The physical dimensions of the shoulder and elbow changed (P < 0.05) during rigor mortis development and with ES. These results indicate that physical measurements of the wings may be useful as a nondestructive indicator of rigor development and for monitoring the effectiveness of ES. In the second study, 60 male broilers in each of two trials were processed at 7 wk of age. At 0.25, 1.5, 3.0, and 6.0 h PM, carcasses were evaluated for the distance between the elbows. At each time point, breast fillets were harvested from each carcass. Fillet samples were excised and frozen for later measurement of pH and sarcomere length, whereas the remainder of each fillet was held on ice until 24 h PM. Shear value and pH means decreased (P < 0.05), whereas sarcomere length means increased (P < 0.05) over time, indicating rigor mortis development. Elbow distance decreased (P < 0.05) with rigor development and was correlated (P < 0.01) with shear value (r = 0.2581), sarcomere length (r = -0.3079), and pH (r = 0.6303). These results suggest that elbow distance could be used in conjunction with other detection methods for optically automating the measurement of rigor mortis development in broiler carcasses.
Can power-law scaling and neuronal avalanches arise from stochastic dynamics?
Touboul, Jonathan; Destexhe, Alain
2010-02-11
The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to changes in the detection threshold, nor does it survive more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power law is only apparent in logarithmic representations and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal without ambiguity that the avalanche size is distributed as a power law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
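The more rigorous procedure advocated above, as opposed to regression on log-log axes, follows the standard maximum-likelihood fit of a power-law exponent combined with a Kolmogorov-Smirnov distance between the data and the fitted distribution. A minimal version of that recipe is sketched below on synthetic data; it is a generic illustration (Clauset-style continuous power-law fit), not the authors' exact pipeline.

```python
import numpy as np

def fit_power_law(x, xmin):
    """Continuous power-law MLE for the exponent, plus the KS distance to the fit."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))      # maximum-likelihood exponent
    xs = np.sort(x)
    cdf_emp = np.arange(1, len(xs) + 1) / len(xs)
    cdf_fit = 1.0 - (xs / xmin) ** (1.0 - alpha)
    return alpha, np.max(np.abs(cdf_emp - cdf_fit))      # Kolmogorov-Smirnov distance

rng = np.random.default_rng(4)
# A true power law (alpha = 2.5) versus a lognormal "impostor"
pl = (1.0 - rng.random(10_000)) ** (-1.0 / 1.5)
ln = rng.lognormal(mean=0.5, sigma=1.0, size=10_000) + 1.0

for name, data in [("power law", pl), ("lognormal", ln)]:
    a, ks = fit_power_law(data, xmin=1.0)
    print(f"{name:10s}: alpha = {a:.2f}, KS distance = {ks:.3f}")
```

The lognormal sample can look convincingly straight on a log-log plot, yet its KS distance against the fitted power law is markedly larger, which is exactly the point made in the abstract.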
Qian, Ma; Ma, Jie
2009-06-07
Fletcher's spherical substrate model [J. Chem. Phys. 29, 572 (1958)] is a basic model for understanding heterogeneous nucleation phenomena in nature. However, a rigorous thermodynamic formulation of the model has been missing due to the significant complexities involved. This has not only left the classical model deficient but has also likely obscured other important features, which would otherwise have helped to better understand and control heterogeneous nucleation on spherical substrates. This work presents a rigorous thermodynamic formulation of Fletcher's model using a novel analytical approach and discusses the new perspectives derived. In particular, it is shown that the use of an intermediate variable, a selected geometrical angle or pseudocontact angle between the embryo and the spherical substrate, reveals extraordinary similarities between the first derivatives of the free energy change with respect to embryo radius for nucleation on spherical and flat substrates. Enlightened by this discovery, it was found that there exists a local maximum in the difference between the equivalent contact angles for nucleation on spherical and flat substrates, due to the existence of a local maximum in the difference between the shape factors for nucleation on spherical and flat substrate surfaces. This helps in understanding the complexity of heterogeneous nucleation phenomena in practical systems. Also, it was found that the unfavorable size effect occurs primarily when R < 5r* (R: radius of the substrate; r*: critical embryo radius) and diminishes rapidly with increasing R/r* beyond R/r* = 5. This finding provides a baseline for controlling size effects in heterogeneous nucleation.
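Fletcher's classical result expresses the nucleation barrier on a spherical substrate through a shape factor f(m, x), with m = cos θ the contact-angle parameter and x = R/r* the substrate-to-critical-embryo radius ratio. The sketch below evaluates the commonly quoted form of this factor, transcribed from the standard literature, so treat the expression as an assumption rather than a restatement of this paper's reformulation; it illustrates how the size effect fades for x beyond about 5.

```python
import numpy as np

def fletcher_f(m, x):
    """Fletcher (1958) shape factor for nucleation on a spherical substrate.
    m = cos(theta), x = R / r*.  Standard textbook form, quoted as an assumption."""
    g = np.sqrt(1.0 + x**2 - 2.0 * m * x)
    return 0.5 * (
        1.0
        + ((1.0 - m * x) / g) ** 3
        + x**3 * (2.0 - 3.0 * (x - m) / g + ((x - m) / g) ** 3)
        + 3.0 * m * x**2 * ((x - m) / g - 1.0)
    )

m = np.cos(np.radians(60.0))
for x in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"x = R/r* = {x:5.1f}  ->  f = {fletcher_f(m, x):.4f}")
# Flat-substrate limit for comparison: (2 + m)(1 - m)^2 / 4
print("flat-substrate limit:", (2 + m) * (1 - m) ** 2 / 4)
```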
A rigorous and simpler method of image charges
NASA Astrophysics Data System (ADS)
Ladera, C. L.; Donoso, G.
2016-07-01
The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which image charges are to be placed, and where, to solve the given electrostatics problem. Here we present an alternative image-charge method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The aforementioned required uniqueness and, more importantly, the guessing are therefore both dispensed with altogether. Our two new theoretical foundations also allow the image-charge method to be introduced in earlier physics courses for engineering and science students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
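As a concrete illustration of what any image-charge construction must deliver (the textbook grounded-sphere case, not one of the article's own examples): a point charge q at distance d from the centre of a grounded conducting sphere of radius R is imaged by q' = -qR/d placed at distance R²/d from the centre. The sketch below verifies numerically that the superposed potential vanishes on the sphere's surface.

```python
import numpy as np

R, d, q = 1.0, 3.0, 1.0                 # sphere radius, charge distance, charge
q_img, d_img = -q * R / d, R**2 / d     # image charge and its location on the axis

def potential(x, y):
    """Potential of the charge plus its image (units with 1/(4*pi*eps0) = 1)."""
    r1 = np.hypot(x - d, y)             # distance to the real charge
    r2 = np.hypot(x - d_img, y)         # distance to the image charge
    return q / r1 + q_img / r2

theta = np.linspace(0, 2 * np.pi, 13)
on_surface = potential(R * np.cos(theta), R * np.sin(theta))
print("max |V| on the sphere surface:", np.max(np.abs(on_surface)))   # ~1e-16
```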
NASA Astrophysics Data System (ADS)
Gillam, Thomas P. S.; Lester, Christopher G.
2014-11-01
We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects; for example, jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic "matrix method" for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake-rate estimates and limits of better quality, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach.
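For reference, the heuristic matrix method critiqued above estimates the real and fake components of a lepton sample by inverting a small linear system built from loose and tight event counts and the measured real/fake efficiencies. The one-bin sketch below uses invented numbers; it shows the basic bookkeeping only, not the statistical refinements discussed in the paper.

```python
import numpy as np

# Invented inputs: probabilities for a real and a fake lepton to pass the tight cut
eps_real, eps_fake = 0.90, 0.20
# Observed event counts in the loose and tight selections
N_loose, N_tight = 10_000.0, 6_000.0

# N_loose = N_r + N_f ;  N_tight = eps_real * N_r + eps_fake * N_f
A = np.array([[1.0, 1.0],
              [eps_real, eps_fake]])
N_r, N_f = np.linalg.solve(A, [N_loose, N_tight])

# Fake background entering the tight (signal) selection
print(f"estimated real = {N_r:.0f}, fake = {N_f:.0f}, "
      f"fakes in tight selection = {eps_fake * N_f:.0f}")
```

The statistical weakness alluded to in the abstract arises partly because this inversion can return negative or otherwise unphysical yields when counts fluctuate, which a fully probabilistic treatment avoids.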
Shaver, Aaron C; Greig, Bruce W; Mosse, Claudio A; Seegmiller, Adam C
2015-05-01
Optimizing a clinical flow cytometry panel can be a subjective process dependent on experience. We develop a quantitative method to make this process more rigorous and apply it to B lymphoblastic leukemia/lymphoma (B-ALL) minimal residual disease (MRD) testing. We retrospectively analyzed our existing three-tube, seven-color B-ALL MRD panel and used our novel method to develop an optimized one-tube, eight-color panel, which was tested prospectively. The optimized one-tube, eight-color panel resulted in greater efficiency of time and resources with no loss in diagnostic power. Constructing a flow cytometry panel using a rigorous, objective, quantitative method permits optimization and avoids problems of interdependence and redundancy in a large, multiantigen panel. Copyright© by the American Society for Clinical Pathology.
Perspective: Randomized Controlled Trials Are Not a Panacea for Diet-Related Research
Hébert, James R; Frongillo, Edward A; Adams, Swann A; Turner-McGrievy, Gabrielle M; Hurley, Thomas G; Miller, Donald R; Ockene, Ira S
2016-01-01
Research into the role of diet in health faces a number of methodologic challenges in the choice of study design, measurement methods, and analytic options. Heavier reliance on randomized controlled trial (RCT) designs is suggested as a way to solve these challenges. We present and discuss 7 inherent and practical considerations with special relevance to RCTs designed to study diet: 1) the need for narrow focus; 2) the choice of subjects and exposures; 3) blinding of the intervention; 4) perceived asymmetry of treatment in relation to need; 5) temporal relations between dietary exposures and putative outcomes; 6) strict adherence to the intervention protocol, despite potential clinical counter-indications; and 7) the need to maintain methodologic rigor, including measuring diet carefully and frequently. Alternatives, including observational studies and adaptive intervention designs, are presented and discussed. Given the high noise-to-signal ratios introduced by using inaccurate assessment methods in studies with weak or inappropriate study designs (including RCTs), it is conceivable and indeed likely that the effects of diet are underestimated. No matter which designs are used, studies will require continued improvement in the assessment of dietary intake. As technology continues to improve, there is potential for enhanced accuracy and reduced user burden of dietary assessments that are applicable to a wide variety of study designs, including RCTs. PMID:27184269
NASA Astrophysics Data System (ADS)
Galyamin, S. N.; Tyukhtin, A. V.; Vorobev, V. V.; Aryshev, A.
2018-02-01
We consider a point charge and a Gaussian bunch of charged particles moving along the axis of a circular, perfectly conducting pipe with uniform dielectric filling and an open end. This semi-infinite waveguide is assumed to be located inside a collinear, infinite vacuum pipe with perfectly conducting walls and larger diameter. We deal with two cases, corresponding to the open end of the inner waveguide with and without a flange. Radiation produced by a charge or bunch flying from the dielectric part into the wide vacuum part is analyzed. We use a modified residue-calculus technique and construct a rigorous analytical theory describing the scattered field in each sub-area of the structure. Cherenkov radiation generated in the dielectric waveguide and penetrating into the vacuum regions of the structure is of main interest throughout the present paper. We show that this part of the radiation can be easily analyzed using the presented formalism. We also perform numerical simulations in the CST PS code and verify the analytical results.
Network as transconcept: elements for a conceptual demarcation in the field of public health
Amaral, Carlos Eduardo Menezes; Bosi, Maria Lúcia Magalhães
2016-01-01
The concept of network has been the main proposal for establishing an articulated mode of operation of health services; it has been appropriated in different ways in the field of public health, drawing on its uses in other disciplinary fields or even on common sense. Amid this diversity of uses and concepts, we recognize the need for a rigorous conceptual demarcation of networks in the field of health. This concern aims to preserve the strategic potential of the concept in research and planning in the field, overcoming uncertainties and distortions still observed in its discourse-analytic circulation in public health. To this end, we introduce the current uses of the network concept in different disciplinary fields, emphasizing dialogues with the field of public health. With this, we intend to stimulate discussion about the development of empirical dimensions and analytical models that may allow us to understand the processes produced within and around health networks. PMID:27556965
Pesce, Michael A.; Strauss, Shiela M.; Rosedale, Mary; Netterwald, Jane; Wang, Hangli
2016-01-01
Objectives To validate an ion exchange high-pressure liquid chromatography (HPLC) method for measuring glycated hemoglobin (HbA1c) in gingival crevicular blood (GCB) spotted on filter paper, for use in screening dental patients for diabetes. Methods We collected the GCB specimens for this study from the oral cavities of patients during dental visits, using rigorous strategies to obtain GCB that was as free of debris as possible. The analytical performance of the HPLC method was determined by measuring the precision, linearity, carryover, stability of HbA1c in GCB, and correlation of HbA1c results in GCB specimens with finger-stick blood (FSB) specimens spotted on filter paper. Results The coefficients of variation (CVs) for the inter- and intrarun precision of the method were less than 2.0%. Linearity ranged between 4.2% and 12.4%; carryover was less than 2.0%, and the stability of the specimen was 6 days at 4°C and as many as 14 days at −70°C. Linear regression analysis comparing the HbA1c results in GCB with FSB yielded a correlation coefficient of 0.993, a slope of 0.981, and an intercept of 0.13. The Bland-Altman plot showed no difference in the HbA1c results from the GCB and FSB specimens at normal, prediabetes, and diabetes HbA1c levels. Conclusion We validated an HPLC method for measuring HbA1c in GCB; this method can be used to screen dental patients for diabetes. PMID:26489673
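Method-comparison statistics of the kind reported, a regression of one method against the other together with a Bland-Altman difference analysis, can be reproduced in a few lines. The sketch below uses simulated paired GCB/FSB-style values (the slope, intercept, and noise level are invented), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
fsb = rng.uniform(4.5, 12.0, 60)                     # reference HbA1c values (%), simulated
gcb = 0.98 * fsb + 0.13 + rng.normal(0, 0.15, 60)    # comparison method, simulated

# Ordinary least-squares comparison (slope, intercept, correlation)
slope, intercept = np.polyfit(fsb, gcb, 1)
r = np.corrcoef(fsb, gcb)[0, 1]

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = gcb - fsb
bias = diff.mean()
loa = bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1)

print(f"slope {slope:.3f}, intercept {intercept:.2f}, r {r:.3f}")
print(f"bias {bias:.3f}, limits of agreement {loa[0]:.3f} to {loa[1]:.3f}")
```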
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sidler, Rolf, E-mail: rsidler@gmail.com; Carcione, José M.; Holliger, Klaus
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid-solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
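The radial discretization described above rests on Chebyshev collocation. The sketch below builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points (Trefethen-style, with the negative-sum trick for the diagonal) and checks it on a smooth function; it illustrates only this spatial-discretization ingredient, not the poro-elastic equations, the Fourier direction, or the multi-domain coupling.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T + np.eye(N + 1)            # avoid division by zero on the diagonal
    D = np.outer(c, 1.0 / c) / dX
    D -= np.diag(D.sum(axis=1))             # negative-sum trick fixes the diagonal
    return D, x

D, x = cheb(24)
f = np.exp(np.sin(np.pi * x))
df_exact = np.pi * np.cos(np.pi * x) * f
print("max derivative error:", np.max(np.abs(D @ f - df_exact)))  # spectrally small
```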
Predicting tensorial electrophoretic effects in asymmetric colloids
NASA Astrophysics Data System (ADS)
Mowitz, Aaron J.; Witten, T. A.
2017-12-01
We formulate a numerical method for predicting the tensorial linear response of a rigid, asymmetrically charged body to an applied electric field. This prediction requires calculating the response of the fluid to the Stokes drag forces on the moving body and on the countercharges near its surface. To determine the fluid's motion, we represent both the body and the countercharges using many point sources of drag known as Stokeslets. Finding the correct flow field amounts to finding the set of drag forces on the Stokeslets that is consistent with the relative velocities experienced by each Stokeslet. The method rigorously satisfies the condition that the object moves with no transfer of momentum to the fluid. We demonstrate that a sphere represented by 1999 well-separated Stokeslets on its surface produces flow and drag force like a solid sphere to 1% accuracy. We show that a uniformly charged sphere with 3998 body and countercharge Stokeslets obeys the Smoluchowski prediction [F. Morrison, J. Colloid Interface Sci. 34, 210 (1970), 10.1016/0021-9797(70)90171-2] for electrophoretic mobility when the countercharges lie close to the sphere. Spheres with dipolar and quadrupolar charge distributions rotate and translate as predicted analytically to 4% accuracy or better. We describe how the method can treat general asymmetric shapes and charge distributions. This method offers promise as a way to characterize and manipulate asymmetrically charged colloid-scale objects from biology (e.g., viruses) and technology (e.g., self-assembled clusters).
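The core linear-algebra step described above, finding forces on point Stokeslets consistent with prescribed velocities, can be illustrated with a minimal bead-model sketch. The Oseen tensor and the local self-mobility term (with a hypothetical regularization radius a) are standard textbook ingredients; the authors' actual formulation, including the countercharges and the zero-momentum constraint, is more elaborate.

```python
import numpy as np

def oseen_tensor(r, mu):
    """Free-space Stokeslet (Oseen) tensor G(r) = (I + r_hat r_hat^T) / (8*pi*mu*|r|)."""
    d = np.linalg.norm(r)
    rhat = r / d
    return (np.eye(3) + np.outer(rhat, rhat)) / (8.0 * np.pi * mu * d)

def stokeslet_forces(points, U, mu=1.0, a=0.01):
    """Forces on well-separated point Stokeslets so that each moves with rigid velocity U.
    a is a hypothetical local radius giving the diagonal (self) mobility 1/(6*pi*mu*a)."""
    n = len(points)
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            block = (np.eye(3) / (6.0 * np.pi * mu * a) if i == j
                     else oseen_tensor(points[i] - points[j], mu))
            M[3*i:3*i+3, 3*j:3*j+3] = block
    # velocities = mobility * forces, so solve for the consistent force set
    F = np.linalg.solve(M, np.tile(U, n))
    return F.reshape(n, 3)
```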
A Mathematical Account of the NEGF Formalism
NASA Astrophysics Data System (ADS)
Cornean, Horia D.; Moldoveanu, Valeriu; Pillet, Claude-Alain
2018-02-01
The main goal of this paper is to put on solid mathematical grounds the so-called Non-Equilibrium Green's Function (NEGF) transport formalism for open systems. In particular, we derive the Jauho-Meir-Wingreen formula for the time-dependent current through an interacting sample coupled to non-interacting leads. Our proof is non-perturbative and uses neither complex-time Keldysh contours, nor Langreth rules of 'analytic continuation'. We also discuss other technical identities (Langreth, Keldysh) involving various many body Green's functions. Finally, we study the Dyson equation for the advanced/retarded interacting Green's function and we rigorously construct its (irreducible) self-energy, using the theory of Volterra operators.
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Deffuant model of opinion formation in one-dimensional multiplex networks
NASA Astrophysics Data System (ADS)
Shang, Yilun
2015-10-01
Complex systems in the real world often operate through multiple kinds of links connecting their constituents. In this paper we propose an opinion formation model under bounded confidence over multiplex networks, consisting of edges at different topological and temporal scales. We determine rigorously the critical confidence threshold by exploiting probability theory and network science when the nodes are arranged on the integers ℤ, evolving in continuous time. It is found that the existence of ‘multiplexity’ impedes the convergence, and that working with the aggregated or summarized simplex network is inaccurate since it misses vital information. Analytical calculations are confirmed by extensive numerical simulations.
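For readers unfamiliar with the underlying dynamics, a minimal sketch of Deffuant bounded-confidence updating on a toy two-layer one-dimensional multiplex is given below. The layer definitions, parameter values, and discrete-time updating are illustrative choices, not the continuous-time construction analysed in the paper.

```python
import random

def deffuant_multiplex(n=200, eps=0.3, mu=0.5, steps=200_000, seed=0):
    """Toy Deffuant dynamics: nodes on a ring, layer 1 = nearest neighbours,
    layer 2 = next-nearest neighbours (an illustrative multiplex)."""
    random.seed(seed)
    x = [random.random() for _ in range(n)]          # initial opinions in [0, 1]
    for _ in range(steps):
        i = random.randrange(n)
        j = (i + random.choice([1, 2])) % n          # neighbour chosen via a random layer
        if abs(x[i] - x[j]) < eps:                   # bounded-confidence rule
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x
```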
Cost analysis of advanced turbine blade manufacturing processes
NASA Technical Reports Server (NTRS)
Barth, C. F.; Blake, D. E.; Stelson, T. S.
1977-01-01
A rigorous analysis was conducted to estimate relative manufacturing costs for high technology gas turbine blades prepared by three candidate materials process systems. The manufacturing costs for the same turbine blade configuration of directionally solidified eutectic alloy, an oxide dispersion strengthened superalloy, and a fiber reinforced superalloy were compared on a relative basis to the costs of the same blade currently in production utilizing the directional solidification process. An analytical process cost model was developed to quantitatively perform the cost comparisons. The impact of individual process yield factors on costs was also assessed as well as effects of process parameters, raw materials, labor rates and consumable items.
Experimental Observation and Theoretical Description of Multisoliton Fission in Shallow Water
NASA Astrophysics Data System (ADS)
Trillo, S.; Deng, G.; Biondini, G.; Klein, M.; Clauss, G. F.; Chabchoub, A.; Onorato, M.
2016-09-01
We observe the dispersive breaking of cosine-type long waves [Phys. Rev. Lett. 15, 240 (1965)] in shallow water, characterizing the highly nonlinear "multisoliton" fission over variable conditions. We provide new insight into the interpretation of the results by analyzing the data in terms of the periodic inverse scattering transform for the Korteweg-de Vries equation. In a wide range of dispersion and nonlinearity, the data compare favorably with our analytical estimate, based on a rigorous WKB approach, of the number of emerging solitons. We are also able to observe experimentally the universal Fermi-Pasta-Ulam recurrence in the regime of moderately weak dispersion.
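For orientation, the generic semiclassical (WKB) estimate for the number of solitons emerging from an initial profile u_0(x) in the small-dispersion Korteweg-de Vries equation counts the bound states of the associated Schrödinger operator; in one common normalization it reads as below (the paper's precise estimate and normalization may differ):

```latex
N \;\approx\; \frac{1}{\pi\varepsilon}\int_{-\infty}^{\infty}\sqrt{\max\{u_0(x),\,0\}}\;\mathrm{d}x ,
\qquad u_t + 6\,u\,u_x + \varepsilon^{2}u_{xxx} = 0 .
```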
NASA Astrophysics Data System (ADS)
Kryjevskaia, Mila; Stetzer, MacKenzie R.; Grosz, Nathaniel
2014-12-01
We have applied the heuristic-analytic theory of reasoning to interpret inconsistencies in student reasoning approaches to physics problems. This study was motivated by an emerging body of evidence that suggests that student conceptual and reasoning competence demonstrated on one task often fails to be exhibited on another. Indeed, even after instruction specifically designed to address student conceptual and reasoning difficulties identified by rigorous research, many undergraduate physics students fail to build reasoning chains from fundamental principles even though they possess the required knowledge and skills to do so. Instead, they often rely on a variety of intuitive reasoning strategies. In this study, we developed and employed a methodology that allowed for the disentanglement of student conceptual understanding and reasoning approaches through the use of sequences of related questions. We have shown that the heuristic-analytic theory of reasoning can be used to account for, in a mechanistic fashion, the observed inconsistencies in student responses. In particular, we found that students tended to apply their correct ideas in a selective manner that supported a specific and likely anticipated conclusion while neglecting to employ the same ideas to refute an erroneous intuitive conclusion. The observed reasoning patterns were consistent with the heuristic-analytic theory, according to which reasoners develop a "first-impression" mental model and then construct an argument in support of the answer suggested by this model. We discuss implications for instruction and argue that efforts to improve student metacognition, which serves to regulate the interaction between intuitive and analytical reasoning, are likely to lead to improved student reasoning.
The analytical validation of the Oncotype DX Recurrence Score assay
Baehner, Frederick L
2016-01-01
In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940
Harris, Sarah Parker; Gould, Robert; Fujiura, Glenn
2015-01-01
There is increasing theoretical consideration about the use of systematic and scoping reviews of evidence in informing disability and rehabilitation research and practice. Indicative of this trend, this journal published a piece by Rumrill, Fitzgerald and Merchant in 2010 explaining the utility and process for conducting reviews of intervention-based research. There is still need to consider how to apply such rigor when conducting more exploratory reviews of heterogeneous research. This article explores the challenges, benefits, and procedures for conducting rigorous exploratory scoping reviews of diverse evidence. The article expands upon Rumrill, Fitzgerald and Merchant's framework and considers its application to more heterogeneous evidence on the impact of social policy. A worked example of a scoping review of the Americans with Disabilities Act is provided with a procedural framework for conducting scoping reviews on the effects of a social policy. The need for more nuanced techniques for enhancing rigor became apparent during the review process. There are multiple methodological steps that can enhance the utility of exploratory scoping reviews. The potential of systematic consideration during the exploratory review process is shown as a viable method to enhance the rigor in reviewing diverse bodies of evidence.
Rigorous derivation of porous-media phase-field equations
NASA Astrophysics Data System (ADS)
Schmuck, Markus; Kalliadasis, Serafim
2017-11-01
The evolution of interfaces in Complex heterogeneous Multiphase Systems (CheMSs) plays a fundamental role in a wide range of scientific fields such as thermodynamic modelling of phase transitions, materials science, or as a computational tool for interfacial flow studies or material design. Here, we focus on phase-field equations in CheMSs such as porous media. To the best of our knowledge, we present the first rigorous derivation of error estimates for fourth order, upscaled, and nonlinear evolution equations. For CheMSs with heterogeneity ε, we obtain the convergence rate ε^{1/4}, which governs the error between the solution of the new upscaled formulation and the solution of the microscopic phase-field problem. This error behaviour has recently been validated computationally. Due to the wide range of applications of phase-field equations, we expect this upscaled formulation to allow for new modelling, analytic, and computational perspectives for interfacial transport and phase transformations in CheMSs. This work was supported by EPSRC, UK, through Grant Nos. EP/H034587/1, EP/L027186/1, EP/L025159/1, EP/L020564/1, EP/K008595/1, and EP/P011713/1 and from ERC via Advanced Grant No. 247031.
The Three-Component Defocusing Nonlinear Schrödinger Equation with Nonzero Boundary Conditions
NASA Astrophysics Data System (ADS)
Biondini, Gino; Kraus, Daniel K.; Prinari, Barbara
2016-12-01
We present a rigorous theory of the inverse scattering transform (IST) for the three-component defocusing nonlinear Schrödinger (NLS) equation with initial conditions approaching constant values with the same amplitude as x → ±∞. The theory combines and extends to a problem with non-zero boundary conditions three fundamental ideas: (i) the tensor approach used by Beals, Deift and Tomei for the n-th order scattering problem, (ii) the triangular decompositions of the scattering matrix used by Novikov, Manakov, Pitaevski and Zakharov for the N-wave interaction equations, and (iii) a generalization of the cross product via the Hodge star duality, which, to the best of our knowledge, is used in the context of the IST for the first time in this work. The combination of the first two ideas allows us to rigorously obtain a fundamental set of analytic eigenfunctions. The third idea allows us to establish the symmetries of the eigenfunctions and scattering data. The results are used to characterize the discrete spectrum and to obtain exact soliton solutions, which describe generalizations of the so-called dark-bright solitons of the two-component NLS equation.
Davidson, R.; Fried, S.
1959-10-27
A method is described of preparing uranium hexafluoride without the use of fluorine gas by reacting uranium tetrafluoride with oxygen gas under rigorously anhydrous conditions at 600 to 1300 deg K within a pre-fluorinated nickel vessel.
Development of advanced methods for analysis of experimental data in diffusion
NASA Astrophysics Data System (ADS)
Jaques, Alonso V.
There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimations for error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using the Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to represent these processes analytically. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily a second-order differential equation. That is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2.
For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed with electrochemical permeation methods using the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis when compared to traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without addressing the scatter in the data. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these issues, and to evaluate their influence on the final resulting diffusivity values.
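To make the Boltzmann-Matano step concrete, a minimal numerical sketch is shown below: it locates the Matano plane and evaluates D(C) from a single measured profile. Variable names and the simple finite-difference gradient are illustrative; in practice the profile must first be smoothed or regularized, which is precisely the issue the regularization approach above addresses.

```python
import numpy as np

def boltzmann_matano(x, C, t):
    """Concentration-dependent diffusivity D(C) from one profile C(x) at time t.
    Assumes C(x) is smooth and monotonic across the diffusion couple."""
    dCdx = np.gradient(C, x)
    xM = np.trapz(x * dCdx, x) / (C[-1] - C[0])        # Matano plane: int (x - xM) dC = 0
    D = np.full_like(C, np.nan, dtype=float)
    for k in range(1, len(x) - 1):
        # integral of (x - xM) dC from one end of the couple up to C[k]
        integral = np.trapz((x[:k + 1] - xM) * dCdx[:k + 1], x[:k + 1])
        D[k] = -integral / (2.0 * t * dCdx[k])          # D(C_k) = -(1/2t) (dx/dC) * integral
    return xM, D
```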
Assessing tiger population dynamics using photographic capture-recapture sampling.
Karanth, K Ullas; Nichols, James D; Kumar, N Samba; Hines, James E
2006-11-01
Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of gamma" = gamma' = 0.10 +/- 0.069 (values are estimated mean +/- SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 +/- 0.051, and the estimated probability that a newly caught animal was a transient was tau = 0.18 +/- 0.11. During the period when the sampled area was of constant size, the estimated population size N(t) varied from 17 +/- 1.7 to 31 +/- 2.1 tigers, with a geometric mean rate of annual population change estimated as lambda = 1.03 +/- 0.020, representing a 3% annual increase. The estimated recruitment of new animals, B(t), varied from 0 +/- 3.0 to 14 +/- 2.9 tigers. Population density estimates, D, ranged from 7.33 +/- 0.8 tigers/100 km2 to 21.73 +/- 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
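The study relies on likelihood-based robust-design models, but the basic logic of capture-recapture estimation can be illustrated with the much simpler closed-population Chapman estimator and a geometric-mean rate of change, sketched below with hypothetical numbers.

```python
import numpy as np

def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator for a closed population:
    n1 animals marked in session 1, n2 caught in session 2, m2 of them marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical annual abundance estimates; geometric mean annual rate of change
N_t = np.array([17.0, 19.0, 22.0, 25.0, 28.0, 31.0])
lam = (N_t[-1] / N_t[0]) ** (1.0 / (len(N_t) - 1))
print(chapman_estimate(30, 25, 12), lam)
```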
Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2007-01-01
Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.
Augmented assessment as a means to augmented reality.
Bergeron, Bryan
2006-01-01
Rigorous scientific assessment of educational technologies typically lags behind the availability of the technologies by years because of the lack of validated instruments and benchmarks. Even when the appropriate assessment instruments are available, they may not be applied because of time and monetary constraints. Work in augmented reality, instrumented mannequins, serious gaming, and similar promising educational technologies that have not undergone timely, rigorous evaluation highlights the need for assessment methodologies that address the limitations of traditional approaches. The most promising augmented assessment solutions incorporate elements of rapid prototyping used in the software industry, simulation-based assessment techniques modeled after methods used in bioinformatics, and object-oriented analysis methods borrowed from object-oriented programming.
Arriaza, Pablo; Nedjat-Haiem, Frances; Lee, Hee Yun; Martin, Shadi S
2015-01-01
The purpose of this article is to synthesize and chronicle the authors' experiences as four bilingual and bicultural researchers, each experienced in conducting cross-cultural/cross-language qualitative research. Through narrative descriptions of experiences with Latinos, Iranians, and Hmong refugees, the authors discuss their rewards, challenges, and methods of enhancing rigor, trustworthiness, and transparency when conducting cross-cultural/cross-language research. The authors discuss and explore how to effectively manage cross-cultural qualitative data, how to effectively use interpreters and translators, how to identify best methods of transcribing data, and the role of creating strong community relationships. The authors provide guidelines for health care professionals to consider when engaging in cross-cultural qualitative research.
A rigorous method was developed to maximize the extraction efficacy for perfluorocarboxylic acids (PFCAs), perfluorosulfonates (PFSAs), fluorotelomer alcohols (FTOHs), fluorotelomer acrylates (FTAc), perfluorosulfonamides (FOSAs), and perfluorosulfonamidoethanols (FOSEs) from was...
Kim, Dong-Kyu; Park, Won-Woong; Lee, Ho Won; Kang, Seong-Hoon; Im, Yong-Taek
2013-12-01
In this study, a rigorous methodology for quantifying recrystallization kinetics by electron backscatter diffraction is proposed in order to reduce errors associated with the operator's skill. An adaptive criterion to determine adjustable grain orientation spread depending on the recrystallization stage is proposed to better identify the recrystallized grains in the partially recrystallized microstructure. The proposed method was applied in characterizing the microstructure evolution during annealing of interstitial-free steel cold rolled to low and high true strain levels of 0.7 and 1.6, respectively. The recrystallization kinetics determined by the proposed method was found to be consistent with the standard method of Vickers microhardness. The application of the proposed method to the overall recrystallization stages showed that it can be used for the rigorous characterization of progressive microstructure evolution, especially for the severely deformed material. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
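A minimal sketch of the underlying classification rule, grains counted as recrystallized when their grain orientation spread (GOS) falls below a threshold, is given below with hypothetical values; the paper's contribution is to make this threshold adaptive to the recrystallization stage rather than fixed.

```python
import numpy as np

def recrystallized_fraction(gos_deg, areas, threshold_deg):
    """Area-weighted recrystallized fraction from per-grain orientation spread (GOS)."""
    gos_deg, areas = np.asarray(gos_deg), np.asarray(areas, dtype=float)
    recrystallized = gos_deg < threshold_deg          # GOS criterion
    return areas[recrystallized].sum() / areas.sum()

# Hypothetical per-grain GOS values (degrees) and grain areas
print(recrystallized_fraction([0.4, 0.6, 2.5, 3.1, 0.8, 4.2],
                              [10, 12, 30, 25, 8, 40], threshold_deg=1.0))
```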
Modeling, Modal Properties, and Mesh Stiffness Variation Instabilities of Planetary Gears
NASA Technical Reports Server (NTRS)
Parker, Robert G.; Lin, Jian; Krantz, Timothy L. (Technical Monitor)
2001-01-01
Planetary gear noise and vibration are primary concerns in their applications in helicopters, automobiles, aircraft engines, heavy machinery and marine vehicles. Dynamic analysis is essential to the noise and vibration reduction. This work analytically investigates some critical issues and advances the understanding of planetary gear dynamics. A lumped-parameter model is built for the dynamic analysis of general planetary gears. The unique properties of the natural frequency spectra and vibration modes are rigorously characterized. These special structures apply for general planetary gears with cyclic symmetry and, in the practically important case, systems with diametrically opposed planets. The special vibration properties are useful for subsequent research. Taking advantage of the derived modal properties, the natural frequency and vibration mode sensitivities to design parameters are investigated. The key parameters include mesh stiffnesses, support/bearing stiffnesses, component masses, moments of inertia, and operating speed. The eigen-sensitivities are expressed in simple, closed-form formulae associated with modal strain and kinetic energies. As disorders (e.g., mesh stiffness variation, manufacturing and assembly errors) disturb the cyclic symmetry of planetary gears, their effects on the free vibration properties are quantitatively examined. Well-defined veering rules are derived to identify dramatic changes of natural frequencies and vibration modes under parameter variations. The knowledge of free vibration properties, eigen-sensitivities, and veering rules provides important information to effectively tune the natural frequencies and optimize structural design to minimize noise and vibration. Parametric instabilities excited by mesh stiffness variations are analytically studied for multi-mesh gear systems. The discrepancies of previous studies on parametric instability of two-stage gear chains are clarified using perturbation and numerical methods. The operating conditions causing parametric instabilities are expressed in closed-form suitable for design guidance. Using the well-defined modal properties of planetary gears, the effects of mesh parameters on parametric instability are analytically identified. Simple formulae are obtained to suppress particular instabilities by adjusting contact ratios and mesh phasing.
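The natural frequencies and vibration modes referred to above come from the eigenvalue problem of an undamped lumped-parameter model, M x'' + K x = 0. A generic sketch of that computation follows; the small matrices are placeholders, not the planetary-gear mass and stiffness matrices derived in the work.

```python
import numpy as np

# Placeholder mass and stiffness matrices for an undamped lumped-parameter model
M = np.diag([1.0, 1.0, 2.0])
K = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  4.0]])

# Generalized eigenvalue problem K v = w^2 M v gives natural frequencies and mode shapes
eigvals, modes = np.linalg.eig(np.linalg.solve(M, K))
natural_freqs_hz = np.sqrt(np.sort(eigvals.real)) / (2.0 * np.pi)
print(natural_freqs_hz)
```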
A Proof of Friedman's Ergosphere Instability for Scalar Waves
NASA Astrophysics Data System (ADS)
Moschidis, Georgios
2018-03-01
Let (M^{3+1}, g) be a real analytic, stationary and asymptotically flat spacetime with a non-empty ergoregion E and no future event horizon H^+. In Friedman (Commun Math Phys 63(3):243-255, 1978), Friedman observed that, on such spacetimes, there exist solutions φ to the wave equation □_g φ = 0 such that their local energy does not decay to 0 as time increases. In addition, Friedman provided a heuristic argument that the energy of such solutions actually grows to +∞. In this paper, we provide a rigorous proof of Friedman's instability. Our setting is, in fact, more general. We consider smooth spacetimes (M^{d+1}, g), for any d ≥ 2, not necessarily globally real analytic. We impose only a unique continuation condition for the wave equation across the boundary ∂E of E on a small neighborhood of a point p ∈ ∂E. This condition always holds if (M, g) is analytic in that neighborhood of p, but it can also be inferred in the case when (M, g) possesses a second Killing field Φ such that the span of Φ and the stationary Killing field T is timelike on ∂E. We also allow the spacetimes (M, g) under consideration to possess a (possibly empty) future event horizon H^+ such that, however, H^+ ∩ E = ∅ (excluding, thus, the Kerr exterior family). As an application of our theorem, we infer an instability result for the acoustical wave equation on the hydrodynamic vortex, a phenomenon first investigated numerically by Oliveira et al. (Phys Rev D 89(12):124008, 2014). Furthermore, as a side benefit of our proof, we provide a derivation, based entirely on the vector field method, of a Carleman-type estimate on the exterior of the ergoregion for a general class of stationary and asymptotically flat spacetimes. Applications of this estimate include a Morawetz-type bound for solutions φ of □_g φ = 0 with frequency support bounded away from ω = 0 and ω = ±∞.
Re-establishment of rigor mortis: evidence for a considerably longer post-mortem time span.
Crostack, Chiara; Sehner, Susanne; Raupach, Tobias; Anders, Sven
2017-07-01
Re-establishment of rigor mortis following mechanical loosening is used as part of the complex method for the forensic estimation of the time since death in human bodies and has formerly been reported to occur up to 8-12 h post-mortem (hpm). We recently described our observation of the phenomenon in up to 19 hpm in cases with in-hospital death. Due to the case selection (preceding illness, immobilisation), transfer of these results to forensic cases might be limited. We therefore examined 67 out-of-hospital cases of sudden death with known time points of death. Re-establishment of rigor mortis was positive in 52.2% of cases and was observed up to 20 hpm. In contrast to the current doctrine that a recurrence of rigor mortis is always of a lesser degree than its first manifestation in a given patient, muscular rigidity at re-establishment equalled or even exceeded the degree observed before dissolving in 21 joints. Furthermore, this is the first study to describe that the phenomenon appears to be independent of body or ambient temperature.
Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.
Yuan, Lijun; Lu, Ya Yan
2013-05-20
Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
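The multivalued behaviour mentioned at the end can already be seen in the classic scalar model of dispersive Kerr bistability, where the input-output relation is a cubic with one or three positive roots. The sketch below solves that toy relation; it is not the waveguide-cavity formulation or the iterative boundary-condition method developed in the paper.

```python
import numpy as np

def cavity_outputs(I_in, theta):
    """Roots of the toy bistability relation I_in = I_out * (1 + (theta - I_out)**2),
    i.e. the cubic I_out**3 - 2*theta*I_out**2 + (1 + theta**2)*I_out - I_in = 0."""
    roots = np.roots([1.0, -2.0 * theta, 1.0 + theta ** 2, -I_in])
    return np.sort(roots[np.isreal(roots)].real)

for I_in in (1.0, 1.9, 2.5):      # 1.9 lies inside the bistable window for theta = 2
    print(I_in, cavity_outputs(I_in, theta=2.0))
```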
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD)-based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the analytical quality by design (AQbD) approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement analytical quality by design (AQbD) in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and pharmaceutical analytical technology (PAT).
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.
2018-01-01
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles, and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms that employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides a capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
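The idea of building phantoms as additive combinations of analytical objects can be sketched in a few lines of plain NumPy, as below. This is only an illustration of the concept; it does not use TomoPhantom's actual API, for which the package documentation should be consulted.

```python
import numpy as np

def gaussian(X, Y, x0, y0, sx, sy, amp):
    return amp * np.exp(-((X - x0) ** 2 / (2 * sx ** 2) + (Y - y0) ** 2 / (2 * sy ** 2)))

def ellipse(X, Y, x0, y0, a, b, amp):
    return amp * ((((X - x0) / a) ** 2 + ((Y - y0) / b) ** 2) <= 1.0)

n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
# Additive combination of simple analytical objects forming a 2-D test phantom
phantom = (ellipse(X, Y, 0.0, 0.0, 0.8, 0.6, 1.0)
           + gaussian(X, Y, -0.3, 0.1, 0.10, 0.15, 0.5)
           + gaussian(X, Y, 0.35, -0.2, 0.20, 0.10, -0.3))
```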
A National Residue Control Plan from the analytical perspective--the Brazilian case.
Mauricio, Angelo de Q; Lins, Erick S; Alvarenga, Marcelo B
2009-04-01
Food safety is a strategic topic entailing not only national public health aspects but also competitiveness in international trade. An important component of any food safety program is the control and monitoring of residues posed by certain substances involved in food production. In turn, a National Residue Control Plan (NRCP) relies on an appropriate laboratory network, not only to generate analytical results, but also more broadly to verify and co-validate the controls built along the food production chain. Therefore laboratories operating under an NRCP should work in close cooperation with inspection bodies, fostering the critical alignment of the whole system with the principles of risk analysis. Beyond producing technically valid results, these laboratories should arguably be able to assist in the prediction and establishment of targets for official control. In pursuit of analytical excellence, the Brazilian government has developed a strategic plan for Official Agricultural Laboratories. Inserted in a national agenda for agricultural risk analysis, the plan has succeeded in raising the laboratory budget by approximately 200%, it has started a rigorous program for personnel capacity-building, it has initiated strategic cooperation with international reference centres, and finally, it has completely renewed instrumental resources and rapidly triggered a program aimed at full laboratory compliance with ISO/IEC 17025 requirements.
NASA Astrophysics Data System (ADS)
Cocco, Alex P.; Nakajo, Arata; Chiu, Wilson K. S.
2017-12-01
We present a fully analytical, heuristic model - the "Analytical Transport Network Model" - for steady-state, diffusive, potential flow through a 3-D network. Employing a combination of graph theory, linear algebra, and geometry, the model explicitly relates a microstructural network's topology and the morphology of its channels to an effective material transport coefficient (a general term meant to encompass, e.g., conductivity or diffusion coefficient). The model's transport coefficient predictions agree well with those from electrochemical fin (ECF) theory and finite element analysis (FEA), but are computed 0.5-1.5 and 5-6 orders of magnitude faster, respectively. In addition, the theory explicitly relates a number of morphological and topological parameters directly to the transport coefficient, whereby the distributions that characterize the structure are readily available for further analysis. Furthermore, ATN's explicit development provides insight into the nature of the tortuosity factor and offers the potential to apply theory from network science and to consider the optimization of a network's effective resistance in a mathematically rigorous manner. The ATN model's speed and relative ease-of-use offer the potential to aid in accelerating the design (with respect to transport), and thus reducing the cost, of energy materials.
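The topology-to-transport mapping described above can be illustrated, in a much reduced form, by the standard graph-Laplacian calculation of the effective conductance between two nodes of a network of conducting channels. The sketch below is this generic calculation, not the ATN model's explicit analytical formulation.

```python
import numpy as np

def effective_conductance(n_nodes, edges, source, sink):
    """Effective conductance between source and sink in a network of channels.
    edges: iterable of (i, j, g) with g the conductance of the channel between i and j."""
    L = np.zeros((n_nodes, n_nodes))                 # weighted graph Laplacian
    for i, j, g in edges:
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    keep = [k for k in range(n_nodes) if k != sink]  # ground the sink node
    b = np.zeros(n_nodes); b[source] = 1.0           # inject unit current at the source
    v = np.zeros(n_nodes)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    return 1.0 / (v[source] - v[sink])               # conductance = current / voltage drop

# Two parallel two-channel paths between nodes 0 and 3 (illustrative network)
print(effective_conductance(4, [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 2.0), (2, 3, 2.0)], 0, 3))
```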
Miller, Brian W.; Morisette, Jeffrey T.
2014-01-01
Developing resource management strategies in the face of climate change is complicated by the considerable uncertainty associated with projections of climate and its impacts and by the complex interactions between social and ecological variables. The broad, interconnected nature of this challenge has resulted in calls for analytical frameworks that integrate research tools and can support natural resource management decision making in the face of uncertainty and complex interactions. We respond to this call by first reviewing three methods that have proven useful for climate change research, but whose application and development have been largely isolated: species distribution modeling, scenario planning, and simulation modeling. Species distribution models provide data-driven estimates of the future distributions of species of interest, but they face several limitations and their output alone is not sufficient to guide complex decisions for how best to manage resources given social and economic considerations along with dynamic and uncertain future conditions. Researchers and managers are increasingly exploring potential futures of social-ecological systems through scenario planning, but this process often lacks quantitative response modeling and validation procedures. Simulation models are well placed to provide added rigor to scenario planning because of their ability to reproduce complex system dynamics, but the scenarios and management options explored in simulations are often not developed by stakeholders, and there is not a clear consensus on how to include climate model outputs. We see these strengths and weaknesses as complementarities and offer an analytical framework for integrating these three tools. We then describe the ways in which this framework can help shift climate change research from useful to usable.
The right-hand side of the Jacobi identity: to be naught or not to be?
NASA Astrophysics Data System (ADS)
Kiselev, Arthemy V.
2016-01-01
The geometric approach to iterated variations of local functionals - e.g., of the (master-)action functional - resulted in an extension of the deformation quantisation technique to the set-up of Poisson models of field theory. It also allowed a rigorous proof of the main inter-relations between the Batalin-Vilkovisky (BV) Laplacian Δ and the variational Schouten bracket [,]. The ad hoc use of these relations had been a known analytic difficulty in the BV-formalism for the quantisation of gauge systems; the proof, now achieved, does not actually require the assumption of graded-commutativity. As explained in our previous work, geometry's self-regularisation is rendered by Gel'fand's calculus of singular linear integral operators supported on the diagonal. We now illustrate that analytic technique by inspecting the validity mechanism for the graded Jacobi identity which the variational Schouten bracket does satisfy (whence Δ² = 0, i.e., the BV-Laplacian is a differential acting in the algebra of local functionals). By using one tuple of three variational multi-vectors twice, we contrast the new logic of iterated variations - when the right-hand side of Jacobi's identity vanishes altogether - with the old method: interlacing its steps and stops, it could produce some non-zero representative of the trivial class in the top-degree horizontal cohomology. But we then show at once by an elementary counterexample why, in the frames of the old approach that did not rely on Gel'fand's calculus, the BV-Laplacian failed to be a graded derivation of the variational Schouten bracket.
Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.
2016-01-01
Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
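A minimal sketch of the calibration step described above, choosing a log2-ratio threshold from ROC analysis of per-probe calls against the higher-resolution reference, is shown below with hypothetical labels and values; it uses scikit-learn's roc_curve rather than the CNV-ROC implementation itself.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical per-probe data: 1 = probe inside a confirmed CNV, 0 = outside
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
log2_ratio = np.array([0.05, 0.12, 0.48, 0.61, 0.09, 0.55, 0.20, 0.42, 0.70, 0.15])

fpr, tpr, thresholds = roc_curve(y_true, log2_ratio)
best = np.argmax(tpr - fpr)          # Youden's J as one common calibration criterion
print("calibrated log2-ratio threshold:", thresholds[best])
```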
Student Preferences for Instructional Methods in an Accounting Curriculum
ERIC Educational Resources Information Center
Abeysekera, Indra
2015-01-01
Student preferences among instructional methods are largely unexplored across the accounting curriculum. The algorithmic rigor of courses and the societal culture can influence these preferences. This study explored students' preferences of instructional methods for learning in six courses of the accounting curriculum that differ in algorithmic…
Walach, Harald; Falkenberg, Torkel; Fønnebø, Vinjar; Lewith, George; Jonas, Wayne B
2006-01-01
Background The reasoning behind evaluating medical interventions is that a hierarchy of methods exists which successively produces improved and therefore more rigorous evidence-based medicine upon which to make clinical decisions. At the foundation of this hierarchy are case studies, retrospective and prospective case series, followed by cohort studies with historical and concomitant non-randomized controls. Open-label randomized controlled studies (RCTs), and finally blinded, placebo-controlled RCTs, which offer the most internal validity, are considered the most reliable evidence. Rigorous RCTs remove bias. Evidence from RCTs forms the basis of meta-analyses and systematic reviews. This hierarchy, founded on a pharmacological model of therapy, is generalized to other interventions which may be complex and non-pharmacological (healing, acupuncture and surgery). Discussion The hierarchical model is valid for limited questions of efficacy, for instance for regulatory purposes and newly devised products and pharmacological preparations. It is inadequate for the evaluation of complex interventions such as physiotherapy, surgery and complementary and alternative medicine (CAM). This has to do with the essential tension between internal validity (rigor and the removal of bias) and external validity (generalizability). Summary Instead of an Evidence Hierarchy, we propose a Circular Model. This would imply a multiplicity of methods, using different designs, counterbalancing their individual strengths and weaknesses to arrive at pragmatic but equally rigorous evidence which would provide significant assistance in clinical and health systems innovation. Such evidence would better inform national health care technology assessment agencies and promote evidence-based health reform. PMID:16796762
NASA Astrophysics Data System (ADS)
Šprlák, M.; Han, S.-C.; Featherstone, W. E.
2017-12-01
Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. We also analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling using currently available computational resources up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
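For reference, the external gravitational potential that such forward modelling ultimately expresses in spherical harmonic coefficients has the standard form below (generic geodetic notation; the paper's normalization and symbols may differ):

```latex
V(r,\varphi,\lambda) \;=\; \frac{GM}{r}\sum_{n=0}^{n_{\max}}\left(\frac{R}{r}\right)^{\!n}
\sum_{m=0}^{n}\bigl(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\bigr)\,\bar{P}_{nm}(\sin\varphi)
```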
Gordon, M. J. C.
2015-01-01
Robin Milner's paper, ‘The use of machines to assist in rigorous proof’, introduces methods for automating mathematical reasoning that are a milestone in the development of computer-assisted theorem proving. His ideas, particularly his theory of tactics, revolutionized the architecture of proof assistants. His methodology for automating rigorous proof soundly, particularly his theory of type polymorphism in programming, led to major contributions to the theory and design of programming languages. His citation for the 1991 ACM A.M. Turing award, the most prestigious award in computer science, credits him with, among other achievements, ‘probably the first theoretically based yet practical tool for machine assisted proof construction’. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society. PMID:25750147
Health Promotion in Small Business
McCoy, Kira; Stinson, Kaylan; Scott, Kenneth; Tenney, Liliana; Newman, Lee S.
2015-01-01
Objective To assess the evidence regarding the adoption and efficacy of worksite health promotion programs (WHPPs) in small businesses. Methods Peer-reviewed research articles were identified from a database search. Included articles were published before July 2013, described a study that used an experimental or quasiexperimental design and either assessed adoption of WHPPs or conducted interventions in businesses with fewer than 500 employees. A review team scored the study’s rigor using the WHO-adapted GRADEprofiler “quality of evidence” criteria. Results Of the 84 retrieved articles, 19 met study inclusion criteria. Of these, only two met criteria for high rigor. Conclusions Fewer small businesses adopt WHPPs compared with large businesses. Two high-rigor studies found that employees were healthier postintervention. Higher quality research is needed to better understand why small businesses rarely adopt wellness programs and to demonstrate the value of such programs. PMID:24905421
Why Open-Ended Survey Questions Are Unlikely to Support Rigorous Qualitative Insights.
LaDonna, Kori A; Taylor, Taryn; Lingard, Lorelei
2018-03-01
Health professions education researchers are increasingly relying on a combination of quantitative and qualitative research methods to explore complex questions in the field. This important and necessary development, however, creates new methodological challenges that can affect both the rigor of the research process and the quality of the findings. One example is "qualitatively" analyzing free-text responses to survey or assessment instrument questions. In this Invited Commentary, the authors explain why analysis of such responses rarely meets the bar for rigorous qualitative research. While the authors do not discount the potential for free-text responses to enhance quantitative findings or to inspire new research questions, they caution that these responses rarely produce data rich enough to generate robust, stand-alone insights. The authors consider exemplars from health professions education research and propose strategies for treating free-text responses appropriately.
Acosta, Joie D; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S
2016-01-01
Restorative Practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this paper describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI) in 14 middle schools in Maine to assess whether RPI impacts both positive developmental outcomes and problem behaviors and whether the effects persist during the transition from middle to high school. The two-year RPI intervention began in the 2014-2015 school year. The study's rationale and theoretical concerns are discussed along with methodological concerns including teacher professional development. The theoretical rationale and description of the methods from this study may be useful to others conducting rigorous research and evaluation in this area.
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
SAM Radiochemical Methods Query
Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-23
... on rigorous, scientifically based research methods to assess the effectiveness of a particular... and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw on... hypotheses and justify the general conclusions drawn; (iii) Relies on measurements or observational methods...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
... on rigorous scientifically based research methods to assess the effectiveness of a particular... activities and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw... or observational methods that provide reliable and valid data across evaluators and observers, across...
Analysis of Perfluorinated Chemicals in Sludge: Method Development and Initial Results
A fast, rigorous method was developed to maximize the extraction efficacy for ten perfluorocarboxylic acids and perfluorooctanesulfonate from wastewater-treatment sludge and to quantitate using liquid chromatography, tandem-mass spectrometry (LC/MS/MS). First, organic solvents w...
Physical Analytics: An emerging field with real-world applications and impact
NASA Astrophysics Data System (ADS)
Hamann, Hendrik
2015-03-01
In the past, most information on the internet has originated from humans or computers. With the emergence of cyber-physical systems, however, vast amounts of data are now being created by sensors on devices, machines, etc., digitizing the physical world. While cyber-physical systems are the subject of active research around the world, the vast amount of actual data generated from the physical world has so far attracted little attention from the engineering and physics community. In this presentation we use examples to highlight the opportunities that this new subject of "Physical Analytics" offers for highly interdisciplinary research (including physics, engineering, and computer science), which aims to understand real-world physical systems by leveraging cyber-physical technologies. More specifically, the convergence of the physical world with the digital domain allows physical principles to be applied to everyday problems in a much more effective and informed way than was possible in the past. Much as traditional applied physics and engineering have made enormous advances and changed our lives by making detailed measurements to understand the physics of an engineered device, we can now apply the same rigor and principles to understand large-scale physical systems. In the talk we first present a set of "configurable" enabling technologies for Physical Analytics, including ultralow-power sensing and communication technologies, physical big-data management technologies, numerical modeling for physical systems, machine-learning-based physical model blending, and physical-analytics-based automation and control. We then discuss in detail several concrete applications of Physical Analytics, ranging from energy management in buildings and data centers, environmental sensing and controls, and precision agriculture to renewable energy forecasting and management.
Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron
NASA Astrophysics Data System (ADS)
Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.
2010-07-01
Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not granted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
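The following sketch illustrates, with hypothetical numbers rather than the paper's calculated values, how an activation energy deduced from the energy difference between end states can diverge from a rigorously computed saddle-point barrier, and how strongly that divergence propagates into Arrhenius jump rates; the heuristic offset e0 and all energies below are assumptions for illustration only.

```python
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_rate(e_act, temperature, prefactor=1e13):
    """Jump rate (1/s) for an activation energy e_act (eV) at temperature T (K)."""
    return prefactor * np.exp(-e_act / (KB * temperature))

# Hypothetical energies (eV) for one migration event; not the paper's DFT values.
e_initial, e_final, e_saddle = 0.00, -0.10, 0.65

# Heuristic often used in Monte Carlo models: barrier deduced from the energy
# difference between local equilibrium states plus a constant offset e0.
e0 = 0.50
e_heuristic = e0 + 0.5 * (e_final - e_initial)

# Rigorous barrier: saddle-point energy minus initial-state energy.
e_rigorous = e_saddle - e_initial

for label, ea in [("heuristic", e_heuristic), ("rigorous", e_rigorous)]:
    rate = arrhenius_rate(ea, 600.0)
    print(f"{label}: Ea = {ea:.2f} eV, rate at 600 K = {rate:.3e} 1/s")
```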
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
Power law expansion of the early universe for a V(a) = ka^n potential
NASA Astrophysics Data System (ADS)
Freitas, Augusto S.
2018-01-01
In a recent paper, He, Gao and Cai [Phys. Rev. D 89, 083510 (2014)] found a rigorous proof, based on analytical solutions of the Wheeler-DeWitt equation (WDWE), of the spontaneous creation of the universe from nothing. The solutions were obtained from a classical potential V = ka^2, where a is the scale factor. In this paper, we present a solution of the WDWE, complementary to that of He, Gao and Cai, for the potential V = ka^n. We find an exponential expansion of the true vacuum bubble for all scenarios and, in all scenarios, a power-law behavior of the scale factor, a result in agreement with other studies.
NASA Technical Reports Server (NTRS)
Payne, M. H.
1973-01-01
The bounds for the normalized associated Legendre functions P_nm were studied to provide a rational basis for the truncation of the geopotential series in spherical harmonics in various orbital analyses. The conjecture is made that the largest maximum of the normalized associated Legendre function lies in an interval expressed in terms of the greatest integer function. A procedure is developed for verifying this conjecture. An on-line algebraic manipulator, IAM, is used to implement the procedure, and the verification is carried out for all n less than or equal to 2m, for m = 1 through 6. A rigorous proof of the conjecture is not available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, Mark R.; Hayden, Nancy Kay; Backus, George A.
Most national policy decisions are complex, with a variety of stakeholders, disparate interests, and the potential for unintended consequences. While a number of analytical tools exist to help decision makers sort through the mountains of data and myriad options, decision support teams are increasingly turning to complexity science for improved analysis and better insight into the potential impact of policy decisions. While complexity science has great potential, it has only proven useful in limited cases and when properly applied. In advance of more widespread use, a national-level effort to refine complexity science and more rigorously establish its technical underpinnings is recommended.
VII. The history of physical activity and academic performance research: informing the future.
Castelli, Darla M; Centeio, Erin E; Hwang, Jungyun; Barcelona, Jeanne M; Glowacki, Elizabeth M; Calvert, Hannah G; Nicksic, Hildi M
2014-12-01
The study of physical activity, physical fitness, and academic performance research is reviewed from a historical perspective, providing an overview of existing publications focused on children and adolescents. Using rigorous inclusion criteria, the studies were quantified and qualified using both meta-analytic and descriptive evaluations, first by time period and then as an overall summary, with particular focus on secular trends and future directions. This review is timely because the body of literature is growing exponentially, resulting in the emergence of new terminology, methodologies, and identification of mediating and moderating factors. Implications and recommendations for future research are summarized. © 2014 The Society for Research in Child Development, Inc.
An innovative portfolio of research training programs for medical students.
Zier, Karen; Wyatt, Christina; Muller, David
2012-12-01
Medical student education continues to evolve, with an increasing emphasis on evidence-based decision making in clinical settings. Many schools are introducing scholarly programs to their curriculum in order to foster the development of critical thinking and analytic skills, encourage self-directed learning, and develop more individualized learning experiences. In addition, participation in rigorous scholarly projects teaches students that clinical care and research should inform each other, with the goal of providing more benefit to patients and society. Physician-scientists, and physicians who have a better appreciation of science, have the potential to be leaders in the field who will deliver outstanding clinical care, contribute new knowledge, and educate their patients.
Image synthesis for SAR system, calibration and processor design
NASA Technical Reports Server (NTRS)
Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.
1978-01-01
The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.
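One of the geometric quantities listed in the abstract above, the local angle of incidence, can be computed from terrain elevation data and the radar look direction as in the hedged sketch below; this is a generic illustration of that single ingredient, not an implementation of the Point Scattering Method, and the grid, slope, and look angle are invented for the example.

```python
import numpy as np

def facet_normal(dem, dx, dy):
    """Unit upward surface normals of a DEM (elevation grid, metres) with grid spacing dx, dy."""
    dz_dy, dz_dx = np.gradient(dem, dy, dx)           # spacing order follows array axes (y, x)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(dem)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def local_incidence_angle(dem, dx, dy, look_vector):
    """Angle (rad) between the facet normal and the direction back toward the radar."""
    to_radar = -np.asarray(look_vector, dtype=float)
    to_radar /= np.linalg.norm(to_radar)
    cos_theta = np.clip(facet_normal(dem, dx, dy) @ to_radar, -1.0, 1.0)
    return np.arccos(cos_theta)

# Example: plane with a 20% slope in x, observed at a 30-degree look angle from vertical
x = np.arange(0, 100, 10.0)
dem = np.tile(0.2 * x, (10, 1))
look = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
theta = np.degrees(local_incidence_angle(dem, 10.0, 10.0, look))
print(theta[5, 5])   # ~18.7 deg: terrain tilted toward the radar reduces the incidence angle
```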
Beaton, Dorcas E; Clark, Jocalyn P
2009-05-01
Qualitative research is a useful approach to explore perplexing or complicated clinical situations. Since 1996, at least fifteen qualitative studies in the area of total knee replacement alone were found. Qualitative studies overcome the limits of quantitative work because they can explicate deeper meaning and complexity associated with questions such as why patients decline joint replacement surgery, why they do not adhere to pain medication and exercise regimens, how they manage in the postoperative period, and why providers do not always provide evidence-based care. In this paper, we review the role of qualitative methods in orthopaedic research, using knee osteoarthritis as an illustrative example. Qualitative research questions tend to be inductive, and the stance of the investigator is relevant and explicitly acknowledged. Qualitative methodologies include grounded theory, phenomenology, and ethnography and involve gathering opinions and text from individuals or focus groups. The methods are rigorous and take training and time to apply. Analysis of the textual data typically proceeds with the identification, coding, and categorization of patterns in the data for the purpose of generating concepts from within the data. With use of analytic techniques, researchers strive to explain the findings; questions are asked to tease out different levels of meaning, identify new concepts and themes, and permit a deeper interpretation and understanding. Orthopaedic practitioners should consider the use of qualitative research as a tool for exploring the meaning and complexities behind some of the perplexing phenomena that they observe in research findings and clinical practice.
Recommendations for the Design and Analysis of Treatment Trials for Alcohol Use Disorders
Witkiewitz, Katie; Finney, John W.; Harris, Alex H.S; Kivlahan, Daniel R.; Kranzler, Henry R.
2015-01-01
Background Over the past 60 years the view that “alcoholism” is a disease for which the only acceptable goal of treatment is abstinence has given way to the recognition that alcohol use disorders (AUDs) occur on a continuum of severity, for which a variety of treatment options are appropriate. However, because the available treatments for AUDs are not effective for everyone, more research is needed to develop novel and more efficacious treatments to address the range of AUD severity in diverse populations. Here we offer recommendations for the design and analysis of alcohol treatment trials, with a specific focus on the careful conduct of randomized clinical trials of medications and non-pharmacological interventions for AUDs. Methods Narrative review of the quality of published clinical trials and recommendations for the optimal design and analysis of treatment trials for AUDs. Results Despite considerable improvements in the design of alcohol clinical trials over the past two decades, many studies of AUD treatments have used faulty design features and statistical methods that are known to produce biased estimates of treatment efficacy. Conclusions The published statistical and methodological literatures provide clear guidance on methods to improve clinical trial design and analysis. Consistent use of state-of-the-art design features and analytic approaches will enhance the internal and external validity of treatment trials for AUDs across the spectrum of severity. The ultimate result of this attention to methodological rigor is that better treatment options will be identified for patients with an AUD. PMID:26250333
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...
Kline, Joshua C.
2014-01-01
Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. PMID:25210152
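For context, the sketch below implements the kind of cross-correlogram z-score test criticized in the abstract above, applied to two independent synthetic spike trains, so any flagged bins are false positives; it is not SigMax, and the lag window, bin width, and threshold are illustrative assumptions.

```python
import numpy as np

def cross_interval_histogram(ref_times, test_times, max_lag=0.1, bin_width=0.001):
    """Histogram of firing-time differences (test - ref) within +/- max_lag seconds."""
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    diffs = []
    for t in ref_times:
        near = test_times[(test_times >= t - max_lag) & (test_times <= t + max_lag)]
        diffs.extend(near - t)
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

def zscore_peaks(counts, z_threshold=1.96):
    """Bins whose counts exceed mean + z*sd of the histogram (the criticized z-score test)."""
    mu, sd = counts.mean(), counts.std(ddof=1)
    z = (counts - mu) / sd if sd > 0 else np.zeros_like(counts, dtype=float)
    return np.where(z > z_threshold)[0], z

# Two independent spike trains: any bins flagged here are false positives.
rng = np.random.default_rng(1)
ref = np.cumsum(rng.exponential(0.1, 500))
test = np.cumsum(rng.exponential(0.1, 500))
counts, edges = cross_interval_histogram(ref, test)
flagged, z = zscore_peaks(counts)
print("bins flagged as 'synchronized':", len(flagged))
```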
Determination of ten perfluorinated compounds in bluegill sunfish (Lepomis macrochirus) fillets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delinsky, Amy D.; Strynar, Mark J.; Nakayama, Shoji F.
2009-11-15
A rigorous solid phase extraction/liquid chromatography/tandem mass spectrometry method for the measurement of 10 perfluorinated compounds (PFCs) in fish fillets is described and applied to fillets of bluegill sunfish (Lepomis macrochirus) collected from selected areas of Minnesota and North Carolina. The 4 PFC analytes routinely detected in bluegill fillets were perfluorooctane sulfonate (PFOS), perfluorodecanoic acid (C10), perfluoroundecanoic acid (C11), and perfluorododecanoic acid (C12). Measures of method accuracy and precision for these compounds showed that calculated concentrations of PFCs in spiked samples differed by less than 20% from their theoretical values and that the %RSD for repeated measurements was less than 20%. Minnesota samples were collected from areas of the Mississippi River near historical PFC sources, from the St. Croix River as a background site, and from Lake Calhoun, which has no documented PFC sources. PFOS was the most prevalent PFC found in the Minnesota samples, with median concentrations of 47.0-102 ng/g at locations along the Mississippi River, 2.08 ng/g in the St. Croix River, and 275 ng/g in Lake Calhoun. North Carolina samples were collected from two rivers with no known historical PFC sources. PFOS was the predominant analyte in fish taken from the Haw and Deep Rivers, with median concentrations of 30.3 and 62.2 ng/g, respectively. Concentrations of C10, C11, and C12 in NC samples were among the highest reported in the literature, with respective median values of 9.08, 23.9, and 6.60 ng/g in fish from the Haw River and 2.90, 9.15, and 3.46 ng/g in fish from the Deep River. These results suggest that PFC contamination in freshwater fish may not be limited to areas with known historical PFC inputs.
Pitkänen, Leena; Montoro Bustos, Antonio R; Murphy, Karen E; Winchester, Michael R; Striegel, André M
2017-08-18
The physicochemical characterization of nanoparticles (NPs) is of paramount importance for tailoring and optimizing the properties of these materials as well as for evaluating the environmental fate and impact of the NPs. Characterizing the size and chemical identity of disperse NP sample populations can be accomplished by coupling size-based separation methods to physical and chemical detection methods. Informed decisions regarding the NPs can only be made, however, if the separations themselves are quantitative, i.e., if all or most of the analyte elutes from the column within the course of the experiment. We undertake here the size-exclusion chromatographic characterization of Au NPs spanning a six-fold range in mean size. The main problem which has plagued the size-exclusion chromatography (SEC) analysis of Au NPs, namely lack of quantitation accountability due to generally poor NP recovery from the columns, is overcome by carefully matching eluent formulation with the appropriate stationary phase chemistry, and by the use of on-line inductively coupled plasma mass spectrometry (ICP-MS) detection. Here, for the first time, we demonstrate the quantitative analysis of Au NPs by SEC/ICP-MS, including the analysis of a ternary NP blend. The SEC separations are contrasted to HDC/ICP-MS (HDC: hydrodynamic chromatography) separations employing the same stationary phase chemistry. Additionally, analysis of Au NPs by HDC with on-line quasi-elastic light scattering (QELS) allowed for continuous determination of NP size across the chromatographic profiles, circumventing issues related to the shedding of fines from the SEC columns. The use of chemically homogeneous reference materials with well-defined size range allowed for better assessment of the accuracy and precision of the analyses, and for a more direct interpretation of results, than would be possible employing less rigorously characterized analytes. Published by Elsevier B.V.
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
Development of rigor mortis is not affected by muscle volume.
Kobayashi, M; Ikegaya, H; Takase, I; Hatanaka, K; Sakurada, K; Iwase, H
2001-04-01
There is a hypothesis suggesting that rigor mortis progresses more rapidly in small muscles than in large muscles. We measured rigor mortis as tension determined isometrically in rat musculus erector spinae that had been cut into muscle bundles of various volumes. The muscle volume did not influence either the progress or the resolution of rigor mortis, which contradicts the hypothesis. Differences in pre-rigor load on the muscles influenced the onset and resolution of rigor mortis in a few pairs of samples, but did not influence the time taken for rigor mortis to reach its full extent after death. Moreover, the progress of rigor mortis in this muscle was biphasic; this may reflect the early rigor of red muscle fibres and the late rigor of white muscle fibres.
Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I
2015-01-01
High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®
Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang
2014-08-01
We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie equations of motion; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a nine-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionalities of an ELC lens.
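A much-simplified, normal-incidence version of the Jones-matrix idea used in the abstract above is sketched below: the liquid crystal layer is sliced into thin uniaxial slabs whose Jones matrices are multiplied in sequence (the extended Jones matrix method additionally handles oblique incidence and interface effects); the twist profile, birefringence, thicknesses, and wavelength are illustrative assumptions.

```python
import numpy as np

def waveplate_jones(retardance, axis_angle):
    """Jones matrix of a uniaxial slab at normal incidence: retardance (rad), optic-axis angle (rad)."""
    c, s = np.cos(axis_angle), np.sin(axis_angle)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.array([[np.exp(-1j * retardance / 2), 0],
                         [0, np.exp(1j * retardance / 2)]])
    return rot @ retarder @ rot.T

def stack_transmission(twist_profile, delta_n, layer_thickness, wavelength, jones_in):
    """Propagate a Jones vector through stacked thin LC layers with given in-plane director angles."""
    retardance = 2 * np.pi * delta_n * layer_thickness / wavelength
    j = jones_in.astype(complex)
    for angle in twist_profile:
        j = waveplate_jones(retardance, angle) @ j
    return j

# Example: 90-degree twisted structure, 5 um total thickness, divided into 50 layers
angles = np.linspace(0, np.pi / 2, 50)
out = stack_transmission(angles, delta_n=0.1, layer_thickness=0.1e-6,
                         wavelength=550e-9, jones_in=np.array([1.0, 0.0]))
# Transmission through a crossed analyser (y-component intensity for x-polarized input)
print("crossed-polarizer transmission:", abs(out[1])**2)
```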
Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames
NASA Astrophysics Data System (ADS)
Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.
2017-12-01
Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by the combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori, combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next-generation celestial reference frames, which are currently determined by VLBI only.
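A toy example of the distinction drawn above, combining pre-reduced normal equations from two techniques at the normal-equation level versus weight-averaging their separately estimated parameters, is sketched below; the design matrices, weights, and parameter count are invented, and this is not DYNAMO's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def normal_equations(design, observations, weights):
    """Return N = A^T W A and b = A^T W y for one technique."""
    w = np.diag(weights)
    return design.T @ w @ design, design.T @ w @ observations

# Toy problem: two 'techniques' observe the same 3 parameters (think EOP corrections).
x_true = np.array([0.5, -0.2, 0.1])
A1, A2 = rng.normal(size=(40, 3)), rng.normal(size=(25, 3))
y1 = A1 @ x_true + rng.normal(scale=0.05, size=40)
y2 = A2 @ x_true + rng.normal(scale=0.10, size=25)

N1, b1 = normal_equations(A1, y1, np.full(40, 1 / 0.05**2))
N2, b2 = normal_equations(A2, y2, np.full(25, 1 / 0.10**2))

# Rigorous combination: stack the normal equations, then solve once.
x_combined = np.linalg.solve(N1 + N2, b1 + b2)

# Classical a posteriori combination: solve each technique separately, then
# average element-wise with weights from the formal variances (ignores correlations).
x1, x2 = np.linalg.solve(N1, b1), np.linalg.solve(N2, b2)
w1 = 1.0 / np.diag(np.linalg.inv(N1))
w2 = 1.0 / np.diag(np.linalg.inv(N2))
x_posteriori = (w1 * x1 + w2 * x2) / (w1 + w2)

print("combined at normal-equation level:", x_combined)
print("a posteriori weighted average:    ", x_posteriori)
```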
Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.
2015-01-01
In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is then possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures consist in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from Magnetic Resonance Elastography (MRE) data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases on the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416
Spencer-Bonilla, Gabriela; Singh Ospina, Naykky; Rodriguez-Gutierrez, Rene; Brito, Juan P; Iñiguez-Ariza, Nicole; Tamhane, Shrikant; Erwin, Patricia J; Murad, M Hassan; Montori, Victor M
2017-07-01
Systematic reviews provide clinicians and policymakers with estimates of diagnostic test accuracy and their usefulness in clinical practice. We identified all available systematic reviews of diagnosis in endocrinology, summarized the diagnostic accuracy of the tests included, and assessed the credibility and clinical usefulness of the methods and reporting. We searched Ovid MEDLINE, EMBASE, and Cochrane CENTRAL from inception to December 2015 for systematic reviews and meta-analyses reporting accuracy measures of diagnostic tests in endocrinology. Experienced reviewers independently screened for eligible studies and collected data. We summarized the results, methods, and reporting of the reviews. We performed subgroup analyses to categorize diagnostic tests as most useful based on their accuracy. We identified 84 systematic reviews; half of the tests included were classified as helpful when positive, and one-fourth as helpful when negative. Most authors adequately reported how studies were identified and selected and how their trustworthiness (risk of bias) was judged. Only one in three reviews, however, reported an overall judgment about trustworthiness, and one in five reported using adequate meta-analytic methods. One in four reported contacting authors for further information, and about half included only patients with diagnostic uncertainty. Up to half of the diagnostic endocrine tests for which a likelihood ratio was calculated or provided are likely to be helpful in practice when positive, as are one-quarter when negative. Most diagnostic systematic reviews in endocrinology lack methodological rigor and protection against bias, and they offer limited credibility. Substantial efforts therefore seem necessary to improve the quality of diagnostic systematic reviews in endocrinology.
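As a worked illustration of how diagnostic accuracy translates into the "helpful when positive/negative" classification discussed above, the sketch below converts sensitivity and specificity into likelihood ratios and post-test probabilities; the test characteristics, pre-test probability, and the rule-of-thumb thresholds in the final comment are illustrative assumptions, not criteria taken from the review.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test accuracy."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    """Convert a pre-test probability to a post-test probability via odds x LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical endocrine test: sensitivity 0.90, specificity 0.85, pre-test probability 0.20
lr_pos, lr_neg = likelihood_ratios(0.90, 0.85)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
print(f"post-test probability if positive: {post_test_probability(0.20, lr_pos):.2f}")
print(f"post-test probability if negative: {post_test_probability(0.20, lr_neg):.2f}")
# One common (illustrative) rule of thumb: LR+ >= 10 'helpful when positive', LR- <= 0.1 'helpful when negative'.
```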
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. To date, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
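One calibration strategy often considered for endogenous analytes, where no analyte-free matrix is available, is standard addition; the sketch below shows the arithmetic with invented data, purely as an illustration of why the calibration model has to be approached differently, and is not a procedure prescribed by the paper.

```python
import numpy as np

# Hedged sketch of standard-addition calibration for an endogenous analyte
# (illustrative spike levels and responses, not data from the study).
added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])          # spiked concentration (ng/mL)
signal = np.array([120.0, 178.0, 241.0, 355.0, 598.0])  # instrument response

slope, intercept = np.polyfit(added, signal, 1)          # linear fit: signal = slope*added + intercept
endogenous_conc = intercept / slope                      # magnitude of the x-intercept
print(f"estimated endogenous concentration: {endogenous_conc:.1f} ng/mL")
```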
Manuel, Sharrón L; Johnson, Brian W; Frevert, Charles W; Duncan, Francesca E
2018-04-21
Immunohistochemistry (IHC) is a robust scientific tool whereby cellular components are visualized within a tissue, and this method has been and continues to be a mainstay for many reproductive biologists. IHC is highly informative if performed and interpreted correctly, but studies have shown that the general use and reporting of appropriate controls in IHC experiments is low. This omission of the scientific method can result in data that lacks rigor and reproducibility. In this editorial, we highlight key concepts in IHC controls and describe an opportunity for our field to partner with the Histochemical Society to adopt their IHC guidelines broadly as researchers, authors, ad hoc reviewers, editorial board members, and editors-in-chief. Such cross-professional society interactions will ensure that we produce the highest quality data as new technologies emerge that still rely upon the foundations of classic histological and immunohistochemical principles.
Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS
NASA Astrophysics Data System (ADS)
Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.
2017-04-01
To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the third-generation synchrotron light source Taiwan Photon Source (TPS) of the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB greatly depends on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS are essential. A rigorous mathematical model is very useful for shortening the design time and improving the performance of an FCPS. A rigorous mathematical model of a full-bridge FCPS in the FOFB of TPS, derived by the state-space averaging method, is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated through simulation. An FCPS prototype is realized to demonstrate the effectiveness of the proposed rigorous mathematical model. Simulation and experimental results show that the proposed model is helpful for selecting appropriate components to meet the accuracy requirements of an FCPS.
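A generic, textbook-style state-space averaged model of a full-bridge converter driving an LC output filter and resistive load is sketched below to illustrate the modelling approach named in the abstract; the component values, DC-link voltage, duty-cycle quantization, and the use of SciPy in place of MATLAB/SIMULINK are all assumptions for illustration and do not represent the actual TPS FCPS design.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic averaged model of a full-bridge converter with an LC output filter and
# resistive load (illustrative values, not the actual TPS FCPS design).
VDC, L, C, R_LOAD = 50.0, 1.0e-3, 10.0e-6, 2.0

def averaged_full_bridge(t, x, duty):
    """State-space averaged dynamics; x = [inductor current, capacitor voltage]."""
    i_l, v_c = x
    v_bridge = (2 * duty(t) - 1) * VDC      # bipolar modulation: average bridge voltage
    di_l = (v_bridge - v_c) / L
    dv_c = (i_l - v_c / R_LOAD) / C
    return [di_l, dv_c]

def quantized_duty(setpoint, bits):
    """Duty command rounded to a converter of the given resolution -- illustrates resolution effects."""
    levels = 2 ** bits
    return np.round(setpoint * levels) / levels

duty_cmd = lambda t: quantized_duty(0.55, bits=12)   # constant set-point, 12-bit quantization
sol = solve_ivp(averaged_full_bridge, (0, 5e-3), [0.0, 0.0],
                args=(duty_cmd,), max_step=1e-6)
print("steady-state output current (A):", sol.y[0, -1])
```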