Sample records for simple analytic estimates

  1. A simple method for estimating frequency response corrections for eddy covariance systems

    Treesearch

    W. J. Massman

    2000-01-01

    A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
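
    For orientation, corrections of this type weight an assumed cospectrum by transfer functions for each source of attenuation; a generic form (not Massman's specific closed-form result, which approximates the ratio analytically) treats each effect as an equivalent first-order filter with time constant tau_k:

      \[
        \frac{F_{\mathrm{meas}}}{F_{\mathrm{true}}}
          = \frac{\int_0^\infty H(f)\,\mathrm{Co}(f)\,df}{\int_0^\infty \mathrm{Co}(f)\,df},
        \qquad
        H(f) = \prod_k \frac{1}{1 + (2\pi f \tau_k)^2}
      \]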

  2. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
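
    For orientation, a widely cited one-dimensional analytical runup law of the kind referred to (Synolakis' result for non-breaking solitary waves on a plane beach of slope angle beta, not necessarily the exact formula used by the authors) is

      \[
        \frac{R}{d} = 2.831\,\sqrt{\cot\beta}\,\left(\frac{H}{d}\right)^{5/4},
      \]

    where H is the offshore wave height and d the offshore depth.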

  3. A simple, analytical, axisymmetric microburst model for downdraft estimation

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.

    1991-01-01

    A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
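
    The mass-continuity constraint that the shaping functions are constructed to satisfy is the standard axisymmetric identity, which is also what allows the vertical wind to be recovered from a horizontal wind profile:

      \[
        \frac{1}{r}\frac{\partial (r u)}{\partial r} + \frac{\partial w}{\partial z} = 0
        \quad\Longrightarrow\quad
        w(r,z) = -\int_0^z \frac{1}{r}\frac{\partial}{\partial r}\big(r\,u(r,z')\big)\,dz'.
      \]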

  4. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
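
    As an illustration of the kind of estimate involved, the sketch below sizes a two-stage stack with the ideal rocket equation plus a lumped loss term. The Isp values, structural fractions, and dv split are illustrative assumptions, not the paper's parametric relationships.

      import math

      G0 = 9.80665  # standard gravity, m/s^2

      def stage_mass_ratio(dv, isp):
          # Tsiolkovsky: m0/mf for a single burn
          return math.exp(dv / (isp * G0))

      def liftoff_mass(payload, dv_total, isp=(285.0, 290.0),
                       struct_frac=(0.12, 0.10), split=0.55):
          # payload [kg]; dv_total = orbital dv plus finite-burn, steering,
          # and drag losses [m/s]; isp and struct_frac per stage are assumed.
          dvs = (split * dv_total, (1.0 - split) * dv_total)
          mass = payload
          # size the upper stage first, then stack the lower stage under it
          for dv, sp, sf in reversed(list(zip(dvs, isp, struct_frac))):
              r = stage_mass_ratio(dv, sp)
              prop = mass * (r - 1.0) / (1.0 - sf * (r - 1.0))
              mass += prop * (1.0 + sf)  # propellant plus proportional inert mass
          return mass

      # ~4.1 km/s to low Mars orbit plus ~0.4 km/s of losses (rough numbers)
      print(liftoff_mass(payload=25.0, dv_total=4500.0))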

  5. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results and the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
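
    A minimal sketch of the shape of such a calculation, assuming simple slab geometry and a handbook-level attenuation coefficient (the paper's fitted correction factors are not reproduced here):

      import math

      def attenuated_rate(unshielded_rate, mu, thickness):
          # gamma flux escaping through `thickness` cm of material with
          # linear attenuation coefficient mu [1/cm] (narrow-beam form)
          return unshielded_rate * math.exp(-mu * thickness)

      def corrected_estimate(analytical, a=1.0, b=0.0):
          # the reported linear map analytical -> simulated; a and b are
          # placeholders here, not the paper's correction factors
          return a * analytical + b

      # Cs-137 (662 keV) in concrete: mu is roughly 0.2 /cm
      est = attenuated_rate(1.0e6, mu=0.2, thickness=5.0)
      print(corrected_estimate(est))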

  6. Keeping It Simple: Can We Estimate Malting Quality Potential Using an Isothermal Mashing Protocol and Common Laboratory Instrumentation?

    USDA-ARS's Scientific Manuscript database

    Current methods for generating malting quality metrics have been developed largely to support commercial malting and brewing operations, providing accurate, reproducible analytical data to guide malting and brewing production. Infrastructure to support these analytical operations often involves sub...

  7. A SIMPLE, EFFICIENT SOLUTION OF FLUX-PROFILE RELATIONSHIPS IN THE ATMOSPHERIC SURFACE LAYER

    EPA Science Inventory

    This note describes a simple scheme for analytical estimation of the surface layer similarity functions from state variables. What distinguishes this note from the many previous papers on this topic is that this method is specifically targeted for numerical models where simplici...

  8. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  9. ESTIMATION OF GROUNDWATER POLLUTION POTENTIAL BY PESTICIDES IN MID-ATLANTIC COASTAL PLAIN WATERSHEDS

    EPA Science Inventory

    A simple GIS-based transport model to estimate the potential for groundwater pollution by pesticides has been developed within the ArcView GIS environment. The pesticide leaching analytical model, which is based on one-dimensional advective-dispersive-reactive (ADR) transport, ha...

  10. Qualitative methods in quantum theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdal, A.B.

    The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)

  11. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4) where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
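
    A minimal sketch of the LRE idea for a single qubit (the paper treats general d-dimensional states and measurement bases; the identity design matrix below is the simplest special case):

      import numpy as np

      # rho = (I + r . sigma)/2, so Pauli expectation values are linear in the
      # Bloch vector r, and least squares recovers it from measured averages.
      I2 = np.eye(2)
      sigma = [np.array([[0, 1], [1, 0]], complex),
               np.array([[0, -1j], [1j, 0]], complex),
               np.array([[1, 0], [0, -1]], complex)]

      rng = np.random.default_rng(0)
      r_true = np.array([0.3, -0.4, 0.5])   # Bloch vector of the true state
      shots = 10_000

      # simulated data: empirical mean of +/-1 outcomes per Pauli basis
      y = np.array([rng.choice([1, -1], shots,
                               p=[(1 + ri) / 2, (1 - ri) / 2]).mean()
                    for ri in r_true])

      # each basis measures one Bloch component, so the design matrix is the
      # identity; for general bases X has one row per measurement operator
      X = np.eye(3)
      r_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
      rho_hat = 0.5 * (I2 + sum(ri * si for ri, si in zip(r_hat, sigma)))
      print(np.round(rho_hat, 3))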

  12. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations from a TMS Coil

    PubMed Central

    Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro

    2016-01-01

    Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221

  13. Nonlinear estimation for arrays of chemical sensors

    NASA Astrophysics Data System (ADS)

    Yosinski, Jason; Paffenroth, Randy

    2010-04-01

    Reliable detection of hazardous materials is a fundamental requirement of any national security program. Such materials can take a wide range of forms including metals, radioisotopes, volatile organic compounds, and biological contaminants. In particular, detection of hazardous materials in highly challenging conditions - such as in cluttered ambient environments, where complex collections of analytes are present, and with sensors lacking specificity for the analytes of interest - is an important part of a robust security infrastructure. Sophisticated single sensor systems provide good specificity for a limited set of analytes but often have cumbersome hardware and environmental requirements. On the other hand, simple, broadly responsive sensors are easily fabricated and efficiently deployed, but such sensors individually have neither the specificity nor the selectivity to address analyte differentiation in challenging environments. However, arrays of broadly responsive sensors can provide much of the sensitivity and selectivity of sophisticated sensors but without the substantial hardware overhead. Unfortunately, arrays of simple sensors are not without their challenges - the selectivity of such arrays can only be realized if the data is first distilled using highly advanced signal processing algorithms. In this paper we will demonstrate how the use of powerful estimation algorithms, based on those commonly used within the target tracking community, can be extended to the chemical detection arena. Herein our focus is on algorithms that not only provide accurate estimates of the mixture of analytes in a sample, but also provide robust measures of ambiguity, such as covariances.

  14. Estimating hydraulic properties from tidal attenuation in the Northern Guam Lens Aquifer, territory of Guam, USA

    USGS Publications Warehouse

    Rotzoll, Kolja; Gingerich, Stephen B.; Jenson, John W.; El-Kadi, Aly I.

    2013-01-01

    Tidal-signal attenuations are analyzed to compute hydraulic diffusivities and estimate regional hydraulic conductivities of the Northern Guam Lens Aquifer, Territory of Guam (Pacific Ocean), USA. The results indicate a significant tidal-damping effect at the coastal boundary. Hydraulic diffusivities computed using a simple analytical solution for well responses to tidal forcings near the periphery of the island are two orders of magnitude lower than for wells in the island’s interior. Based on assigned specific yields of ~0.01–0.4, estimated hydraulic conductivities are ~20–800 m/day for peripheral wells, and ~2,000–90,000 m/day for interior wells. The lower conductivity of the peripheral rocks relative to the interior rocks may best be explained by the effects of karst evolution: (1) dissolutional enhancement of horizontal hydraulic conductivity in the interior; (2) case-hardening and concurrent reduction of local hydraulic conductivity in the cliffs and steeply inclined rocks of the periphery; and (3) the stronger influence of higher-conductivity regional-scale features in the interior relative to the periphery. A simple numerical model calibrated with measured water levels and tidal response estimates values for hydraulic conductivity and storage parameters consistent with the analytical solution. The study demonstrates how simple techniques can be useful for characterizing regional aquifer properties.
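
    A sketch of the classical homogeneous-aquifer relation underlying such analyses (the Ferris/Jacob tidal-attenuation solution; the study's layered karst setting is more involved). The tidal amplitude decays inland as exp(-x*sqrt(pi*S/(T*t0))), which inverts directly for the diffusivity D = T/S:

      import math

      def diffusivity_from_attenuation(x, t0, amp_ratio):
          # x: distance from the coast [m]; t0: tidal period [s];
          # amp_ratio: well amplitude / ocean amplitude (0 < ratio < 1)
          return math.pi * x**2 / (t0 * math.log(amp_ratio)**2)

      # e.g. 60% damping 500 m inland for the ~12.42 h semidiurnal tide
      print(diffusivity_from_attenuation(x=500.0, t0=12.42 * 3600,
                                         amp_ratio=0.4))   # m^2/s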

  15. Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, T.E.

    1996-01-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.

  16. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  17. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE PAGES

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol; ...

    2017-07-10

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  18. Experimental and analytical studies for the NASA carbon fiber risk assessment

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Various experimental and analytical studies performed for the NASA carbon fiber risk assessment program are described, with emphasis on carbon fiber characteristics, the sensitivity of electrical equipment and components to shorting or arcing by carbon fibers, the attenuation effect of carbon fibers on aircraft landing aids, and the impact of carbon fibers on industrial facilities. A simple method of estimating damage from airborne carbon fibers is presented.

  19. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of ship hulls in composite materials under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination is predicted under buckling or material failure with composite beam-column theory. The effects of initial imperfection of the ship hull and eccentricity of load are included. Corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results are in good agreement with the FEM results. The initial deflection of the ship hull and eccentricity of load can dramatically reduce its bending capacity. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  20. Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.

  1. Branch length estimation and divergence dating: estimates of error in Bayesian and maximum likelihood frameworks.

    PubMed

    Schwartz, Rachel S; Mueller, Rachel L

    2010-01-11

    Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset size, branch length heterogeneity, branch depth, and analytical framework on branch length estimation across a range of branch lengths. We then reanalyzed an empirical dataset for plethodontid salamanders to determine how inaccurate branch length estimation can affect estimates of divergence dates. The accuracy of branch length estimation varied with branch length, dataset size (both number of taxa and sites), branch length heterogeneity, branch depth, dataset complexity, and analytical framework. For simple phylogenies analyzed in a Bayesian framework, branches were increasingly underestimated as branch length increased; in a maximum likelihood framework, longer branch lengths were somewhat overestimated. Longer datasets improved estimates in both frameworks; however, when the number of taxa was increased, estimation accuracy for deeper branches was less than for tip branches. Increasing the complexity of the dataset produced more misestimated branches in a Bayesian framework; however, in an ML framework, more branches were estimated more accurately. Using ML branch length estimates to re-estimate plethodontid salamander divergence dates generally resulted in an increase in the estimated age of older nodes and a decrease in the estimated age of younger nodes. Branch lengths are misestimated in both statistical frameworks for simulations of simple datasets. However, for complex datasets, length estimates are quite accurate in ML (even for short datasets), whereas few branches are estimated accurately in a Bayesian framework. Our reanalysis of empirical data demonstrates the magnitude of effects of Bayesian branch length misestimation on divergence date estimates. Because the length of branches for empirical datasets can be estimated most reliably in an ML framework when branches are <1 substitution/site and datasets are > or =1 kb, we suggest that divergence date estimates using datasets, branch lengths, and/or analytical techniques that fall outside of these parameters should be interpreted with caution.

  2. Heat as a groundwater tracer in shallow and deep heterogeneous media: Analytical solution, spreadsheet tool, and field applications

    USGS Publications Warehouse

    Kurylyk, Barret L.; Irvine, Dylan J.; Carey, Sean K.; Briggs, Martin A.; Werkema, Dale D.; Bonham, Mariah

    2017-01-01

    Groundwater flow advects heat, and thus, the deviation of subsurface temperatures from an expected conduction‐dominated regime can be analysed to estimate vertical water fluxes. A number of analytical approaches have been proposed for using heat as a groundwater tracer, and these have typically assumed a homogeneous medium. However, heterogeneous thermal properties are ubiquitous in subsurface environments, both at the scale of geologic strata and at finer scales in streambeds. Herein, we apply the analytical solution of Shan and Bodvarsson (2004), developed for estimating vertical water fluxes in layered systems, in 2 new environments distinct from previous vadose zone applications. The utility of the solution for studying groundwater‐surface water exchange is demonstrated using temperature data collected from an upwelling streambed with sediment layers, and a simple sensitivity analysis using these data indicates the solution is relatively robust. Also, a deeper temperature profile recorded in a borehole in South Australia is analysed to estimate deeper water fluxes. The analytical solution is able to match observed thermal gradients, including the change in slope at sediment interfaces. Results indicate that not accounting for layering can yield errors in the magnitude and even direction of the inferred Darcy fluxes. A simple automated spreadsheet tool (Flux‐LM) is presented to allow users to input temperature and layer data and solve the inverse problem to estimate groundwater flux rates from shallow (e.g., <1 m) or deep (e.g., up to 100 m) profiles. The solution is not transient, and thus, it should be cautiously applied where diel signals propagate or in deeper zones where multi‐decadal surface signals have disturbed subsurface thermal regimes.
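
    For context, the layered solution applied here generalizes the classical homogeneous-medium steady-state result of Bredehoeft and Papadopulos (1965), in which the curvature of the temperature profile between depths 0 and L (boundary temperatures T_0 and T_L) reflects the vertical Darcy flux q:

      \[
        \frac{T(z) - T_0}{T_L - T_0} = \frac{e^{\mathrm{Pe}\,z/L} - 1}{e^{\mathrm{Pe}} - 1},
        \qquad
        \mathrm{Pe} = \frac{\rho_w c_w\, q\, L}{\lambda}.
      \]

    Fitting an observed profile to this curve yields q; the layered form allows distinct thermal properties in each stratum, which is what produces the slope changes at sediment interfaces noted above.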

  3. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents a new unified analysis of estimate errors by model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. Analytical solutions about estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of universality and applicability, a new trajectory-oriented vector control method is proposed, which can realize directly quasi-optimal strategy minimizing total losses with no additional computational loads by simply orienting one of vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.

  4. Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry

    PubMed Central

    Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui

    2014-01-01

    Matrix-assisted laser desorption/ionization has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
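
    TAD builds on the classical standard-addition idea. A minimal sketch of the classical extrapolation (illustrative numbers), which recovers the endogenous concentration from the fitted line, whereas TAD instead uses the spike to lift the endogenous analyte above the noise floor:

      import numpy as np

      spikes = np.array([0.0, 5.0, 10.0, 20.0])   # added analyte, nM
      signal = np.array([2.1, 6.8, 12.2, 21.9])   # measured response (toy data)

      # signal = k * (c0 + spike)  =>  c0 = intercept / slope
      slope, intercept = np.polyfit(spikes, signal, 1)
      c0 = intercept / slope
      print(f"estimated endogenous concentration: {c0:.2f} nM")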

  5. Targeted analyte detection by standard addition improves detection limits in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Toghi Eshghi, Shadi; Li, Xingde; Zhang, Hui

    2012-09-18

    Matrix-assisted laser desorption/ionization (MALDI) has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhiber, R; Usmanov, AV; Matthaeus, WH

    Simple estimates of the number of Coulomb collisions experienced by the interplanetary plasma to the point of observation, i.e., the “collisional age”, can be usefully employed in the study of non-thermal features of the solar wind. Usually these estimates are based on local plasma properties at the point of observation. Here we improve the method of estimation of the collisional age by employing solutions obtained from global three-dimensional magnetohydrodynamics simulations. This enables evaluation of the complete analytical expression for the collisional age without using approximations. The improved estimation of the collisional timescale is compared with turbulence and expansion timescales to assess the relative importance of collisions. The collisional age computed using the approximate formula employed in previous work is compared with the improved simulation-based calculations to examine the validity of the simplified formula. We also develop an analytical expression for the evaluation of the collisional age and we find good agreement between the numerical and analytical results. Finally, we briefly discuss the implications for an improved estimation of collisionality along spacecraft trajectories, including Solar Probe Plus.
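
    Schematically, the quantity being refined is the number of collision times accumulated along the flow from an inner boundary r_0 to the observation point; the simulations supply the collision frequency and wind speed along the whole path rather than only locally:

      \[
        A_c(r) = \int_{r_0}^{r} \frac{\nu_C(r')}{V_{\mathrm{sw}}(r')}\, dr'.
      \]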

  7. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  8. Stationary echo canceling in velocity estimation by time-domain cross-correlation.

    PubMed

    Jensen, J A

    1993-01-01

    The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence from the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated that the filtration results in a velocity-dependent degradation of the signal-to-noise ratio. An analytic expression is given for the degradation for a realistic pulse. The probability of correct detection at low signal-to-noise ratios is influenced by signal-to-noise ratio, transducer bandwidth, center frequency, number of samples in the range gate, and number of A-lines employed in the estimation. Quantitative results calculated by a simple simulation program are given for the variation in probability from these parameters. An index reflecting the reliability of the estimate at hand can be calculated from the actual cross-correlation estimate by a simple formula and used in rejecting poor estimates or in displaying the reliability of the velocity estimated.
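
    A minimal sketch of the underlying cross-correlation velocity estimate (no echo canceler or noise model; illustrative parameter values). The axial velocity follows from the inter-pulse time shift via v = c*ts/(2*Tprf):

      import numpy as np

      fs = 100e6          # RF sampling rate, Hz
      f0 = 5e6            # transducer center frequency, Hz
      c = 1540.0          # speed of sound in tissue, m/s
      t_prf = 1.0 / 5e3   # pulse repetition interval, s
      v_true = 0.5        # axial blood velocity, m/s

      t = np.arange(0, 2e-6, 1 / fs)

      def pulse(tt):
          # 5 MHz tone under a Gaussian envelope centered at 1 us
          return (np.sin(2 * np.pi * f0 * tt)
                  * np.exp(-((tt - 1e-6) ** 2) / (2 * (0.2e-6) ** 2)))

      shift = 2 * v_true * t_prf / c         # round-trip shift between lines
      line1, line2 = pulse(t), pulse(t - shift)

      # lag (in samples) at the cross-correlation peak
      lag = np.argmax(np.correlate(line2, line1, mode="full")) - (len(t) - 1)
      v_est = c * (lag / fs) / (2 * t_prf)
      print(v_true, v_est)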

  9. A Simple Model of Global Aerosol Indirect Effects

    NASA Technical Reports Server (NTRS)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
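
    One analytic ingredient on which simple AIE models of this kind rest is Twomey's cloud-albedo susceptibility at fixed liquid water content, which links a fractional change in droplet number N_d to an albedo change:

      \[
        \frac{\partial A}{\partial \ln N_d} \approx \frac{A(1-A)}{3}.
      \]

    The global forcing then follows from integrating such relations over the assumed cloud and aerosol distributions, which is where the model's many parameters enter.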

  10. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of estimate errors by model-matching phase-estimation methods such as rotor-flux state-observers, back EMF state-observers, and back EMF disturbance-observers, for sensorless drive of permanent-magnet synchronous motors. Analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are highly universal and broadly applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using one of the model-matching phase-estimation methods.

  11. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  12. A simple analytical model of coupled single flow channel over porous electrode in vanadium redox flow battery with serpentine flow channel

    NASA Astrophysics Data System (ADS)

    Ke, Xinyou; Alexander, J. Iwan D.; Prahl, Joseph M.; Savinell, Robert F.

    2015-08-01

    A simple analytical model of a layered system comprised of a single passage of a serpentine flow channel and a parallel underlying porous electrode (or porous layer) is proposed. This analytical model is derived from Navier-Stokes motion in the flow channel and the Darcy-Brinkman model in the porous layer. The continuities of flow velocity and normal stress are applied at the interface between the flow channel and the porous layer. The effects of the inlet volumetric flow rate, thickness of the flow channel and thickness of a typical carbon fiber paper porous layer on the volumetric flow rate within this porous layer are studied. The maximum current density based on the electrolyte volumetric flow rate is predicted, and found to be consistent with reported numerical simulations. It is found that, for a mean inlet flow velocity of 33.3 cm s-1, the analytical maximum current density is estimated to be 377 mA cm-2, which compares favorably with the experimental result of ∼400 mA cm-2 reported by others.
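
    The flow-limited bound referred to is the stoichiometric limit: the electrode cannot convert charge faster than the electrolyte stream supplies reactant. A minimal sketch with illustrative values chosen to land near the reported magnitude:

      F = 96485.0     # Faraday constant, C/mol
      n = 1           # electrons transferred per vanadium redox event
      c = 1500.0      # reactant concentration, mol/m^3 (1.5 M)
      q = 7.0e-8      # flow through the porous layer, m^3/s (illustrative;
                      # only a fraction of the channel feed crosses the electrode)
      area = 25.0e-4  # active electrode area, m^2 (25 cm^2)

      i_max = n * F * c * q / area    # flow-limited current density, A/m^2
      print(i_max / 10.0, "mA/cm^2")  # 1 A/m^2 == 0.1 mA/cm^2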

  13. Impact of correlated magnetic noise on the detection of stochastic gravitational waves: Estimation based on a simple analytical model

    NASA Astrophysics Data System (ADS)

    Himemoto, Yoshiaki; Taruya, Atsushi

    2017-07-01

    After the first direct detection of gravitational waves (GW), detection of the stochastic background of GWs is an important next step, and the first GW event suggests that it is within the reach of the second-generation ground-based GW detectors. Such a GW signal is typically tiny and can be detected by cross-correlating the data from two spatially separated detectors if the detector noise is uncorrelated. It has been advocated, however, that the global magnetic fields in the Earth-ionosphere cavity produce the environmental disturbances at low-frequency bands, known as Schumann resonances, which potentially couple with GW detectors. In this paper, we present a simple analytical model to estimate its impact on the detection of stochastic GWs. The model crucially depends on the geometry of the detector pair through the directional coupling, and we investigate the basic properties of the correlated magnetic noise based on the analytic expressions. The model reproduces the major trend of the recently measured global correlation between the GW detectors via magnetometer. The estimated values of the impact of correlated noise also match those obtained from the measurement. Finally, we give an implication to the detection of stochastic GWs including upcoming detectors, KAGRA and LIGO India. The model suggests that LIGO Hanford-Virgo and Virgo-KAGRA pairs are possibly less sensitive to the correlated noise and can achieve a better sensitivity to the stochastic GW signal in the most pessimistic case.

  14. A simple model for strong ground motions and response spectra

    USGS Publications Warehouse

    Safak, Erdal; Mueller, Charles; Boatwright, John

    1988-01-01

    A simple model for the description of strong ground motions is introduced. The model shows that response spectra can be estimated by using only four parameters of the ground motion, the RMS acceleration, effective duration and two corner frequencies that characterize the effective frequency band of the motion. The model is windowed band-limited white noise, and is developed by studying the properties of two functions, cumulative squared acceleration in the time domain, and cumulative squared amplitude spectrum in the frequency domain. Applying the methods of random vibration theory, the model leads to a simple analytical expression for the response spectra. The accuracy of the model is checked by using the ground motion recordings from the aftershock sequences of two different earthquakes and simulated accelerograms. The results show that the model gives a satisfactory estimate of the response spectra.

  15. A Simple Method for Deriving the Confidence Regions for the Penalized Cox’s Model via the Minimand Perturbation†

    PubMed Central

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
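
    A generic sketch of the minimand-perturbation idea (not the paper's adaptive-LASSO algorithm): reweight each observation's contribution to the minimand with independent Exp(1) draws, re-minimize, and read confidence regions off the spread of the perturbed estimates. A ridge penalty is used below because its weighted minimizer has a closed form:

      import numpy as np

      rng = np.random.default_rng(1)
      n, p, lam = 200, 5, 1.0
      X = rng.normal(size=(n, p))
      beta = np.array([1.0, -0.5, 0.0, 0.0, 2.0])
      y = X @ beta + rng.normal(size=n)

      def ridge(w):
          # argmin_b  sum_i w_i (y_i - x_i b)^2 + lam * ||b||^2
          Xw = X * w[:, None]
          return np.linalg.solve(X.T @ Xw + lam * np.eye(p), Xw.T @ y)

      # perturb the minimand 1000 times and take percentile intervals
      draws = np.array([ridge(rng.exponential(size=n)) for _ in range(1000)])
      lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
      print(np.column_stack([lo, ridge(np.ones(n)), hi]))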

  16. A Simple Method for Deriving the Confidence Regions for the Penalized Cox's Model via the Minimand Perturbation.

    PubMed

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.

  17. Replica and extreme-value analysis of the Jarzynski free-energy estimator

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo; Ritort, Felix

    2008-03-01

    We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes well Monte Carlo data for the expectation value of the estimator, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
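
    A minimal numerical illustration of the estimator and its bias, using Gaussian work distributions for which the exact answer is known (dF = <W> - var(W)/2 in units of k_B*T); the finite-N estimate overshoots, and the bias grows with the mean dissipated work:

      import numpy as np

      rng = np.random.default_rng(0)
      mean_w, var_w = 5.0, 4.0
      dF_exact = mean_w - var_w / 2      # = 3.0; dissipated work W_dis = 2.0

      for n_meas in (10, 100, 10_000):
          # 1000 independent repetitions of an N-measurement experiment
          w = rng.normal(mean_w, np.sqrt(var_w), size=(1000, n_meas))
          dF_hat = -np.log(np.exp(-w).mean(axis=1))   # Jarzynski estimator
          print(n_meas, dF_hat.mean() - dF_exact)     # mean bias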

  18. Performance Analysis of Blind Subspace-Based Signature Estimation Algorithms for DS-CDMA Systems with Unknown Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zarifi, Keyvan; Gershman, Alex B.

    2006-12-01

    We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.

  19. Development of a new semi-analytical model for cross-borehole flow experiments in fractured media

    USGS Publications Warehouse

    Roubinet, Delphine; Irving, James; Day-Lewis, Frederick D.

    2015-01-01

    Analysis of borehole flow logs is a valuable technique for identifying the presence of fractures in the subsurface and estimating properties such as fracture connectivity, transmissivity and storativity. However, such estimation requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. In this paper, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. In comparison with existing models, our approach presents major improvements in terms of computational expense and potential adaptation to a variety of fracture and experimental configurations. After derivation of the formulation, we demonstrate its application in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as for field-data analysis to investigate fracture connectivity and estimate fracture hydraulic properties. These applications provide important insights regarding (i) the strong sensitivity of fracture property estimates to the overall connectivity of the system; and (ii) the non-uniqueness of the corresponding inverse problem for realistic fracture configurations.

  20. Estimate of Cosmic Muon Background for Shallow Underground Neutrino Detectors

    NASA Astrophysics Data System (ADS)

    Casimiro, E.; Simão, F. R. A.; Anjos, J. C.

    One of the severe limitations in detecting neutrino signals from nuclear reactors is that the copious cosmic ray background imposes the use of a time veto upon the passage of the muons to reduce the number of fake signals due to muon-induced spallation neutrons. For this reason neutrino detectors are usually located underground, with a large overburden. However, there are practical limitations that prevent locating the detectors at large depths underground. In order to decide the depth underground at which the Neutrino Angra Detector (currently in preparation) should be installed, an estimate of the cosmogenic background in the detector as a function of the depth is required. We report here a simple analytical estimation of the muon rates in the detector volume for different plausible depths, assuming a simple plain overburden geometry. We extend the calculation to the case of the San Onofre neutrino detector and to the case of the Double Chooz neutrino detector, where other estimates or measurements have been performed. Our estimated rates are consistent with these.

  1. Conservative Analytical Collision Probabilities for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  2. Conservative Analytical Collision Probability for Design of Orbital Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  3. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Summary: Determining thawing times of frozen foods is a challenging problem as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using the numerical methods is not always practical, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387
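
    The archetype of the simple analytical equations in this family is Plank's formula, quoted here for orientation (for a slab of thickness a; the reviewed models refine its assumptions):

      \[
        t = \frac{\rho\, L}{T_f - T_a}\left(\frac{P\,a}{h} + \frac{R\,a^2}{k}\right),
        \qquad P = \tfrac{1}{2},\; R = \tfrac{1}{8} \ \text{(infinite slab)},
      \]

    where ρ is the product density, L the latent heat, T_f the initial freezing point, T_a the medium temperature, h the surface heat transfer coefficient, and k the thermal conductivity of the phase-changed layer.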

  4. SIMPLE MODEL OF ICE SEGREGATION USING AN ANALYTIC FUNCTION TO MODEL HEAT AND SOIL-WATER FLOW.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1984-01-01

    This paper reports on the development of a simple two-dimensional model of coupled heat and soil-water flow in freezing or thawing soil. The model also estimates ice-segregation (frost-heave) evolution. Ice segregation in soil results from water drawn into a freezing zone by hydraulic gradients created by the freezing of soil-water. Thus, with a favorable balance between the rate of heat extraction and the rate of water transport to a freezing zone, segregated ice lenses may form.

  5. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
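
    For reference, the kernel density estimate used in such signal processing has the standard form f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h); a minimal Gaussian-kernel sketch on toy timing data:

      import numpy as np

      def kde(samples, grid, h):
          # Gaussian-kernel density estimate evaluated on `grid`
          z = (grid[:, None] - samples[None, :]) / h
          return (np.exp(-0.5 * z**2).sum(axis=1)
                  / (len(samples) * h * np.sqrt(2 * np.pi)))

      rng = np.random.default_rng(0)
      arrival_times = rng.normal(0.0, 0.3, size=500)   # toy timing data, ns
      grid = np.linspace(-1.5, 1.5, 7)
      print(np.round(kde(arrival_times, grid, h=0.1), 3))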

  6. Prediction of Petermann I and II Spot Sizes for Single-mode Dispersion-shifted and Dispersion-flattened Fibers by a Simple Technique

    NASA Astrophysics Data System (ADS)

    Kamila, Kiranmay; Panda, Anup Kumar; Gangopadhyay, Sankar

    2013-09-01

    Employing the series expression for the fundamental modal field of dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers, we present simple but accurate analytical expressions for the Petermann I and II spot sizes of such fibers. Choosing some typical dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers as examples, we show that our estimates match the exact numerical results excellently. The evaluation of the concerned propagation parameters by our formalism requires very little computation. This accurate but simple formalism will benefit system engineers working in the field of all-optical technology.
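
    For reference, the two spot sizes being approximated are defined from the fundamental modal field ψ(r) by the standard expressions:

      \[
        w_{\mathrm{PI}}^2 = \frac{2\int_0^\infty \psi^2(r)\, r^3\, dr}
                                 {\int_0^\infty \psi^2(r)\, r\, dr},
        \qquad
        w_{\mathrm{PII}}^2 = \frac{2\int_0^\infty \psi^2(r)\, r\, dr}
                                  {\int_0^\infty \left(d\psi/dr\right)^2 r\, dr}.
      \]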

  7. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and limits of its applicability are given.

  8. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  9. Upper and lower bounds for the speed of pulled fronts with a cut-off

    NASA Astrophysics Data System (ADS)

    Benguria, R. D.; Depassier, M. C.; Loss, M.

    2008-02-01

    We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type a simple analytic upper bound is given. The lower bounds however depend on details of the reaction term. For a small cut-off parameter the two leading order terms in the asymptotic expansion of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide and permit a simple estimation of the speed of the front.

  10. Modulational estimate for the maximal Lyapunov exponent in Fermi-Pasta-Ulam chains

    NASA Astrophysics Data System (ADS)

    Dauxois, Thierry; Ruffo, Stefano; Torcini, Alessandro

    1997-12-01

    In the framework of the Fermi-Pasta-Ulam (FPU) model, we show a simple method to give an accurate analytical estimation of the maximal Lyapunov exponent at high energy density. The method is based on the computation of the mean value of the modulational instability growth rates associated with unstable modes. Moreover, we show that the strong stochasticity threshold found in the β-FPU system is closely related to a transition in tangent space, the Lyapunov eigenvector being more localized in space at high energy.

  11. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
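
    A minimal sketch of the kind of inference loop described above: random-walk Metropolis sampling over the parameters of an analytical saturation pdf fitted to an empirical soil-saturation record. A Beta density is used here as a stand-in for the (more involved) stochastic soil water balance pdf, and the "observations" are synthetic; both are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic daily soil-saturation record standing in for field observations.
    obs = rng.beta(2.0, 5.0, size=100)

    def log_posterior(theta, data):
        a, b = theta
        if a <= 0.0 or b <= 0.0:
            return -np.inf                      # flat prior on positive shapes
        return np.sum(stats.beta.logpdf(data, a, b))

    # Random-walk Metropolis over the two shape parameters.
    theta = np.array([1.0, 1.0])
    lp = log_posterior(theta, obs)
    chain = []
    for _ in range(20000):
        proposal = theta + rng.normal(scale=0.1, size=2)
        lp_prop = log_posterior(proposal, obs)
        if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept step
            theta, lp = proposal, lp_prop
        chain.append(theta)
    chain = np.array(chain[5000:])                  # discard burn-in
    print("posterior means:", chain.mean(axis=0))
    print("posterior CVs  :", chain.std(axis=0) / chain.mean(axis=0))
    ```

    The posterior coefficients of variation printed at the end are the analogue of the parameter uncertainties quoted in the record.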

  12. Thermodynamics of Thomas-Fermi screened Coulomb systems

    NASA Technical Reports Server (NTRS)

    Firey, B.; Ashcroft, N. W.

    1977-01-01

    We obtain, in closed analytic form, estimates for the thermodynamic properties of classical fluids with pair potentials of Yukawa type, with special reference to dense fully ionized plasmas with Thomas-Fermi or Debye-Hueckel screening. We further generalize the hard-sphere perturbative approach used for similarly screened two-component mixtures, and demonstrate phase separation in this simple model of a liquid mixture of metallic helium and hydrogen.

  13. Generation of short electron bunches by a laser pulse crossing a sharp boundary of inhomogeneous plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, S. V., E-mail: svk-IVTAN@yandex.ru

    The formation of short electron bunches during the passage of a laser pulse of relativistic intensity through a sharp boundary of semi-bounded plasma has been analytically studied. It is shown in one-dimensional geometry that one physical mechanism that is responsible for the generation of electron bunches is their self-injection into the wake field of a laser pulse, which occurs due to the mixing of electrons during the action of the laser pulse on plasma. Simple analytic relationships are obtained that can be used for estimating the length and charge of an electron bunch and the spread of electron energies in the bunch. The results of the analytical investigation are confirmed by data from numerical simulations.

  14. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using the Chebyshev power series approach, accurate descriptions of the first higher-order (LP11) mode of graded index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, excitation efficiency, and Petermann I and II spot sizes, with their approximate analytic formulations. We show that while approximations using two and three Chebyshev points in the LP11 mode yield fairly accurate results, values based on our calculations involving four Chebyshev points match the available exact numerical results excellently.

  15. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Illman, Walter A.

    2016-09-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
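
    The fingerprinting above rests on superposing Theis responses to each pumping-rate change. A minimal sketch of the underlying forward model (not the WELLS/PEST implementation itself), with hypothetical parameter values:

    ```python
    import numpy as np
    from scipy.special import exp1   # the Theis well function W(u) = exp1(u)

    def theis_drawdown(r, t, Q, T, S):
        """Drawdown s(r, t) at radius r for a constant rate Q in a confined aquifer."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # Hypothetical values: Q in m^3/day, T in m^2/day, S dimensionless.
    t = np.array([0.1, 1.0, 10.0])   # days since pumping started
    print(theis_drawdown(r=50.0, t=t, Q=1000.0, T=250.0, S=2e-4))
    ```

    Variable pumping/injection records are handled by superposing one such term per rate increment, which is what the calibration then fits against the observed hydrographs.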

  16. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556

  17. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
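
    The "simple analytic geometry" amounts to back-projecting the clicked pixel through the calibrated camera and intersecting the ray with a known plane. A sketch for a floor-plane target, with hypothetical calibration values and coordinate conventions (the paper's setup may differ):

    ```python
    import numpy as np

    # Hypothetical intrinsics from calibration: focal lengths and principal point.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    t = np.radians(20.0)   # camera pitched down by 20 degrees (assumed)
    h = 0.30               # camera height above the floor in metres (assumed)

    # Columns are the camera axes (right, image-down, forward) expressed in a
    # world frame with x right, y forward, z up.
    R_wc = np.array([[1.0,        0.0,        0.0],
                     [0.0, -np.sin(t),  np.cos(t)],
                     [0.0, -np.cos(t), -np.sin(t)]])

    def pixel_to_floor(u, v):
        """Back-project a clicked pixel and intersect the ray with the floor z = 0."""
        d_cam = np.linalg.solve(K, np.array([u, v, 1.0]))   # ray in camera frame
        d_world = R_wc @ d_cam
        if d_world[2] >= 0.0:
            raise ValueError("ray does not hit the floor")
        s = -h / d_world[2]                                 # ray-plane intersection
        return np.array([0.0, 0.0, h]) + s * d_world

    print(pixel_to_floor(320.0, 300.0))   # a click slightly below the image centre
    ```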

  18. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
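
    The resampling bootstrap can be illustrated with a minimal sketch: here plain mean life expectancies on synthetic arm-level data stand in for the full RPSFT-adjusted decision model, which in practice would be re-run inside every bootstrap replicate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical per-patient life expectancies (years) in two arms of a trial.
    control = rng.exponential(scale=2.0, size=200)
    treated = rng.exponential(scale=2.6, size=200)

    def incremental_le(c, t):
        return t.mean() - c.mean()

    point = incremental_le(control, treated)

    # Nonparametric bootstrap: resample patients with replacement and
    # recompute the incremental life expectancy each time.
    boot = np.array([
        incremental_le(rng.choice(control, control.size, replace=True),
                       rng.choice(treated, treated.size, replace=True))
        for _ in range(5000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"incremental LE = {point:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
    ```

    The spread of the bootstrap distribution is the extra uncertainty that the unadjusted approach omits.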

  19. Analytical Studies on the Synchronization of a Network of Linearly-Coupled Simple Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Sivaganesh, G.; Arulgnanam, A.; Seethalakshmi, A. N.; Selvaraj, S.

    2018-05-01

    We present explicit generalized analytical solutions for a network of linearly-coupled simple chaotic systems. Analytical solutions are obtained for the normalized state equations of a network of linearly-coupled systems driven by a common chaotic drive system. Two-parameter bifurcation diagrams revealing the various hidden synchronization regions, such as complete, phase and phase-lag synchronization, are identified using the analytical results. The synchronization dynamics and their stability are studied using phase portraits and the master stability function, respectively. Further, experimental results for linearly-coupled simple chaotic systems are presented to confirm the analytical results. The synchronization dynamics of a network of chaotic systems studied analytically is reported for the first time.

  20. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, with the combination of Δbias and CV_A based on log-Gaussian distributions of CV_I expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those of the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
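
    A sketch of how the two distributional assumptions translate into reference change value limits, using the standard Gaussian RCV formula and a log-Gaussian counterpart; the CV values are illustrative, not the paper's prolactin or glucose data:

    ```python
    import numpy as np

    Z = 1.96          # two-sided 95% significance
    cv_a = 0.05       # analytical imprecision (CV_A), illustrative
    cv_i = 0.20       # within-subject biological variation (CV_I), illustrative

    # Classical Gaussian form of the reference change value.
    rcv_gauss = np.sqrt(2.0) * Z * np.sqrt(cv_a**2 + cv_i**2)

    # Log-Gaussian form: work with the standard deviation of ln(x), which
    # gives asymmetric limits for a significant rise versus fall.
    sigma = np.sqrt(np.log(1.0 + cv_a**2) + np.log(1.0 + cv_i**2))
    rcv_up = np.exp(np.sqrt(2.0) * Z * sigma) - 1.0
    rcv_down = np.exp(-np.sqrt(2.0) * Z * sigma) - 1.0

    print(f"Gaussian RCV: +/-{100 * rcv_gauss:.1f}%")
    print(f"log-Gaussian RCV: +{100 * rcv_up:.1f}% / {100 * rcv_down:.1f}%")
    ```

    For small CVs the two forms nearly coincide, which mirrors the record's conclusion that the traditional model remains generally useful.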

  1. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  2. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
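
    A compact sketch of the search-curve variant discussed in these records: sample along the classical FAST curve, Fourier-analyse the scalar output, and read each parameter's main-effect partial variance off the harmonics of its driving frequency. The frequency set and the Ishigami test function are illustrative choices, not the paper's:

    ```python
    import numpy as np

    def fast_first_order(model, omegas, n=10001, harmonics=4):
        """Classical FAST main-effect indices via the search curve
        x_i(s) = 1/2 + arcsin(sin(omega_i * s)) / pi, s in (-pi, pi]."""
        s = np.pi * (2.0 * np.arange(1, n + 1) - n - 1) / n
        x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
        y = model(x)

        def ampl2(j):
            # Squared Fourier amplitude of the output at harmonic j.
            a = np.mean(y * np.cos(j * s))
            b = np.mean(y * np.sin(j * s))
            return a * a + b * b

        total = np.var(y)
        return [2.0 * sum(ampl2(p * om) for p in range(1, harmonics + 1)) / total
                for om in omegas]

    def ishigami(x):
        # Standard sensitivity-analysis test function on [-pi, pi]^3.
        z = -np.pi + 2.0 * np.pi * x
        return np.sin(z[0]) + 7.0 * np.sin(z[1])**2 + 0.1 * z[2]**4 * np.sin(z[0])

    # An interference-free frequency set for three parameters (illustrative).
    print(fast_first_order(ishigami, omegas=[11, 21, 31]))
    ```

    The three printed indices should approximate the known Ishigami main effects (roughly 0.31, 0.44 and 0), with the third parameter acting only through interaction.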

  3. The varieties of symmetric stellar rings and radial caustics in galaxy disks

    NASA Technical Reports Server (NTRS)

    Struck-Marcell, Curtis; Lotan, Pnina

    1990-01-01

    Numerical, restricted three-body and analytic calculations are used to study the formation and propagation of cylindrically symmetric stellar ring waves in galaxy disks. It is shown that such waves can evolve in a variety of ways, depending on the amplitude of the perturbation and the potential of the target galaxy. Rings can thicken as they propagate outward, remain at a nearly constant width, or be pinched off at large radii. Multiple, closely spaced rings can result from a low-amplitude collision, while an outer ring can appear well-separated from overlapping inner rings or an apparent lens structure in halo-dominated potentials. All the single-encounter rings consist of paired fold caustics. The simple, impulsive, kinematic oscillation equations appear to provide a remarkably accurate model of the numerical simulations. Simple analytic approximations to these equations permit very good estimates of oscillation periods and amplitudes, the evolution of ring widths, and ring birth and propagation characteristics.

  4. An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.

    PubMed

    Bradley, Stuart

    2015-11-20

    Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on the optimization of, new robotic systems. The present paper develops a simple model for estimation of the success rate for hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, provides optimization of design, operating, and risk mitigation costs.
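
    A numerical stand-in for the model's setting: firing points form a grid whose two spacings, normalised by target diameter, are the paper's two dimensionless parameters, and the success rate is the probability that a randomly placed target contains a firing point. This is a Monte Carlo sketch rather than the paper's analytic expressions:

    ```python
    import numpy as np

    def hit_rate(spacing_ratio, step_ratio, n=200_000, seed=0):
        """Monte Carlo success rate for grid-arranged actuator firings hitting
        random unit-diameter targets. spacing_ratio = actuator spacing / target
        diameter (across track); step_ratio = platform travel between firings /
        target diameter (along track)."""
        rng = np.random.default_rng(seed)
        a, b = spacing_ratio, step_ratio
        # By symmetry, place the target centre uniformly in one grid cell and
        # test the distance to the nearest of the surrounding firing points.
        u = rng.uniform(0, a, n)
        v = rng.uniform(0, b, n)
        du = np.minimum(u, a - u)
        dv = np.minimum(v, b - v)
        return np.mean(du**2 + dv**2 <= 0.25)   # within target radius 0.5

    for a, b in [(0.5, 0.5), (1.0, 1.0), (1.5, 1.0)]:
        print(a, b, hit_rate(a, b))
    ```

    The first case returns 1.0, since a cell of side half a diameter always leaves a firing point inside the target; this is the kind of high-success region the paper describes with simple equations.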

  5. Extending the excluded volume for percolation threshold estimates in polydisperse systems: The binary disk system

    DOE PAGES

    Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...

    2017-06-01

    For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of the volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of the percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. The result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
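
    As a rough sketch of the averaged-excluded-area idea (the paper's extension is more careful than this), one can number-average the pairwise excluded areas of a binary disk mixture and apply the approximately invariant critical total excluded area known for 2D disks, about 4.5; the mixture composition below is made up:

    ```python
    import numpy as np

    # Excluded area for two freely overlapping disks of radii r1 and r2:
    # their centres may not approach closer than r1 + r2.
    def excluded_area(r1, r2):
        return np.pi * (r1 + r2) ** 2

    # Binary mixture: radii and number fractions (illustrative values).
    radii = np.array([1.0, 3.0])
    frac = np.array([0.7, 0.3])

    # Number-averaged excluded area over all pair combinations.
    a_ex = sum(frac[i] * frac[j] * excluded_area(radii[i], radii[j])
               for i in range(2) for j in range(2))

    # Treating the monodisperse critical total excluded area (~4.5) as an
    # approximate invariant gives a threshold number density, and from it
    # the covered area fraction of overlapping disks.
    n_c = 4.5 / a_ex
    phi_c = 1.0 - np.exp(-n_c * np.sum(frac * np.pi * radii**2))
    print(f"n_c = {n_c:.4f} per unit area, covered area fraction ~ {phi_c:.3f}")
    ```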

  6. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also compute the response of a horizontally layered anisotropic earth with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a naturally inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully based on synthetic and field data. However, adding 5 % Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within a 95 % confidence interval, the parameter variation over iterations and the frequency distribution of the model parameters to reduce the ambiguity. The result is promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
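
    A minimal global-best particle swarm optimizer of the kind the paper employs. The misfit below is a placeholder that simply recovers a synthetic (rho_h, rho_v, thickness) triple; real use would evaluate the layered anisotropic DC forward response inside it:

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=40, iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer with a global-best topology."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([objective(p) for p in x])
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)          # keep particles inside bounds
            f = np.array([objective(p) for p in x])
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    # Placeholder misfit: recover (rho_h, rho_v, thickness) of a synthetic model.
    true = np.array([50.0, 200.0, 10.0])
    def misfit(m):
        return np.sum((np.log(m) - np.log(true)) ** 2)  # swap in the DC forward model

    bounds = np.array([[1.0, 1000.0], [1.0, 1000.0], [1.0, 100.0]])
    print(pso(misfit, bounds))
    ```

    Repeating the run with different seeds, as the paper's statistical checks suggest, exposes how noise widens the spread of recovered parameters.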

  7. Selecting Statistical Procedures for Quality Control Planning Based on Risk Management.

    PubMed

    Yago, Martín; Alcover, Silvia

    2016-07-01

    According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of their probability of rejecting an analytical run that contains critical-size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)] has been proposed as an alternative QC performance measure because it is more closely related to the current introduction of risk management concepts for QC planning in the clinical laboratory. We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) to the capability of the analytical process, which allow for QC planning based on the risk of harm to a patient due to the report of erroneous results. A QC procedure shows nearly the same Max E(NUF) value when used for controlling analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected for their high PEDC value are also characterized by a low value of Max E(NUF). The PEDC value can be used for estimating the probability of patient harm, allowing for the selection of appropriate QC procedures in QC planning based on risk management. © 2016 American Association for Clinical Chemistry.

  8. On the Application of Euler Deconvolution to the Analytic Signal

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Pasteka, R.

    2005-05-01

    In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing one to consider the structural index (N) an unknown to be solved for, together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that the use of an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and the source position. For the same reason, Keating and Pilkington (2004) proposed the ED of the analytic signal. A function being analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, it appears that a straightforward application of ED to the analytic signal is not possible, because a vertical derivation of this function is not correct using standard potential field analysis tools. In this note we want to theoretically and empirically check what kinds of errors are caused in ED by such a wrong assumption about analytic signal harmonicity. We will discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it with finite differences, by using upward continuation. 2. We found that the errors on the vertical derivative computed as if the analytic signal were harmonic reflect mainly on the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3. Consistent estimates of depth and S.I. are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the strong error in the estimation of the structural index if the analytic signal is treated as a harmonic function.

  9. Analytical method for the fast time-domain reconstruction of fluorescent inclusions in vitro and in vivo.

    PubMed

    Han, Sung-Ho; Farshchi-Heydari, Salman; Hall, David J

    2010-01-20

    A novel time-domain optical method to reconstruct the relative concentration, lifetime, and depth of a fluorescent inclusion is described. We establish an analytical method for the estimation of these parameters for a localized fluorescent object directly from simple evaluations of the continuous wave intensity, the exponential decay, and the temporal position of the maximum of the fluorescence temporal point-spread function. Since the more complex full inversion process is not involved, this method permits robust and fast processing in exploring the properties of a fluorescent inclusion. The method is confirmed by in vitro and in vivo experiments. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  10. Estimation of the curvature of the solid liquid interface during Bridgman crystal growth

    NASA Astrophysics Data System (ADS)

    Barat, Catherine; Duffar, Thierry; Garandet, Jean-Paul

    1998-11-01

    An approximate solution for the solid/liquid interface curvature due to the crucible effect in crystal growth is derived from simple heat flux considerations. The numerical modelling of the problem, carried out with the help of the finite element code FIDAP, supports the predictions of our analytical expression and allows us to identify its range of validity. Experimental interface curvatures, measured in gallium antimonide samples grown by the vertical Bridgman method, are seen to compare satisfactorily with analytical and numerical results. Other literature data are also in fair agreement with the predictions of our models in the case where the amount of heat carried by the crucible is small compared to the overall heat flux.

  11. Burden Calculator: a simple and open analytical tool for estimating the population burden of injuries.

    PubMed

    Bhalla, Kavi; Harrison, James E

    2016-04-01

    Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. The aim was to provide a simple and open-source software tool that allows estimation of incidence-DALYs due to injury, given data on the incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALY estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
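
    The calculator's core arithmetic is the incidence-DALY identity DALY = YLL + YLD. A minimal sketch without discounting or age-weighting (which the tool's parameter sets refine), using made-up numbers for one age/sex/cause stratum:

    ```python
    def dalys(deaths, life_expectancy_at_death,
              cases, disability_weight, mean_duration_years):
        """Incidence-based DALYs = years of life lost (YLL)
        + years lived with disability (YLD)."""
        yll = deaths * life_expectancy_at_death
        yld = cases * disability_weight * mean_duration_years
        return yll + yld

    # Illustrative stratum: one age/sex/external-cause cell of an injury table.
    print(dalys(deaths=120, life_expectancy_at_death=40.0,
                cases=15000, disability_weight=0.1, mean_duration_years=0.5))
    ```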

  12. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.

  13. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models

    PubMed Central

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005

  14. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    PubMed

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.

  15. Theoretical basis to measure the impact of short-lasting control of an infectious disease on the epidemic peak

    PubMed Central

    2011-01-01

    Background: While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods: We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results: Our model estimates that the epidemic peak of the 2009 pandemic was delayed for 21 days due to the summer holiday. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number. The peak delay is shown to depend critically on the fraction of initially immune individuals. Conclusions: The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological surveys as a prior requirement for estimating the time of peak. PMID:21269441
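
    A sketch of the logistic peak-time bookkeeping that such an analysis involves; the expressions below follow the standard logistic solution, and the growth rates, initial fraction, and control duration are illustrative, not the paper's fitted H1N1 values:

    ```python
    import numpy as np

    # Logistic approximation: the cumulative infected fraction c(t) satisfies
    # c(t) = c0 / (c0 + (1 - c0) * exp(-r t)), and incidence peaks when c = 1/2.
    def peak_time(r, c0):
        return np.log((1.0 - c0) / c0) / r

    r_free, r_control = 0.25, 0.15   # per-day growth rates (illustrative)
    c0 = 1e-4                        # initially infected fraction
    t_control = 30.0                 # days of early, temporary control

    # After control ends, growth resumes at r_free from the (smaller) level
    # reached under control; the remaining time to peak is computed from there.
    c_end = c0 / (c0 + (1.0 - c0) * np.exp(-r_control * t_control))
    delay = (t_control + peak_time(r_free, c_end)) - peak_time(r_free, c0)
    print(f"peak delayed by about {delay:.1f} days")
    ```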

  16. Evaluation of trace analyte identification in complex matrices by low-resolution gas chromatography--Mass spectrometry through signal simulation.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-04-01

    The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments of analyte peaks from calibrators with sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possibly non-normal distributions of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of the parameters defines the true positive rate of identification. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability of the compound identification being correct, designated examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produces "Extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow the easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg⁻¹ and 0.23 mg kg⁻¹, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Urinary 24-h creatinine excretion in adults and its use as a simple tool for the estimation of daily urinary analyte excretion from analyte/creatinine ratios in populations.

    PubMed

    Johner, S A; Boeing, H; Thamm, M; Remer, T

    2015-12-01

    The assessment of urinary excretion of specific nutrients (e.g. iodine, sodium) is frequently used to monitor a population's nutrient status. However, when only spot urines are available, there is always a risk of hydration-status-dependent dilution effects and related misinterpretations. The aim of the present study was to establish mean values of 24-h creatinine excretion widely applicable for an appropriate estimation of 24-h excretion rates of analytes from spot urines in adults. Twenty-four-hour creatinine excretion from the formerly representative cross-sectional German VERA Study (n=1463, 20-79 years old) was analysed. Linear regression analysis was performed to identify the most important influencing factors of creatinine excretion. In a subsample of the German DONALD Study (n=176, 20-29 years old), the applicability of the 24-h creatinine excretion values of VERA for the estimation of 24-h sodium and iodine excretion from urinary concentration measurements was tested. In the VERA Study, mean 24-h creatinine excretion was 15.4 mmol per day in men and 11.1 mmol per day in women, significantly dependent on sex, age, body weight and body mass index. Based on the established 24-h creatinine excretion values, mean 24-h iodine and sodium excretions could be estimated from the respective analyte/creatinine ratios, with average deviations <10% compared with the actual 24-h means. The present mean values of 24-h creatinine excretion are suggested as a useful tool to derive realistic hydration-status-independent average 24-h excretion rates from urinary analyte/creatinine ratios. We propose to apply these creatinine reference means routinely in biomarker-based studies aiming at characterizing the nutrient or metabolite status of adult populations by simply measuring metabolite/creatinine ratios in spot urines.
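
    The proposed use of the reference means is simple arithmetic; a sketch using the study's mean 24-h creatinine excretion values and a hypothetical spot-urine measurement:

    ```python
    # Estimate 24-h analyte excretion from a spot-urine analyte/creatinine
    # ratio, scaled by the study's mean 24-h creatinine excretion.
    MEAN_CREATININE_24H = {"male": 15.4, "female": 11.1}  # mmol/day (VERA Study)

    def estimate_24h_excretion(analyte_mmol_l, creatinine_mmol_l, sex):
        ratio = analyte_mmol_l / creatinine_mmol_l     # hydration-independent
        return ratio * MEAN_CREATININE_24H[sex]        # analyte amount per day

    # Hypothetical spot urine: 110 mmol/L sodium, 9 mmol/L creatinine.
    print(estimate_24h_excretion(110.0, 9.0, "male"), "mmol sodium/day")
    ```

    Dividing by creatinine cancels the unknown urine dilution, which is exactly why the ratio approach is robust to hydration status.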

  18. Simplified, inverse, ejector design tool

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1993-01-01

    A simple lumped parameter based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations coupled with the specified design constraints are combined to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back substitution. Initial comparisons with experimental data and analogous one-dimensional methods show good agreement. Thus, this simple inverse design code provides an analytically based preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.

  19. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  20. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
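
    For a circular orbit on a spherical Earth, the fraction of time spent below a given latitude has a standard closed form, which is presumably what the "simple analytical method" exploits; the paper's second, numerical method additionally handles the non-spherical Earth. A sketch:

    ```python
    import numpy as np

    def latitude_band_fraction(inclination_deg, lat_lo_deg, lat_hi_deg):
        """Fraction of a circular orbit of given inclination spent between two
        latitudes, from the change of variable sin(phi) = sin(i) * sin(u):
        the fraction below latitude phi is 1/2 + arcsin(sin(phi)/sin(i)) / pi."""
        i = np.radians(inclination_deg)

        def cdf(lat_deg):
            s = np.clip(np.sin(np.radians(lat_deg)) / np.sin(i), -1.0, 1.0)
            return 0.5 + np.arcsin(s) / np.pi

        return cdf(lat_hi_deg) - cdf(lat_lo_deg)

    # A 51.6 deg inclination orbit concentrates time near its extreme latitudes.
    for band in [(0.0, 10.0), (40.0, 51.6)]:
        print(band, round(latitude_band_fraction(51.6, *band), 4))
    ```

    Combining these residence fractions with the latitude-binned population distribution described above yields the expected casualty exposure.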

  1. Improving Estimation of Ground Casualty Risk from Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, C.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination, and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.

  2. Determination of dimethyltryptamine and β-carbolines (ayahuasca alkaloids) in plasma samples by LC-MS/MS.

    PubMed

    Oliveira, Carolina Dizioli Rodrigues; Okai, Guilherme Gonçalves; da Costa, José Luiz; de Almeida, Rafael Menck; Oliveira-Silva, Diogo; Yonamine, Mauricio

    2012-07-01

    Ayahuasca is a psychoactive plant beverage originally used by indigenous people throughout the Amazon Basin, long before its modern use by syncretic religious groups established in Brazil, the USA and European countries. The objective of this study was to develop a method for the quantification of dimethyltryptamine and β-carbolines in human plasma samples. The analytes were extracted by means of C18 cartridges and injected into the LC-MS/MS system, operated in positive ion mode with multiple reaction monitoring. The LOQs obtained for all analytes were below 0.5 ng/ml. By using weighted least squares linear regression, the accuracy of the analytical method was improved at the lower end of the calibration curve (from 0.5 to 100 ng/ml; r² > 0.98). The method proved to be simple, rapid and useful to estimate administered doses for further pharmacological and toxicological investigations of ayahuasca exposure.
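
    A sketch of the weighting scheme: weighted least squares with 1/x² weights on a hypothetical calibration series, which is what balances relative error and improves accuracy at the low end of the curve:

    ```python
    import numpy as np

    # Hypothetical calibration standards (ng/ml) and instrument responses.
    x = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
    y = np.array([0.011, 0.021, 0.098, 0.205, 0.49, 1.02, 1.98])

    # Weighted least squares with 1/x^2 weights, so the fit is not dominated
    # by the high calibrators and relative error stays balanced.
    w = 1.0 / x**2
    W = np.diag(w)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # [intercept, slope]

    resid = y - X @ beta
    r2 = 1.0 - np.sum(w * resid**2) / np.sum(w * (y - np.average(y, weights=w))**2)
    print(f"intercept={beta[0]:.4f}, slope={beta[1]:.5f}, weighted r^2={r2:.4f}")
    ```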

  3. Analytical expression for Risken-Nummedal-Graham-Haken instability threshold in quantum cascade lasers.

    PubMed

    Vukovic, N; Radovanovic, J; Milanovic, V; Boiko, D L

    2016-11-14

    We have obtained a closed-form expression for the threshold of Risken-Nummedal-Graham-Haken (RNGH) multimode instability in a Fabry-Pérot (FP) cavity quantum cascade laser (QCL). This simple analytical expression is a versatile tool that can easily be applied in practical situations which require analysis of QCL dynamic behavior and estimation of its RNGH multimode instability threshold. Our model for a FP cavity laser accounts for the carrier coherence grating and carrier population grating as well as their relaxation due to carrier diffusion. In the model, the RNGH instability threshold is analyzed using a second-order bi-orthogonal perturbation theory and we confirm our analytical solution by a comparison with the numerical simulations. In particular, the model predicts a low RNGH instability threshold in QCLs. This agrees very well with experimental data available in the literature.

  4. Small field depth dose profile of 6 MV photon beam in a simple air-water heterogeneity combination: A comparison between anisotropic analytical algorithm dose estimation with thermoluminescent dosimeter dose measurement.

    PubMed

    Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin

    2017-01-01

    To establish trends of the estimation error of dose calculated by the anisotropic analytical algorithm (AAA) with respect to dose measured by thermoluminescent dosimeters (TLDs) in an air-water heterogeneity for small field size photon beams. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small field size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size. The estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity was dependent on field size (f) and distance (x) from the solid water-air interface and was formulated as OE = -(0.63f + 9.40)x² + (-2.73f + 58.11)x + (0.06f² - 1.42f + 15.67). At the postcavity point adjacent to the interface and at points distal from it, the OE depends on field size as OE = 0.42f² - 8.17f + 71.63 and OE = 0.84f² - 1.56f + 17.57, respectively. The trend of the estimation error of the AAA dose calculation algorithm with respect to the measured values has been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small field size photon beams generated by a 6 MV linear accelerator.
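
    The fitted trends above are directly computable; a transcription assuming f and x are in cm and OE in percent (units are not restated in the abstract):

    ```python
    def oe_in_cavity(f, x):
        """Overestimation of AAA inside the air cavity, from the fitted trend
        OE = -(0.63 f + 9.40) x^2 + (-2.73 f + 58.11) x
             + (0.06 f^2 - 1.42 f + 15.67)."""
        return (-(0.63 * f + 9.40) * x**2
                + (-2.73 * f + 58.11) * x
                + (0.06 * f**2 - 1.42 * f + 15.67))

    def oe_postcavity_adjacent(f):
        return 0.42 * f**2 - 8.17 * f + 71.63

    def oe_postcavity_distal(f):
        return 0.84 * f**2 - 1.56 * f + 17.57

    # Example: 2 cm field, 1 cm beyond the solid water-air interface.
    print(oe_in_cavity(f=2.0, x=1.0),
          oe_postcavity_adjacent(2.0),
          oe_postcavity_distal(2.0))
    ```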

  5. Inadequacy of internal covariance estimation for super-sample covariance

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Kunz, Martin

    2017-08-01

    We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been previously presented in the literature because it is effectively a rescaling of the covariance of the subsample area. However we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts that allows the bias of each covariance estimator to be exactly predicted. We find significant effects for a small-scale area or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on data itself or a single simulation, leaving analytical prediction and simulations suites as possible SSC predictors.

  6. Universal behaviour of interoccurrence times between losses in financial markets: An analytical description

    NASA Astrophysics Data System (ADS)

    Ludescher, J.; Tsallis, C.; Bunde, A.

    2011-09-01

    We consider 16 representative financial records (stocks, indices, commodities, and exchange rates) and study the distribution P_Q(r) of the interoccurrence times r between daily losses below negative thresholds -Q, for fixed mean interoccurrence time R_Q. We find that in all cases P_Q(r) follows the form P_Q(r) ~ 1/[1 + (q-1)βr]^(1/(q-1)), where β and q are universal constants that depend only on R_Q, but not on a specific asset. While β depends only slightly on R_Q, the q-value increases logarithmically with R_Q, q = 1 + q_0 ln(R_Q/2), such that for R_Q → 2, P_Q(r) approaches a simple exponential, P_Q(r) ≅ 2^(-r). The fact that P_Q does not scale with R_Q is due to the multifractality of the financial markets. The analytic form of P_Q also allows us to estimate both the risk function and the Value-at-Risk, and thus to improve the estimation of the financial risk.
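
    A sketch of the stated functional form. The constants q_0 and β below are illustrative stand-ins for the fitted universal values (the paper reports β as weakly R_Q-dependent), but the code shows how the tail fattens as R_Q grows:

    ```python
    import numpy as np

    def interoccurrence_pdf(r, R_Q, q0=0.168, beta=0.5):
        """q-exponential form P_Q(r) ~ 1 / [1 + (q-1) beta r]^(1/(q-1))
        with q = 1 + q0 * ln(R_Q / 2); q0 and beta are illustrative."""
        q = 1.0 + q0 * np.log(R_Q / 2.0)
        p = (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))
        return p / p.sum()   # normalize over the sampled r values

    r = np.arange(1, 500)
    for R_Q in [2.5, 5.0, 30.0, 70.0]:
        q = 1.0 + 0.168 * np.log(R_Q / 2.0)
        p = interoccurrence_pdf(r, R_Q)
        print(f"R_Q={R_Q:5.1f}  q={q:.3f}  P(r>100)={p[r > 100].sum():.4f}")
    ```

    As R_Q approaches 2 the exponent 1/(q-1) diverges and the distribution collapses to the simple exponential limit quoted in the abstract.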

  7. Determination of Minor and Trace Metals in Aluminum and Aluminum Alloys by ICP-AES; Evaluation of the Uncertainty and Limit of Quantitation from Interlaboratory Testing.

    PubMed

    Uemoto, Michihisa; Makino, Masanori; Ota, Yuji; Sakaguchi, Hiromi; Shimizu, Yukari; Sato, Kazuhiro

    2018-01-01

    Minor and trace metals in aluminum and aluminum alloys have been determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) in an interlaboratory test conducted toward standardization. The trueness of the measured data was investigated using certified reference materials of aluminum, leading to improved analytical protocols. The precision could also be evaluated, making it feasible to estimate the uncertainties separately. The accuracy (trueness and precision) of the data was ultimately in good agreement with the certified values and assigned uncertainties. Repeated measurements of aluminum solutions with different analyte concentrations revealed how the relative standard deviations of the measurements vary with concentration, thus enabling estimation of the limits of quantitation. These limits differed among analytes and were slightly higher in an aluminum matrix than without one. In addition, the upper limit of the detectable concentration of silicon with simple acid digestion was estimated to be 0.03% in mass fraction.
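
    As a hedged illustration of one common limit-of-quantitation convention (assumed here; the interlaboratory protocol may differ): take the LOQ as the concentration at which the relative standard deviation of repeated measurements falls to 10%. All numbers below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical RSD-vs-concentration data; the LOQ is read off where the
    # RSD curve crosses 10% (an assumed convention, for illustration only).
    conc = np.array([0.001, 0.003, 0.01, 0.03, 0.1])   # mass %
    rsd = np.array([45.0, 22.0, 9.5, 4.0, 1.8])        # %

    # Interpolate log(conc) against RSD (reversed so RSD is increasing).
    loq = np.exp(np.interp(10.0, rsd[::-1], np.log(conc)[::-1]))
    print(f"estimated LOQ ~ {loq:.4f} mass %")
    ```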

  8. Analytical estimation on divergence and flutter vibrations of symmetrical three-phase induction stator via field-synchronous coordinates

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Wang, Shiyu; Sun, Wenjia; Xiu, Jie

    2017-01-01

    The electromagnetically induced parametric vibration of a symmetrical three-phase induction stator is examined. While it can be analyzed by approximate analytical or numerical methods, a more accurate and simpler analytical method is desirable. This work proposes a new method based on field-synchronous coordinates. A mechanical-electromagnetic coupling model is developed in this frame such that a time-invariant governing equation with a gyroscopic term can be derived. Using general vibration theory, the eigenvalues, the transition curves between stable and unstable regions, and the response are all determined as closed-form expressions in the basic mechanical-electromagnetic parameters, and the dependence of the instability behaviors on these parameters is demonstrated. The results imply that divergence and flutter instabilities can occur even for symmetrical motors with balanced, constant-amplitude, sinusoidal voltage. To verify the analytical predictions, this work also builds a time-variant model of the same system in the conventional inertial frame. Floquet theory is employed to predict the parametric instability, and numerical integration is used to obtain the parametric response; both compare well with the results obtained in field-synchronous coordinates. The proposed field-synchronous coordinates allow a quick estimation of the electromagnetically induced vibration. The convenience offered by body-fixed coordinates across various fields is discussed.

  9. Cosmological Perturbation Theory and the Spherical Collapse model - I. Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Fosalba, Pablo; Gaztanaga, Enrique

    1998-12-01

    We present a simple and intuitive approximation for solving the perturbation theory (PT) of small cosmic fluctuations. We consider only the spherically symmetric, or monopole, contribution to the PT integrals, which yields the exact result for tree graphs (i.e., at leading order). We find that the non-linear evolution in Lagrangian space is then given by a simple local transformation of the initial conditions, although it is not local in Euler space. This transformation is described by the spherical collapse (SC) dynamics, as it is the exact solution in the shearless (and therefore local) approximation in Lagrangian space. Taking advantage of this property, it is straightforward to derive the one-point cumulants, xi_J, for both the unsmoothed and smoothed density fields to arbitrary order in the perturbative regime. To leading order this reproduces, and provides a simple explanation for, the exact results obtained by Bernardeau. We then show that the SC model leads to accurate estimates for the next corrective terms when compared with the results derived in exact perturbation theory using loop calculations. The agreement is within a few per cent for the hierarchical ratios S_J = xi_J / xi_2^(J-1). We compare our analytic results with N-body simulations, which turn out to be in very good agreement up to scales where sigma ~ 1. A similar treatment is presented to estimate higher-order corrections in the Zel'dovich approximation. These results represent a powerful and readily usable tool for producing analytical predictions of the gravitational clustering of large-scale structure in the weakly non-linear regime.
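
    A quick numerical check of the local-transformation idea (a sketch, not the paper's derivation): applying the leading spherical-collapse coefficient to a Gaussian field reproduces the tree-level skewness S_3 = 34/7 in the small-variance limit.

    ```python
    import numpy as np

    # Local transform delta = delta_L + (17/21) delta_L^2 of a Gaussian
    # field; the leading-order skewness is S3 = 6 * (17/21) = 34/7.
    rng = np.random.default_rng(1)
    sigma = 0.02
    dL = rng.normal(0.0, sigma, size=5_000_000)
    d = dL + (17.0 / 21.0) * dL**2
    d -= d.mean()

    S3 = np.mean(d**3) / np.mean(d**2) ** 2
    print(S3, 34 / 7)   # ~4.86 vs 4.857..., up to Monte Carlo noise
    ```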

  10. Type-curve estimation of statistical heterogeneity

    NASA Astrophysics Data System (ADS)

    Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica

    2004-04-01

    The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.

  11. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  12. Estimating the "impact" of out-of-home placement on child well-being: approaching the problem of selection bias.

    PubMed

    Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David

    2009-01-01

    This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.

  13. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical technique to supplement analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for estimating acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a homogeneous medium enclosed by either rectangular or curved boundaries.

  14. Demodulation of messages received with low signal to noise ratio

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Quignon, T.; Romann, B.

    The implementation of this all-digital demodulator is derived from maximum-likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-lock loops are replaced by minimum-variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. Combined with rigorous control of the data representation, these methods allow significant computation savings compared with conventional realizations. Nominal operation of a QPSK demodulator has been verified down to a signal-to-noise ratio of -3 dB.

  15. Statistical power for nonequivalent pretest-posttest designs. The impact of change-score versus ANCOVA models.

    PubMed

    Oakes, J M; Feldman, H A

    2001-02-01

    Nonequivalent controlled pretest-posttest designs are central to evaluation science, yet no practical and unified approach for estimating power in the two most widely used analytic approaches to these designs exists. This article fills the gap by presenting and comparing useful, unified power formulas for ANCOVA and change-score analyses, indicating the implications of each on sample-size requirements. The authors close with practical recommendations for evaluators. Mathematical details and a simple spreadsheet approach are included in appendices.
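
    For orientation, the textbook two-group approximations (not the article's exact unified formulas) already show the key contrast: the change-score error variance is 2 sigma^2 (1 - rho), while the ANCOVA residual variance is sigma^2 (1 - rho^2), so ANCOVA never requires more subjects when pre- and posttest variances are equal.

    ```python
    from scipy.stats import norm

    # Textbook per-group sample size to detect a group difference `delta`,
    # given pretest-posttest correlation `rho` and outcome SD `sigma`.
    def n_per_group(delta, sigma, rho, model, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        if model == "change":
            err_var = 2 * sigma**2 * (1 - rho)      # Var(post - pre)
        elif model == "ancova":
            err_var = sigma**2 * (1 - rho**2)       # residual variance
        else:
            raise ValueError(model)
        return 2 * z**2 * err_var / delta**2

    for rho in (0.3, 0.5, 0.7):
        print(rho, round(n_per_group(0.4, 1.0, rho, "change")),
              round(n_per_group(0.4, 1.0, rho, "ancova")))
    ```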

  16. Econometric model for age- and population-dependent radiation exposures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandquist, G.M.; Slaughter, D.M.; Rogers, V.C.

    1991-01-01

    The economic impact associated with ionizing radiation exposures in a given human population depends on numerous factors, including the individual's mean economic status as a function of age, the age distribution of the population, the future life expectancy at each age, and the latency period for the occurrence of radiation-induced health effects. A simple mathematical model has been developed that provides an analytical methodology for estimating the societal econometrics associated with radiation effects, so that they can be assessed and compared in economic evaluations.

  17. "Light sail" acceleration reexamined.

    PubMed

    Macchi, Andrea; Veghini, Silvia; Pegoraro, Francesco

    2009-08-21

    The dynamics of the acceleration of ultrathin foil targets by the radiation pressure of superintense, circularly polarized laser pulses is investigated by analytical modeling and particle-in-cell simulations. By addressing self-induced transparency and charge separation effects, it is shown that for "optimal" values of the foil thickness only a thin layer at the rear side is accelerated by radiation pressure. The simple "light sail" model gives a good estimate of the energy per nucleon, but overestimates the conversion efficiency of laser energy into monoenergetic ions.
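
    The ideal "light sail" estimate referred to above has a standard closed form (for a perfectly reflecting mirror; this is the textbook model, not the authors' corrected thin-foil treatment): with dimensionless pulse fluence E = 2F/(sigma c^2), where F is the laser fluence and sigma the areal mass density, the kinetic energy per unit rest mass is E^2/(2(E+1)) and the conversion efficiency is E/(E+1).

    ```python
    # Ideal light-sail estimate (perfect mirror, textbook model).
    M_P_MEV = 938.272  # proton rest energy, MeV

    def energy_per_nucleon(E):
        """Final kinetic energy per nucleon (MeV), dimensionless fluence E."""
        return M_P_MEV * E**2 / (2.0 * (E + 1.0))

    def efficiency(E):
        """Fraction of pulse energy converted to sail kinetic energy."""
        return E / (E + 1.0)

    for E in (0.1, 1.0, 10.0):
        print(E, energy_per_nucleon(E), efficiency(E))
    ```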

  18. Franck-Condon factor formulae for astrophysical and other molecules

    NASA Technical Reports Server (NTRS)

    Nicholls, R. W.

    1981-01-01

    Simple closed-form, approximate, analytic expressions for Franck-Condon factors are given. They provide reliable estimates for Franck-Condon factor arrays for molecular band systems for which only vibrational-frequency, equilibrium internuclear separation and reduced mass values are known, as is often the case for astrophysically interesting molecules such as CeO, CoH, CrH, CrO, CuH, GeH, LaO, NiH, SnH, and ZnH for band systems of which Franck-Condon arrays have been calculated.
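
    For comparison, the simplest textbook model of this kind (not Nicholls' specific expressions) treats the two electronic states as identical displaced harmonic oscillators, for which the 0 -> v Franck-Condon factors form a Poisson distribution in the Huang-Rhys parameter S.

    ```python
    import math

    # Displaced identical harmonic oscillators: |<0|v>|^2 = exp(-S) S^v / v!
    def huang_rhys(mu, omega, d, hbar=1.0):
        """S = mu * omega * d^2 / (2 hbar), with displacement d."""
        return mu * omega * d**2 / (2.0 * hbar)

    def fcf_0v(S, v):
        """Franck-Condon factor for the 0 -> v band."""
        return math.exp(-S) * S**v / math.factorial(v)

    S = huang_rhys(mu=1.0, omega=1.0, d=1.2)
    print([round(fcf_0v(S, v), 4) for v in range(6)])  # sums to ~1
    ```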

  19. Radiative transitions from Rydberg states of lithium atoms in a blackbody radiation environment

    NASA Astrophysics Data System (ADS)

    Glukhov, I. L.; Ovsiannikov, V. D.

    2012-05-01

    The radiative widths induced by blackbody radiation (BBR) were investigated for Rydberg states with principal quantum number up to n = 1000 in S-, P- and D-series of the neutral lithium atom at temperatures T = 100-3000 K. The rates of BBR-induced decays and excitations were compared with the rates of spontaneous decays. Simple analytical approximations are proposed for accurate estimations of the ratio of thermally induced decay (excitation) rates to spontaneous decay rates in wide ranges of states and temperatures.

  20. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains, and such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are used by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity, and leads to a step-by-step procedure.

  1. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
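
    The essence of the replica trick can be checked numerically on a toy model (our illustration, not one of the paper's models): for a lognormal partition function the annealed moments E[Z^n] are known in closed form, and their n -> 0 limit recovers the quenched average E[ln Z].

    ```python
    import numpy as np

    # Toy model: Z = exp(h), h ~ N(mu, s^2), so E[Z^n] = exp(n mu + n^2 s^2/2)
    # and the replica limit (E[Z^n] - 1)/n -> E[ln Z] = mu as n -> 0.
    mu, s = 0.7, 1.3

    def annealed_moment(n):
        return np.exp(n * mu + 0.5 * n**2 * s**2)   # E[Z^n], exact

    for n in (0.5, 0.1, 0.01, 0.001):
        print(n, (annealed_moment(n) - 1.0) / n)    # -> mu = 0.7

    rng = np.random.default_rng(2)
    Z = np.exp(rng.normal(mu, s, 1_000_000))
    print(np.log(Z).mean())                         # quenched average, ~0.7
    ```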

  2. The global frequency-wave number spectrum of oceanic variability estimated from TOPEX/POSEIDON altimetric measurements. Volume 100, No. C12; The Journal of Geophysical Research

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1995-01-01

    Two years of altimetric data from the TOPEX/POSEIDON spacecraft have been used to produce preliminary estimates of the space and time spectra of global variability for both sea surface height and slope. The results are expressed in terms of degree variances from spherical harmonic expansions as well as along-track wavenumbers. Simple analytic approximations, both as piecewise power laws and as Padé fractions, are provided for comparison with independent measurements and for easy use of the results. A number of uses of such spectra exist, including the possibility of combining the altimetric data with other observations, predictions of spatial coherences, and the estimation of the accuracy of apparent secular trends in sea level.

  3. The stationary sine-Gordon equation on metric graphs: Exact analytical solutions for simple topologies

    NASA Astrophysics Data System (ADS)

    Sabirov, K.; Rakhmanov, S.; Matrasulov, D.; Susanto, H.

    2018-04-01

    We consider the stationary sine-Gordon equation on metric graphs with simple topologies. Exact analytical solutions are obtained for different vertex boundary conditions. It is shown that the method can be extended for tree and other simple graph topologies. Applications of the obtained results to branched planar Josephson junctions and Josephson junctions with tricrystal boundaries are discussed.
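
    For a simple sanity check of the stationary equation on a single line (a sketch; the paper's interest is in vertex boundary conditions on graphs), the classic kink phi(x) = 4 arctan(exp(x)) satisfies phi'' = sin(phi).

    ```python
    import numpy as np

    # Verify the kink solution of phi'' = sin(phi) by finite differences.
    x = np.linspace(-5, 5, 2001)
    h = x[1] - x[0]
    phi = 4.0 * np.arctan(np.exp(x))

    phi_xx = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
    residual = phi_xx - np.sin(phi[1:-1])
    print(np.max(np.abs(residual)))   # small; limited by finite differencing
    ```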

  4. A simple analytical method to estimate all exit parameters of a cross-flow air dehumidifier using liquid desiccant.

    PubMed

    Bassuoni, M M

    2014-03-01

    The dehumidifier is a key component in liquid desiccant air-conditioning systems, and analytical solutions have advantages over numerical solutions in studying its performance parameters. This paper presents exit-parameter results from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution was developed using the Engineering Equation Solver software. Good agreement was found between the analytical solution and reliable experimental results, with maximum deviations of +6.63% and -5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on dehumidifier performance. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.

  5. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  6. Flow over Canopies with Complex Morphologies

    NASA Astrophysics Data System (ADS)

    Rubol, S.; Ling, B.; Battiato, I.

    2017-12-01

    Quantifying and predicting how submerged vegetation affects the velocity profile of riverine systems is crucial in ecohydraulics to properly assess the water quality and ecological functions of rivers. The state of the art includes a plethora of models for flow and transport over submerged canopies; however, most are validated against data collected in flume experiments with rigid cylinders. With the objective of investigating the capability of a simple analytical solution for vegetated flow to reproduce and predict the velocity profile of complex-shaped flexible canopies, we use the flow model proposed by Battiato and Rubol [WRR 2013] as the analytical approximation of the mean velocity profile above and within the canopy layer. This model has the advantages that (i) it treats the canopy layer as a porous medium whose geometrical properties are associated with a macroscopic effective permeability, and (ii) it uses input parameters that can be estimated by remote sensing techniques, such as the heights of the water level and the canopy. The analytical expressions for the average velocity profile and the discharge are tested against data collected across a wide range of canopy morphologies commonly encountered in riverine systems, such as grasses, woody vegetation, and bushes. Results indicate good agreement between the analytical expressions and the data for both simple and complex plant geometries. The rescaled low-submergence velocities in the canopy layer followed the same scaling found in arrays of rigid cylinders. In addition, for the dataset analyzed, the Darcy friction factor scaled with the inverse of the bulk Reynolds number multiplied by the ratio of the fluid to turbulent viscosity.

  7. Robust estimation of cerebral hemodynamics in neonates using multilayered diffusion model for normal and oblique incidences

    NASA Astrophysics Data System (ADS)

    Steinberg, Idan; Harbater, Osnat; Gannot, Israel

    2014-07-01

    The diffusion approximation is useful for many optical diagnostic modalities, such as near-infrared spectroscopy. However, the simple normal-incidence, semi-infinite-layer model may prove lacking for estimating deep-tissue optical properties, such as those required for monitoring cerebral hemodynamics, especially in neonates. To answer this need, we present an analytical multilayered, oblique-incidence diffusion model. Initially, the model equations are derived in vector-matrix form to facilitate fast and simple computation. Then, the spatiotemporal reflectance predicted by the model for a complex neonate head is compared with time-resolved Monte Carlo (TRMC) simulations under a wide range of physiologically feasible parameters. The high accuracy of the multilayer model is demonstrated in that the deviation from TRMC simulations is only a few percent even under the toughest conditions. We then turn to solve the inverse problem and estimate the oxygen saturation of deep brain tissues based on the temporal and spatial behaviors of the reflectance. Results indicate that temporal features of the reflectance are more sensitive to deep-layer optical parameters. The resulting estimation is shown to be more accurate and robust than that of the commonly used single-layer diffusion model. Finally, the limitations of such approaches are discussed thoroughly.

  8. Analytical Models of the Transport of Deep-Well Injectate at the North District Wastewater Treatment Plant, Miami-Dade County, Florida, U.S.A

    NASA Astrophysics Data System (ADS)

    King, J. N.; Walsh, V.; Cunningham, K. J.; Evans, F. S.; Langevin, C. D.; Dausman, A.

    2009-12-01

    The Miami-Dade Water and Sewer Department (MDWASD) injects buoyant effluent from the North District Wastewater Treatment Plant (NDWWTP) through four Class I injection wells into the Boulder Zone---a saline (35 parts per thousand) and transmissive (10^5 to 10^6 square meters per day) hydrogeologic unit located approximately 1000 meters below land surface. Miami-Dade County is located in southeast Florida, U.S.A. Portions of the Floridan and Biscayne aquifers are located above the Boulder Zone. The Floridan and Biscayne aquifers---underground sources of drinking water---are protected by U.S. Federal Laws and Regulations, Florida Statutes, and Miami-Dade County ordinances. In 1998, MDWASD began to observe effluent constituents within the Floridan aquifer. Continuous-source and impulse-source analytical models for advective and diffusive transport of effluent are used in the present work to test contaminant flow-path hypotheses, suggest transport mechanisms, and estimate dispersivity. MDWASD collected data in the Floridan aquifer between 1996 and 2007. A parameter estimation code is used to optimize analytical model parameters by fitting model data to collected data. These simple models will be used to develop conceptual and numerical models of effluent transport at the NDWWTP, and in the vicinity of the NDWWTP.

  9. An Approximate Solution to the Equation of Motion for Large-Angle Oscillations of the Simple Pendulum with Initial Velocity

    ERIC Educational Resources Information Center

    Johannessen, Kim

    2010-01-01

    An analytic approximation of the solution to the differential equation describing the oscillations of a simple pendulum at large angles and with initial velocity is discussed. In the derivation, a sinusoidal approximation has been applied, and an analytic formula for the large-angle period of the simple pendulum is obtained, which also includes…
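
    For reference, the exact release-from-rest period is available in closed form through the complete elliptic integral (a standard result, and a useful benchmark for approximations like the one discussed; note that the article's formula additionally covers a nonzero initial velocity).

    ```python
    import numpy as np
    from scipy.special import ellipk

    # Exact large-angle period for release from rest at amplitude theta0:
    # T = (4/omega0) K(m), with m = sin^2(theta0/2) (scipy's parameter
    # convention for the complete elliptic integral K).
    def exact_period(theta0, omega0=1.0):
        m = np.sin(theta0 / 2.0) ** 2
        return 4.0 / omega0 * ellipk(m)

    T0 = 2 * np.pi                     # small-angle period for omega0 = 1
    for deg in (10, 45, 90, 150):
        print(deg, exact_period(np.radians(deg)) / T0)   # 90 deg -> ~1.18
    ```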

  10. Semi-analytical model of cross-borehole flow experiments for fractured medium characterization

    NASA Astrophysics Data System (ADS)

    Roubinet, D.; Irving, J.; Day-Lewis, F. D.

    2014-12-01

    The study of fractured rocks is extremely important in a wide variety of research fields where the fractures and faults can represent either rapid access to some resource of interest or potential pathways for the migration of contaminants in the subsurface. Identification of their presence and determination of their properties are critical and challenging tasks that have led to numerous fracture characterization methods. Among these methods, cross-borehole flowmeter analysis aims to evaluate fracture connections and hydraulic properties from vertical-flow-velocity measurements conducted in one or more observation boreholes under forced hydraulic conditions. Previous studies have demonstrated that analysis of these data can provide important information on fracture connectivity, transmissivity, and storativity. Estimating these properties requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. Quantitative analysis of cross-borehole flowmeter experiments, in particular, requires modeling formulations that: (i) can be adapted to a variety of fracture and experimental configurations; (ii) can take into account interactions between the boreholes because their radii of influence may overlap; and (iii) can be readily cast into an inversion framework that allows for not only the estimation of fracture hydraulic properties, but also an assessment of estimation error. To this end, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. Our model addresses the above needs and provides a flexible and computationally efficient semi-analytical framework having strong potential for future adaptation to more complex configurations. The proposed modeling approach is demonstrated in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as in the context of field-data analysis for fracture connectivity and estimation of corresponding hydraulic properties.

  11. Limited analytical capacity for cyanotoxins in developing countries may hide serious environmental health problems: simple and affordable methods may be the answer.

    PubMed

    Pírez, Macarena; Gonzalez-Sapienza, Gualberto; Sienra, Daniel; Ferrari, Graciela; Last, Michael; Last, Jerold A; Brena, Beatriz M

    2013-01-15

    In recent years, the international demand for commodities has prompted enormous growth in agriculture in most South American countries. Due to intensive use of fertilizers, cyanobacterial blooms have become a recurrent phenomenon throughout the continent, but their potential health risk remains largely unknown due to the lack of analytical capacity. In this paper we report the main results and conclusions of more than five years of systematic monitoring of cyanobacterial blooms at 20 beaches of Montevideo, Uruguay, on the Rio de la Plata, the fifth largest basin in the world. A locally developed microcystin ELISA was used to establish a sustainable monitoring program that revealed seasonal peaks of extremely high toxicity, more than one-thousand-fold greater than the WHO limit for recreational water. Comparison with cyanobacterial cell counts and chlorophyll-a determination, two parameters commonly used for indirect estimation of toxicity, showed that such indicators can be highly misleading. On the other hand, the accumulated experience led to the definition of a simple criterion for visual classification of blooms that can be used by trained lifeguards and technicians to make rapid on-site decisions on beach management. This simple and low-cost approach is broadly applicable to risk assessment and risk management in developing countries.

  12. A simple model for calculating air pollution within street canyons

    NASA Astrophysics Data System (ADS)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentrations inside street canyons with the emission rate, the width of the canyon, a dispersive velocity scale, and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic, and its parameterisations include two dimensionless empirical parameters whose functional forms have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainty than uncertainties in the other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of the results shows good agreement between estimated and observed hourly concentrations (e.g. fractional biases of -10.3% for NOx and +7.8% for NO2). The agreement between estimated and observed values has also been analysed in terms of its dependence on wind speed and direction: the model performs better for wind speeds above 2 m/s than for lower wind speeds, and better for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.

  13. Net growth rate of continuum heterogeneous biofilms with inhibition kinetics.

    PubMed

    Gonzo, Elio Emilio; Wuertz, Stefan; Rajal, Veronica B

    2018-01-01

    Biofilm systems can be modeled using a variety of analytical and numerical approaches, usually by making simplifying assumptions regarding biofilm heterogeneity and activity as well as effective diffusivity. Inhibition kinetics, albeit common in experimental systems, are rarely considered, and analytical approaches are either lacking or assume that the effective diffusivity of the substrate and the biofilm density remain constant. To address this knowledge gap, an analytical procedure to estimate the effectiveness factor (the dimensionless substrate mass flux at the biofilm-fluid interface) was developed for a continuum heterogeneous biofilm with kinetics ranging from multiple limiting-substrate Monod kinetics to different types of inhibition kinetics. The simple perturbation technique, previously validated for quantifying biofilm activity, was applied to systems where either the substrate or the inhibitor is the limiting component, and to cases where the inhibitor is a reaction product or the substrate itself acts as the inhibitor. Explicit analytical equations are presented for estimating the effectiveness factor and, therefore, for calculating the biomass growth rate or the limiting substrate/inhibitor consumption rate for a given biofilm thickness. The robustness of the new biofilm model was tested using kinetic parameters determined experimentally for the growth of Pseudomonas putida CCRC 14365 on phenol. Several additional cases were analyzed, including examples where the effectiveness factor can exceed unity, a behavior characteristic of systems with inhibition kinetics. Criteria establishing when the effectiveness factor can exceed unity in each of the cases studied are also presented.
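
    A minimal numerical counterpart (flat homogeneous biofilm, single substrate; not the paper's continuum heterogeneous model or its perturbation solution) shows how an effectiveness factor greater than unity arises with substrate-inhibition (Haldane) kinetics; all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_bvp

    # Flat-slab diffusion-reaction with Haldane kinetics,
    # r(S) = k*S / (Ks + S + S^2/Ki). The effectiveness factor compares the
    # actual interfacial flux with the rate the whole film would have at the
    # bulk concentration Sb.
    D, L = 1e-9, 200e-6          # diffusivity (m^2/s), film thickness (m)
    k, Ks, Ki, Sb = 5e-3, 0.05, 0.2, 1.0

    def rate(S):
        return k * S / (Ks + S + S**2 / Ki)

    def odes(z, y):              # y[0] = S, y[1] = dS/dz
        return np.vstack([y[1], rate(y[0]) / D])

    def bc(ya, yb):              # no flux at substratum; S = Sb at interface
        return np.array([ya[1], yb[0] - Sb])

    z = np.linspace(0.0, L, 101)
    sol = solve_bvp(odes, bc, z,
                    np.vstack([np.full_like(z, Sb), np.zeros_like(z)]))

    flux = D * sol.sol(L)[1]     # substrate flux into the biofilm
    eta = flux / (L * rate(Sb))  # effectiveness factor
    print(sol.status, eta)       # eta > 1: interior less inhibited than at Sb
    ```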

  14. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. It is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is discussed.

  15. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.

  16. Fabricating Simple Wax Screen-Printing Paper-Based Analytical Devices to Demonstrate the Concept of Limiting Reagent in Acid-Base Reactions

    ERIC Educational Resources Information Center

    Namwong, Pithakpong; Jarujamrus, Purim; Amatatongchai, Maliwan; Chairam, Sanoe

    2018-01-01

    In this article, the low-cost, simple, and rapid fabrication of paper-based analytical devices (PADs) using a wax screen-printing method is reported. The acid-base reaction is implemented in the simple PADs to demonstrate to students the chemistry concept of a limiting reagent. When a fixed concentration of base reacts with a gradually…

  17. Second-harmonic diffraction from holographic volume grating.

    PubMed

    Nee, Tsu-Wei

    2006-10-01

    The full polarization properties of holographic volume-grating-enhanced second-harmonic diffraction (SHD) are investigated theoretically. The nonlinear coefficient is derived from a simple atomic model of the material. Using a simple volume-grating model, the SHD fields and Mueller matrices are first derived, and the SHD phase-mismatching effect for a thick sample is investigated analytically. The theory is validated by fitting published experimental SHD data from thin-film samples. The SHD of an existing 2-mm-thick polymethyl methacrylate (PMMA) holographic volume-grating sample is investigated. This sample has two strong-coupling linear diffraction peaks and five SHD peaks; the splitting of the SHD peaks is due to the phase-mismatching effect. The detector sensitivity and laser power needed to measure these peak signals are estimated quantitatively.

  18. A Simple Simulation Technique for Nonnormal Data with Prespecified Skewness, Kurtosis, and Covariance Matrix.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2016-01-01

    We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.
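
    A simplified sketch of the IG idea (the authors' method additionally calibrates the generator moments so that the marginal skewness and kurtosis hit prespecified targets): mixing independent nonnormal generators through a Cholesky factor yields exactly the target covariance with a non-Gaussian copula.

    ```python
    import numpy as np

    # Independent-generator mixing: X = Z @ A.T, with A the Cholesky factor
    # of the target covariance and Z standardized nonnormal generators.
    rng = np.random.default_rng(3)
    target_cov = np.array([[1.0, 0.5],
                           [0.5, 2.0]])
    A = np.linalg.cholesky(target_cov)

    # Standardized, skewed generators (shifted exponentials: mean 0, var 1).
    Z = rng.exponential(1.0, size=(100_000, 2)) - 1.0

    X = Z @ A.T
    print(np.cov(X, rowvar=False))   # ~ target_cov
    # Marginal cumulants of X follow from the generator cumulants scaled by
    # powers of A's rows, e.g. skewness_i = sum_j A[i,j]**3 * gamma_j / sd_i**3.
    ```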

  19. Erratum: A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex

    2011-01-01

    The following describes a list of errata in our paper, "A simple, analytical model of collisionless magnetic reconnection in a pair plasma." It supersedes an earlier erratum. We recently discovered an error in the derivation of the outflow-to-inflow density ratio.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prince, K.R.; Schneider, B.J.

    This study obtained estimates of the hydraulic properties of the upper glacial and Magothy aquifers in the East Meadow area for use in analyzing the movement of reclaimed wastewater through the aquifer system. This report presents drawdown and recovery data from the two aquifer tests of 1978 and 1985, describes the six methods of analysis used, and summarizes the results of the analyses in tables and graphs. The drawdown and recovery data were analyzed with three simple analytical equations, two curve-matching techniques, and a finite-element radial-flow model. The resulting estimates of hydraulic conductivity, anisotropy, and storage characteristics were used as initial input values to the finite-element radial-flow model (Reilly, 1984). The flow model was then used to refine the estimates of the aquifer properties by more accurately representing the aquifer geometry and field conditions of the pumping tests.

  2. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
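
    The basic smoothing operation is simple to demonstrate (an illustration of frequency averaging only, not the paper's bias corrections or ogive machinery): averaging the periodogram over n_avg contiguous frequencies reduces the relative random error by roughly 1/sqrt(n_avg).

    ```python
    import numpy as np

    # Smoothed spectrum of a white-noise test series by block-averaging the
    # raw periodogram over n_avg contiguous frequencies.
    rng = np.random.default_rng(4)
    x = rng.normal(size=2**14)
    n = x.size

    pxx = np.abs(np.fft.rfft(x))**2 / n          # raw periodogram
    f = np.fft.rfftfreq(n, d=1.0)

    n_avg = 16
    m = (len(pxx) - 1) // n_avg                  # drop the zero frequency
    smooth = pxx[1:1 + m * n_avg].reshape(m, n_avg).mean(axis=1)
    f_smooth = f[1:1 + m * n_avg].reshape(m, n_avg).mean(axis=1)

    print(pxx[1:].std() / pxx[1:].mean())        # ~1 for a raw periodogram
    print(smooth.std() / smooth.mean())          # ~1/sqrt(16) = 0.25
    ```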

  3. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. The practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.

  4. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the estimation approach to a simple, accurately modeled system, its effectiveness and accuracy can be evaluated. The same experimental setup can then be used with fluid-filled tanks to further evaluate the effectiveness of the process. Ultimately, the proven process can be applied to the full-sized spinning experimental setup to quickly and accurately determine the slosh model parameters for a particular spacecraft mission. Automating the parameter identification process will save time, allow more changes to be made to proposed designs, and lower the cost in the initial design stages.
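
    A toy version of the proposed identification loop (in Python rather than MATLAB/SimMechanics; all names and values illustrative) fits a pendulum's frequency and damping by least squares against simulated noisy measurements; a reasonable starting guess is needed because the residual is oscillatory in the frequency parameter.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def simulate(omega, zeta, t):
        """Damped pendulum response from a 0.5 rad release."""
        def rhs(t, y):
            th, w = y
            return [w, -2 * zeta * omega * w - omega**2 * np.sin(th)]
        return solve_ivp(rhs, (t[0], t[-1]), [0.5, 0.0],
                         t_eval=t, rtol=1e-8).y[0]

    t = np.linspace(0.0, 8.0, 321)
    rng = np.random.default_rng(5)
    measured = simulate(3.0, 0.05, t) + rng.normal(0.0, 0.005, t.size)

    fit = least_squares(lambda p: simulate(p[0], p[1], t) - measured,
                        x0=[2.8, 0.1], bounds=([0.1, 0.0], [10.0, 1.0]))
    print(fit.x)  # recovers roughly [3.0, 0.05]
    ```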

  5. Estimated Benefits of Variable-Geometry Wing Camber Control for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Bolonkin, Alexander; Gilyard, Glenn B.

    1999-01-01

    Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for the control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope. The optimization approach developed and illustrated for flight uses variable camber to optimize aerodynamic efficiency (maximizing the lift-to-drag ratio); most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed from analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection for given conditions is obtained by simple interpolation; an algorithm for this computation is also presented. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can exceed 10 percent, especially for nonstandard flight conditions; in the cruise regime, the benefit is 1-3 percent.

  6. CO2 storage capacity estimates from fluid dynamics (Invited)

    NASA Astrophysics Data System (ADS)

    Juanes, R.; MacMinn, C. W.; Szulczewski, M.

    2009-12-01

    We study a sharp-interface mathematical model for the post-injection migration of a plume of CO2 in a deep saline aquifer under the influence of natural groundwater flow, aquifer slope, gravity override, and capillary trapping. The model leads to a nonlinear advection-diffusion equation, where the diffusive term describes the upward spreading of the CO2 against the caprock. We find that the advective terms dominate the flow dynamics even for moderate gravity override. We solve the model analytically in the hyperbolic limit, accounting rigorously for the injection period by using the true end-of-injection plume shape as an initial condition. We extend the model by incorporating the effect of CO2 dissolution into the brine, which, we find, is dominated by convective mixing; this mechanism enters the model as a nonlinear sink term. From a linear stability analysis, we propose a simple estimate of the convective dissolution flux. We then obtain semi-analytic estimates of the maximum plume migration distance and the migration time for complete trapping. Our analytical model can be used to estimate the storage capacity (from capillary and dissolution trapping) at the geologic-basin scale, and we apply the model to various target formations in the United States. [Figure: schematic of CO2 plume migration at the geologic basin scale. During injection the plume is subject to gravity override; after injection it migrates updip, driven also by regional groundwater flow, leaving residual CO2 behind through capillary trapping while dissolving into the brine through convective mixing at its base. These two mechanisms shrink the mobile plume, and the analytical model predicts its migration distance and the time to complete trapping.]

  7. Unsteady Aerodynamic Force Sensing from Strain Data

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2017-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using a two-step approach. Velocities and accelerations of the structure are then computed using an autoregressive moving average model, an on-line parameter estimator, a low-pass filter, and a least-squares curve fitting method, together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.

  8. Multi-hole pressure probes to wind tunnel experiments and air data systems

    NASA Astrophysics Data System (ADS)

    Shevchenko, A. M.; Shmakov, A. S.

    2017-10-01

    The problems of developing a multihole pressure system to measure flow angularity, Mach number, and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for a multihole spherical pressure probe. The proposed model is uniform across small subsonic and supersonic speeds. An error analysis was performed, and error functions were obtained that allow estimation of the influence of the Mach number, the pitch angle, and the location of the pressure ports on the uncertainty of the determined flow parameters.

  9. Design of Circular, Square, Single, and Multi-layer Induction Coils for Electromagnetic Priming Using Inductance Estimates

    NASA Astrophysics Data System (ADS)

    Fritzsch, Robert; Kennedy, Mark W.; Aune, Ragnhild E.

    2018-02-01

    Special induction coils used for electromagnetic priming of ceramic foam filters in liquid metal filtration have been designed using a combination of analytical and finite element modeling. Relatively simple empirical equations published by Wheeler in 1928 and 1982 were used during the design process. The equations were found to accurately predict the z-component of the magnetic flux densities of both single- and multi-layer coils, as verified both experimentally and with COMSOL® 5.1 multiphysics simulations.
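
    For reference, a sketch of Wheeler's 1928 empirical formula for a multi-layer air-core coil, presumably of the kind used in this design process; dimensions are converted from metres to the inches the original formula assumes, and the example coil is hypothetical.

      # Wheeler (1928) multi-layer air-core coil inductance estimate.
      # a = mean winding radius, b = winding length, c = radial winding depth,
      # all in metres; returns inductance in henries.
      def wheeler_multilayer_inductance(n_turns, a, b, c):
          inch = 0.0254
          a_in, b_in, c_in = a / inch, b / inch, c / inch
          L_uH = 0.8 * (a_in ** 2) * (n_turns ** 2) / (6 * a_in + 9 * b_in + 10 * c_in)
          return L_uH * 1e-6

      # e.g. a 60-turn coil, 50 mm mean radius, 40 mm long, 10 mm winding depth
      print(wheeler_multilayer_inductance(60, 0.050, 0.040, 0.010))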

  10. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.

  11. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
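
    A minimal numeric sketch of the core effect for the one-parameter, two-observation case analyzed above: the true variance of a weighted least-squares estimate when the weight matrix includes versus omits the error correlation. The sensitivities and the value of rho below are arbitrary illustrative choices.

      import numpy as np

      rho, sigma = 0.8, 1.0
      X = np.array([[1.0], [0.4]])                        # two observations, one parameter
      C = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])   # true error covariance

      def param_variance(weight_cov):
          # GLS estimator b = (X^T W X)^{-1} X^T W y with W = weight_cov^{-1};
          # its true variance is A C A^T, where A is the estimator's linear map.
          W = np.linalg.inv(weight_cov)
          A = np.linalg.inv(X.T @ W @ X) @ X.T @ W
          return (A @ C @ A.T).item()

      var_full = param_variance(C)                        # correlations included
      var_diag = param_variance(np.diag(np.diag(C)))      # correlations omitted
      print(var_full, var_diag)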

  12. Analytical performance evaluation of SAR ATR with inaccurate or estimated models

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.

    2004-09-01

    Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.

  13. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  14. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u(-1) to 450 MeV u(-1) or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  15. Manipulation of the polarization of intense laser beams via optical wave mixing in plasmas

    NASA Astrophysics Data System (ADS)

    Michel, Pierre; Divol, Laurent; Turnbull, David; Moody, John

    2014-10-01

    When intense laser beams overlap in plasmas, the refractive index modulation created by the beat wave via the ponderomotive force can lead to optical wave mixing phenomena reminiscent of those used in crystals and photorefractive materials. Using a vector analysis, we present a full analytical description of the modification of the polarization state of laser beams crossing at arbitrary angles in a plasma. We show that plasmas can be used to provide full control of the polarization state of a laser beam, and give simple analytical estimates and practical considerations for the design of novel photonics devices such as plasma polarizers and plasma waveplates. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  16. Evaluation of generalized degrees of freedom for sparse estimation by replica method

    NASA Astrophysics Data System (ADS)

    Sakata, A.

    2016-12-01

    We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.
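
    As a small empirical companion to the replica result, the sketch below compares the naive GDF proxy for the lasso (the number of non-zero coefficients) with a Monte Carlo divergence estimate df = sum_i d(muhat_i)/dy_i. The data, regularization strength, and estimator details are illustrative choices, not the paper's setup.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n, p, sigma = 200, 50, 1.0
      X = rng.standard_normal((n, p))
      beta = np.zeros(p); beta[:5] = 2.0
      y = X @ beta + sigma * rng.standard_normal(n)

      def fit_mu(y_vec, alpha=0.1):
          m = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000).fit(X, y_vec)
          return X @ m.coef_, np.count_nonzero(m.coef_)

      mu0, n_nonzero = fit_mu(y)
      # Randomized finite-difference estimate of the divergence (the GDF)
      eps, trials, acc = 1e-3, 50, 0.0
      for _ in range(trials):
          z = rng.standard_normal(n)
          mu1, _ = fit_mu(y + eps * z)
          acc += z @ (mu1 - mu0) / eps
      print("non-zeros:", n_nonzero, "MC divergence df:", acc / trials)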

  17. A simple analytical method to estimate all exit parameters of a cross-flow air dehumidifier using liquid desiccant

    PubMed Central

    Bassuoni, M.M.

    2013-01-01

    The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter results from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution was developed using the Engineering Equation Solver software. Good agreement has been found between the analytical solution and reliable experimental results, with maximum deviations of +6.63% and −5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters of the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio. PMID:25685485

  18. Evaluation of volatile organic emissions from hazardous waste incinerators.

    PubMed Central

    Sedman, R M; Esparza, J R

    1991-01-01

    Conventional methods of risk assessment typically employed to evaluate the impact of hazardous waste incinerators on public health must rely on somewhat speculative emissions estimates or on complicated and expensive sampling and analytical methods. The limited amount of toxicological information concerning many of the compounds detected in stack emissions also complicates the evaluation of the public health impacts of these facilities. An alternative approach aimed at evaluating the public health impacts associated with volatile organic stack emissions is presented that relies on a screening criterion to evaluate total stack hydrocarbon emissions. If the concentration of hydrocarbons in ambient air is below the screening criterion, volatile emissions from the incinerator are judged not to pose a significant threat to public health. Both the screening criterion and a conventional method of risk assessment were employed to evaluate the emissions from 20 incinerators. Use of the screening criterion always yielded a substantially greater estimate of risk than that derived by the conventional method. Since the use of the screening criterion always yielded estimates of risk that were greater than that determined by conventional methods and measuring total hydrocarbon emissions is a relatively simple analytical procedure, the use of the screening criterion would appear to facilitate the evaluation of operating hazardous waste incinerators. PMID:1954928

  19. Network meta-analysis, electrical networks and graph theory.

    PubMed

    Rücker, Gerta

    2012-12-01

    Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
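
    A compact numeric sketch of the estimator described: treatments are nodes, comparisons are edges with inverse-variance weights (conductances), and consistent effects follow from the Moore-Penrose pseudoinverse of the weighted Laplacian. The tiny three-treatment network and its data below are made up.

      import numpy as np

      edges = [(0, 1), (1, 2), (0, 2)]      # comparisons A-B, B-C, A-C
      d = np.array([0.5, 0.3, 0.9])         # observed effects (inconsistent: 0.5+0.3 != 0.9)
      v = np.array([0.04, 0.09, 0.04])      # their variances

      n = 3
      B = np.zeros((len(edges), n))         # edge-node incidence matrix
      for k, (i, j) in enumerate(edges):
          B[k, i], B[k, j] = -1.0, 1.0
      W = np.diag(1.0 / v)                  # inverse-variance weights ~ conductances

      Lap = B.T @ W @ B                     # weighted graph Laplacian
      Lplus = np.linalg.pinv(Lap)           # Moore-Penrose pseudoinverse
      p = Lplus @ B.T @ W @ d               # node "potentials" (relative effects)
      d_consistent = B @ p                  # consistent edge effects
      # variance of the A-vs-C contrast = effective resistance between nodes 0 and 2
      e = np.zeros(n); e[0], e[2] = 1.0, -1.0
      print(d_consistent, e @ Lplus @ e)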

  20. Determination of the microenvironment-pH and charge and size characteristics of amino acids through their electrophoretic mobilities determined by CZE.

    PubMed

    Piaggio, Maria V; Peirotti, Marta B; Deiber, Julio A

    2007-10-01

    Effective electrophoretic mobility data of 20 amino acids reported in the literature are analyzed and interpreted through simple physicochemical models, which are able to provide estimates of coupled quantities like hydrodynamic shape factor, equivalent hydrodynamic radius (size), net charge, actual pK values of ionizing groups, partial charges of ionizing groups, hydration number, and pH near molecule (microenvironment-pH of the BGE). It is concluded that the modeling of the electrophoretic mobility of these analytes requires a careful consideration of hydrodynamic shape coupled to hydration. In the low range of pH studied here, distinctive hydrodynamic behaviors of amino acids are found. For instance, amino acids with basic polar and ionizing side chain remain with prolate shape for pH values varying from 1.99 to 3.2. It is evident that as the pH increases from low values, amino acids get higher hydrations as a consequence each analyte total charge also increases. This result is consistent with the monotonic increase of the hydrodynamic radius, which accounts for both the analyte and the quite immobilized water molecules defining the electrophoretic kinematical unit. It is also found that the actual or effective pK value of the alpha-carboxylic ionizing group of amino acids increases when the pH is changed from 1.99 to 3.2. Several limitations concerning the simple modeling of the electrophoretic mobility of amino acids are presented for further research.

  1. Robust electroencephalogram phase estimation with applications in brain-computer interface systems.

    PubMed

    Seraj, Esmaeil; Sameni, Reza

    2017-03-01

    In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side-effects of the methods used for EEG phase calculation, especially during low-analytical-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest neighbors and random forest classifiers on a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and by 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
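
    A minimal sketch of the ensemble idea, assuming a Butterworth band-pass stands in for the paper's narrow-band filters: each ensemble member perturbs the band edges slightly, the analytic signal is taken via the Hilbert transform, and unit phasors are averaged to obtain a robust phase. Band, perturbation size, and test signal are illustrative.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs, f0, bw = 250.0, 10.0, 2.0
      rng = np.random.default_rng(0)
      t = np.arange(0, 4, 1 / fs)
      eeg = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

      phasors = []
      for _ in range(50):
          lo = f0 - bw / 2 + 0.2 * rng.standard_normal()   # perturbed band edges
          hi = f0 + bw / 2 + 0.2 * rng.standard_normal()
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          z = hilbert(filtfilt(b, a, eeg))                 # analytic signal
          phasors.append(z / np.maximum(np.abs(z), 1e-12)) # unit phasors
      phase = np.angle(np.mean(phasors, axis=0))           # robust ensemble phase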

  2. Superhydrophobic Analyte Concentration Utilizing Colloid-Pillar Array SERS Substrates

    DOE PAGES

    Wallace, Ryan A.; Charlton, Jennifer J.; Kirchner, Teresa B.; ...

    2014-11-04

    Detecting the few molecules of a trace component present in a large sample is important in medicinal and environmental analysis. Surface enhanced Raman spectroscopy (SERS) is a technique that can be utilized to detect molecules at very low absolute numbers. However, detection at trace concentration levels in real samples requires properly designed delivery and detection systems. The present work involves superhydrophobic surfaces that include silicon pillar arrays formed by lithographic and dewetting protocols. In order to generate the necessary plasmonic substrate for SERS detection, a simple and flow-stable Ag colloid was added to the functionalized pillar array system via soaking. The pillars are used native and with hydrophobic modification, and provide a means to concentrate analyte via superhydrophobic droplet evaporation effects. A 100-fold concentration of analyte was estimated, with a limit of detection of 2.9 × 10⁻¹² M for mitoxantrone dihydrochloride. Additionally, analytes were delivered to the surface via a multiplex approach in order to demonstrate an ability to control droplet size and placement for scaled-up use in real-world applications. Finally, a concentration process involving transport and sequestration based on surface-treatment-selective wicking is demonstrated.

  3. Superhydrophobic Analyte Concentration Utilizing Colloid-Pillar Array SERS Substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Ryan A.; Charlton, Jennifer J.; Kirchner, Teresa B.

    Detecting the few molecules of a trace component present in a large sample is important in medicinal and environmental analysis. Surface enhanced Raman spectroscopy (SERS) is a technique that can be utilized to detect molecules at very low absolute numbers. However, detection at trace concentration levels in real samples requires properly designed delivery and detection systems. The present work involves superhydrophobic surfaces that include silicon pillar arrays formed by lithographic and dewetting protocols. In order to generate the necessary plasmonic substrate for SERS detection, a simple and flow-stable Ag colloid was added to the functionalized pillar array system via soaking. The pillars are used native and with hydrophobic modification, and provide a means to concentrate analyte via superhydrophobic droplet evaporation effects. A 100-fold concentration of analyte was estimated, with a limit of detection of 2.9 × 10⁻¹² M for mitoxantrone dihydrochloride. Additionally, analytes were delivered to the surface via a multiplex approach in order to demonstrate an ability to control droplet size and placement for scaled-up use in real-world applications. Finally, a concentration process involving transport and sequestration based on surface-treatment-selective wicking is demonstrated.

  4. Understanding Business Analytics

    DTIC Science & Technology

    2015-01-05

    Analytics have been used in organizations for a variety of reasons for quite some time, ranging from the simple (generating and understanding business analytics) ... How well these two components are orchestrated will determine the level of success an organization has in ...

  5. Estimation of a simple agent-based model of financial markets: An application to Australian stock and foreign exchange data

    NASA Astrophysics Data System (ADS)

    Alfarano, Simone; Lux, Thomas; Wagner, Friedrich

    2006-10-01

    Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
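
    A minimal simulation of the underlying asymmetric herding (Kirman "ant") process, with illustrative parameter values; the mapping of the group fraction to asset returns and the maximum-likelihood estimation itself are beyond this sketch.

      import numpy as np

      rng = np.random.default_rng(2)
      N, a1, a2, b, dt = 100, 0.002, 0.001, 0.02, 0.01   # asymmetric rates, herding b
      k = N // 2                      # agents currently in group 1 (e.g. chartists)
      steps = 200_000
      path = np.empty(steps)
      for s in range(steps):
          r_up = (N - k) * (a1 + b * k / (N - 1))        # a group-2 agent converts
          r_down = k * (a2 + b * (N - k) / (N - 1))      # a group-1 agent converts
          u = rng.random()
          if u < r_up * dt:
              k += 1
          elif u < (r_up + r_down) * dt:
              k -= 1
          path[s] = k / N
      # For small a/b the fraction lingers near 0 and 1: bubble and burst phases
      print(f"fraction in group 1: min {path.min():.2f}, max {path.max():.2f}")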

  6. Simultaneous pre-concentration and separation on simple paper-based analytical device for protein analysis.

    PubMed

    Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong

    2018-02-01

    In this work, fast isoelectric focusing (IEF) was successfully implemented on an open paper fluidic channel for simultaneous concentration and separation of proteins from a complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and a concentration factor of 10, as estimated with colored model proteins by smartphone-based colorimetric detection. Fast detection of albumin from human serum and glycated hemoglobin (HbA1c) from blood cells was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This PAD IEF is potentially useful either for point-of-care testing (POCT) or for biomarker analysis as a cost-effective sample pretreatment method.

  7. Assessment of the Performance of a Dual-Frequency Surface Reference Technique

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Liao, Liang; Tanelli, Simone; Durden, Stephen

    2013-01-01

    The high correlation of the rain-free surface cross sections at two frequencies implies that the estimate of differential path integrated attenuation (PIA) caused by precipitation along the radar beam can be obtained to a higher degree of accuracy than the path-attenuation at either frequency. We explore this finding first analytically and then by examining data from the JPL dual-frequency airborne radar using measurements from the TC4 experiment obtained during July-August 2007. Despite this improvement in the accuracy of the differential path attenuation, solving the constrained dual-wavelength radar equations for parameters of the particle size distribution requires not only this quantity but the single-wavelength path attenuation as well. We investigate a simple method of estimating the single-frequency path attenuation from the differential attenuation and compare this with the estimate derived directly from the surface return.

  8. Enantioselective supercritical fluid chromatography-tandem mass spectrometry method for simultaneous estimation of risperidone and its 9-hydroxyl metabolites in rat plasma.

    PubMed

    Prasad, Thatipamula R; Joseph, Siji; Kole, Prashant; Kumar, Anoop; Subramanian, Murali; Rajagopalan, Sudha; Kr, Prabhakar

    2017-11-01

    The objective of the current work was to develop a 'green chemistry'-compliant, selective and sensitive supercritical fluid chromatography-tandem mass spectrometry method for simultaneous estimation of risperidone (RIS) and its chiral metabolites in rat plasma. Methodology & results: An Agilent 1260 Infinity analytical supercritical fluid chromatography system resolved RIS and its chiral metabolites within a runtime of 6 min using a gradient chromatography method. Using simple protein precipitation for sample preparation followed by mass spectrometric detection, a sensitivity of 0.92 nM (lower limit of quantification) was achieved. With linearity over four log units (0.91-7500 nM), the method was found to be selective, accurate, precise and robust. The method was validated and successfully applied to the simultaneous estimation of RIS and its 9-hydroxyrisperidone metabolites (R and S individually) after intravenous and per oral administration to rats.

  9. An information measure for class discrimination [in remote sensing of crop observation]

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination, based on the Fisher information for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure does not depend on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not generally have a simple analytic form, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
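
    The measure itself is easy to evaluate numerically: for a two-class mixture f = pi*f1 + (1-pi)*f2, the Fisher information for the mixing proportion is I(pi) = ∫ (f1 - f2)² / f dx, and 1/(n·I) bounds the variance of any unbiased proportion estimate. The Gaussian class densities below are purely illustrative.

      import numpy as np
      from scipy.stats import norm

      x = np.linspace(-10, 12, 20001)
      dx = x[1] - x[0]
      f1, f2 = norm.pdf(x, 0, 1), norm.pdf(x, 2, 1)   # illustrative class densities

      def mixing_fisher_info(pi):
          fmix = pi * f1 + (1 - pi) * f2
          return np.sum((f1 - f2) ** 2 / np.maximum(fmix, 1e-300)) * dx

      I = mixing_fisher_info(0.3)
      print(I, "CRLB for n = 100 observations:", 1 / (100 * I))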

  10. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The aim of this study was to develop a fast, patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  11. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
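
    A toy version of the Monte Carlo approach sketched above, with invented inputs (not the paper's foodborne-illness figures): each uncertain input gets a judgment-based distribution, the calculation is repeated many times, and percentiles summarize the resulting uncertainty.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      cases_reported = rng.normal(1.0e5, 0.05e5, n)        # reported case count
      underreport = rng.triangular(2.0, 10.0, 38.0, n)     # under-reporting multiplier
      attribution = rng.uniform(0.3, 0.7, n)               # fraction attributable
      incidence = cases_reported * underreport * attribution
      lo, mid, hi = np.percentile(incidence, [2.5, 50, 97.5])
      print(f"median {mid:.3g}, 95% interval ({lo:.3g}, {hi:.3g})")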

  12. Estimation of ion competition via correlated responsivity offset in linear ion trap mass spectrometry analysis: theory and practical use in the analysis of cyanobacterial hepatotoxin microcystin-LR in extracts of food additives.

    PubMed

    Urban, Jan; Hrouzek, Pavel; Stys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of the a priori unknown responsivity and is easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values; however, some theoretical issues are discussed to avoid misinterpretations and excessive expectations.

  13. Estimation of Ion Competition via Correlated Responsivity Offset in Linear Ion Trap Mass Spectrometry Analysis: Theory and Practical Use in the Analysis of Cyanobacterial Hepatotoxin Microcystin-LR in Extracts of Food Additives

    PubMed Central

    Hrouzek, Pavel; Štys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of the a priori unknown responsivity and is easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values; however, some theoretical issues are discussed to avoid misinterpretations and excessive expectations. PMID:23586036

  14. WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL

    EPA Science Inventory

    The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. The WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...

  15. Uncertainty in temperature-based determination of time of death

    NASA Astrophysics Data System (ADS)

    Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan

    2018-03-01

    Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model, that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that and prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.
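
    For the "simple phenomenological model" end of the spectrum mentioned above, the sketch below inverts Henssge's well-known double-exponential cooling model by bisection; the corrective factor is set to 1, the inputs are illustrative, and this is offered for orientation only, not as the paper's mechanistic heat-transfer model.

      from math import exp

      def henssge_Q(t_h, mass_kg):
          # Henssge double-exponential, roughly valid for ambient <= 23 C
          B = -1.2815 * mass_kg ** -0.625 + 0.0284      # 1/h, corrective factor = 1
          return 1.25 * exp(B * t_h) - 0.25 * exp(5 * B * t_h)

      def time_of_death(T_rectal, T_ambient, mass_kg, T0=37.2):
          Q_meas = (T_rectal - T_ambient) / (T0 - T_ambient)
          lo, hi = 0.0, 72.0                            # search window in hours
          for _ in range(60):                           # Q is monotone decreasing in t
              mid = 0.5 * (lo + hi)
              if henssge_Q(mid, mass_kg) > Q_meas:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      print(time_of_death(T_rectal=30.0, T_ambient=18.0, mass_kg=75.0))  # ~12 h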

  16. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates

    USGS Publications Warehouse

    Kostich, Mitchell S.; Flick, Robert W.; Batt, Angela L.; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.

    2017-01-01

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes is warranted.

  17. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates.

    PubMed

    Kostich, Mitchell S; Flick, Robert W; Batt, Angela L; Mash, Heath E; Boone, J Scott; Furlong, Edward T; Kolpin, Dana W; Glassmeyer, Susan T

    2017-02-01

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes is warranted. Published by Elsevier B.V.

  18. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose high-precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally, an illustrative example is presented to demonstrate, from actual use, the promise of the proposed application of emulation.

  19. Analytical model of diffuse reflectance spectrum of skin tissue

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.

    2014-01-01

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions.

  20. Shielding Characteristics Using an Ultrasonic Configurable Fan Artificial Noise Source to Generate Modes - Experimental Measurements and Analytical Predictions

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.; Walker, Bruce E.

    2014-01-01

    An Ultrasonic Configurable Fan Artificial Noise Source (UCFANS) was designed, built, and tested in support of the NASA Langley Research Center's 14x22 wind tunnel test of the Hybrid Wing Body (HWB) full 3-D 5.8% scale model. The UCFANS is a 5.8% rapid prototype scale model of a high-bypass turbofan engine that can generate the tonal signature of proposed engines using artificial sources (no flow). The purpose of the program was to provide an estimate of the acoustic shielding benefits possible from mounting an engine on the upper surface of a wing; a flat plate model was used as the shielding surface. Simple analytical simulations were used to preview the radiation patterns - Fresnel knife-edge diffraction was coupled with a dense phased array of point sources to compute shielded and unshielded sound pressure distributions for potential test geometries and excitation modes. Contour plots of sound pressure levels, and integrated power levels, from nacelle alone and shielded configurations for both the experimental measurements and the analytical predictions are presented in this paper.
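
    A minimal sketch of the Fresnel knife-edge ingredient of those preview simulations: the classical single-edge diffraction attenuation as a function of the dimensionless Fresnel parameter v (the geometry-to-v conversion and the phased-array source summation used in the study are omitted here).

      import numpy as np
      from scipy.special import fresnel

      def knife_edge_loss_db(v):
          # Field behind a knife edge relative to free space, via Fresnel integrals
          S, C = fresnel(v)
          F = (1 + 1j) / 2 * ((0.5 - C) - 1j * (0.5 - S))
          return -20 * np.log10(np.abs(F))

      for v in (-2.0, 0.0, 1.0, 2.4):
          print(v, round(float(knife_edge_loss_db(v)), 2))
      # v = 0 (grazing incidence) gives the classic 6 dB; deeper shadow, more loss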

  1. Collisionless kinetic theory of oblique tearing instabilities

    DOE PAGES

    Baalrud, S. D.; Bhattacharjee, A.; Daughton, W.

    2018-02-15

    The linear dispersion relation for collisionless kinetic tearing instabilities is calculated for the Harris equilibrium. In contrast to the conventional 2D geometry, which considers only modes at the center of the current sheet, modes can span the current sheet in 3D. Modes at each resonant surface have a unique angle with respect to the guide field direction. Both kinetic simulations and numerical eigenmode solutions of the linearized Vlasov-Maxwell equations have recently revealed that standard analytic theories vastly overestimate the growth rate of oblique modes. In this paper, we find that this stabilization is associated with the density-gradient-driven diamagnetic drift. The analytic theories miss this drift stabilization because the inner tearing layer broadens at oblique angles sufficiently far that the assumption of scale separation between the inner and outer regions of boundary-layer theory breaks down. The dispersion relation obtained by numerically solving a single second order differential equation is found to approximately capture the drift stabilization predicted by solutions of the full integro-differential eigenvalue problem. Finally, a simple analytic estimate for the stability criterion is provided.

  2. Collisionless kinetic theory of oblique tearing instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baalrud, S. D.; Bhattacharjee, A.; Daughton, W.

    The linear dispersion relation for collisionless kinetic tearing instabilities is calculated for the Harris equilibrium. In contrast to the conventional 2D geometry, which considers only modes at the center of the current sheet, modes can span the current sheet in 3D. Modes at each resonant surface have a unique angle with respect to the guide field direction. Both kinetic simulations and numerical eigenmode solutions of the linearized Vlasov-Maxwell equations have recently revealed that standard analytic theories vastly overestimate the growth rate of oblique modes. In this paper, we find that this stabilization is associated with the density-gradient-driven diamagnetic drift. The analytic theories miss this drift stabilization because the inner tearing layer broadens at oblique angles sufficiently far that the assumption of scale separation between the inner and outer regions of boundary-layer theory breaks down. The dispersion relation obtained by numerically solving a single second order differential equation is found to approximately capture the drift stabilization predicted by solutions of the full integro-differential eigenvalue problem. Finally, a simple analytic estimate for the stability criterion is provided.

  3. An analytical particle mover for the charge- and energy-conserving, nonlinearly implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2013-08-01

    We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.

  4. Collisionless kinetic theory of oblique tearing instabilities

    NASA Astrophysics Data System (ADS)

    Baalrud, S. D.; Bhattacharjee, A.; Daughton, W.

    2018-02-01

    The linear dispersion relation for collisionless kinetic tearing instabilities is calculated for the Harris equilibrium. In contrast to the conventional 2D geometry, which considers only modes at the center of the current sheet, modes can span the current sheet in 3D. Modes at each resonant surface have a unique angle with respect to the guide field direction. Both kinetic simulations and numerical eigenmode solutions of the linearized Vlasov-Maxwell equations have recently revealed that standard analytic theories vastly overestimate the growth rate of oblique modes. We find that this stabilization is associated with the density-gradient-driven diamagnetic drift. The analytic theories miss this drift stabilization because the inner tearing layer broadens at oblique angles sufficiently far that the assumption of scale separation between the inner and outer regions of boundary-layer theory breaks down. The dispersion relation obtained by numerically solving a single second order differential equation is found to approximately capture the drift stabilization predicted by solutions of the full integro-differential eigenvalue problem. A simple analytic estimate for the stability criterion is provided.

  5. Prompt radiation, shielding and induced radioactivity in a high-power 160 MeV proton linac

    NASA Astrophysics Data System (ADS)

    Magistris, Matteo; Silari, Marco

    2006-06-01

    CERN is designing a 160 MeV proton linear accelerator, both for a future intensity upgrade of the LHC and as a possible first stage of a 2.2 GeV superconducting proton linac. A first estimate of the required shielding was obtained by means of a simple analytical model. The source terms and the attenuation lengths used in the present study were calculated with the Monte Carlo cascade code FLUKA. Detailed FLUKA simulations were performed to investigate the contribution of neutron skyshine and backscattering to the expected dose rate in the areas around the linac tunnel. An estimate of the induced radioactivity in the magnets, vacuum chamber, the cooling system and the concrete shield was performed. A preliminary thermal study of the beam dump is also discussed.
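
    The "simple analytical model" used for such first shielding estimates is typically of the point-source line-of-sight form sketched below; the source term and attenuation length here are placeholder values standing in for the FLUKA-computed quantities mentioned in the abstract.

      import numpy as np

      # Line-of-sight shielding estimate: H = H0 * exp(-d / lam) / r^2
      H0 = 2.0e4      # source term, uSv*m^2/h per unit beam loss (hypothetical)
      lam = 0.50      # neutron attenuation length in concrete, m (placeholder)

      def dose_rate(r_m, shield_thickness_m, angle_factor=1.0):
          return angle_factor * H0 * np.exp(-shield_thickness_m / lam) / r_m**2

      for d in (1.0, 2.0, 3.0):
          print(f"{d:.0f} m of concrete at r = 5 m: {dose_rate(5.0, d):.3g} uSv/h")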

  6. Estimation of State Transition Probabilities: A Neural Network Model

    NASA Astrophysics Data System (ADS)

    Saito, Hiroshi; Takiyama, Ken; Okada, Masato

    2015-12-01

    Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
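
    A minimal sketch in the spirit of the proposed algorithm (the exact update rule below is our illustration, not necessarily the paper's): after each observed transition, the corresponding row of the estimated transition matrix is nudged toward the one-hot indicator of the new state by the state prediction error.

      import numpy as np

      rng = np.random.default_rng(0)
      P_true = np.array([[0.7, 0.2, 0.1],
                         [0.1, 0.6, 0.3],
                         [0.3, 0.3, 0.4]])
      n_states, eta = 3, 0.02
      P_hat = np.full((n_states, n_states), 1.0 / n_states)

      s = 0
      for _ in range(50_000):
          s_next = rng.choice(n_states, p=P_true[s])
          one_hot = np.eye(n_states)[s_next]
          P_hat[s] += eta * (one_hot - P_hat[s])   # state prediction error update
          s = s_next
      # Rows stay normalized; estimates approach P_true for a suitable eta
      print(np.round(P_hat, 2))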

  7. Simple estimation of linear 1+1 D tsunami run-up

    NASA Astrophysics Data System (ADS)

    Fuentes, M.; Campos, J. A.; Riquelme, S.

    2016-12-01

    An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between the linear and non-linear theories, but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows the shoreline motion to be computed numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments, or realistic cases in which the initial disturbance might be retrieved from seismic data rather than from a theoretical model. It is also shown that the real case studied is consistent with the field observations.

  8. Quantitative Characterization of the Microstructure and Transport Properties of Biopolymer Networks

    PubMed Central

    Jiao, Yang; Torquato, Salvatore

    2012-01-01

    Biopolymer networks are of fundamental importance to many biological processes in normal and tumorous tissues. In this paper, we employ the panoply of theoretical and simulation techniques developed for characterizing heterogeneous materials to quantify the microstructure and effective diffusive transport properties (diffusion coefficient De and mean survival time τ) of collagen type I networks at various collagen concentrations. In particular, we compute the pore-size probability density function P(δ) for the networks and present a variety of analytical estimates of the effective diffusion coefficient De for finite-sized diffusing particles, including the low-density approximation, the Ogston approximation, and the Torquato approximation. The Hashin-Shtrikman upper bound on the effective diffusion coefficient De and the pore-size lower bound on the mean survival time τ are used as benchmarks to test our analytical approximations and numerical results. Moreover, we generalize the efficient first-passage-time techniques for Brownian-motion simulations in suspensions of spheres to the case of fiber networks and compute the associated effective diffusion coefficient De as well as the mean survival time τ, which is related to nuclear magnetic resonance (NMR) relaxation times. Our numerical results for De are in excellent agreement with analytical results for simple network microstructures, such as periodic arrays of parallel cylinders. Specifically, the Torquato approximation provides the most accurate estimates of De for all collagen concentrations among all of the analytical approximations we consider. We formulate a universal curve for τ for the networks at different collagen concentrations, extending the work of Yeong and Torquato [J. Chem. Phys. 106, 8814 (1997)]. We apply rigorous cross-property relations to estimate the effective bulk modulus of collagen networks from a knowledge of the effective diffusion coefficient computed here. The use of cross-property relations to link other physical properties to the transport properties of collagen networks is also discussed. PMID:22683739

  9. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.

  10. Flux tubes and coherence length in the SU(3) vacuum

    NASA Astrophysics Data System (ADS)

    Cea, P.; Cosmai, L.; Cuteri, F.; Papa, A.

    An estimate of the London penetration and coherence lengths in the vacuum of the SU(3) pure gauge theory is given, following an analysis of the transverse profile of the chromoelectric flux tubes. In ordinary superconductivity, a simple variational model for the magnitude of the normalized order parameter of an isolated vortex produces an analytic expression for the magnetic field and supercurrent density. In the picture of the SU(3) vacuum as a dual superconductor, this expression provides the function that fits the chromoelectric field data. The smearing procedure is used in order to reduce noise.
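
    A sketch of how such a transverse-profile fit might be set up, assuming a K0-type (Clem-inspired) ansatz for the longitudinal chromoelectric field; the functional form, parameter names, and synthetic data are assumptions for illustration, not the authors' exact fitting function or lattice data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import k0

    def clem_profile(xt, phi, mu, alpha):
        # K0-type transverse profile of the longitudinal chromoelectric field,
        # inspired by Clem's variational vortex model; xt is the transverse distance
        return phi * k0(np.sqrt(mu**2 * xt**2 + alpha**2)) / k0(alpha)

    xt = np.linspace(0.05, 1.0, 20)                 # mock transverse distances (fm)
    field = clem_profile(xt, 1.2, 4.0, 1.5)
    field += 0.01 * np.random.default_rng(1).normal(size=xt.size)   # mock noise

    popt, _ = curve_fit(clem_profile, xt, field, p0=(1.0, 3.0, 1.0))
    print(popt)   # penetration and coherence lengths follow from mu and alpha
    ```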

  11. Changes of instability thresholds of rotor due to bearing misalignments

    NASA Technical Reports Server (NTRS)

    Springer, H.; Ecker, H.; Gunter, E. J.

    1985-01-01

    The influence of bearing misalignment upon the dynamic characteristics of statically indeterminate rotor bearing systems is investigated. Both the bearing loads and the stability speed limits of a rotor may be changed significantly by the magnitude and direction of bearing misalignment. The theory of short journal bearings is introduced, and simple analytical expressions governing the misalignment problem are derived. Polar plots of the bearing load capacities and stability maps, describing the speed limit in terms of misalignment, are presented. These plots can be used by the designer to estimate deviations between calculations and experimental data due to misalignment effects.

  12. Using the ratio of the magnetic field to the analytic signal of the magnetic gradient tensor in determining the position of simple shaped magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Karimi, Kurosh; Shirzaditabar, Farzad

    2017-08-01

    The analytic signal of the magnitude of the magnetic field components and its first derivatives has been employed for locating magnetic structures that can be considered as point dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies with acceptable accuracy under noisy conditions. The methods are also inexact in determining the depth of deep anomalies. In noisy cases, and at locations other than the poles, the maximum points of the magnitude of the magnetic vector components and Az are not located exactly above 3D bodies; consequently, the horizontal location estimates of the bodies carry errors. Here, the previous methods are modified and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique for working in noisy areas is presented, and a new method that gains noise resistance from a 'depths mean' is introduced. Reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also estimated well. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline agrees well with the result of the half-width method.
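
    The classical 2D analytic-signal location estimate that this work modifies can be sketched in a few lines; the synthetic dipole-line anomaly and the Hilbert-transform construction of the vertical derivative are textbook assumptions, not the authors' generalized, noise-resistant method.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def analytic_signal_amplitude(x, T):
        # For 2D sources the vertical derivative is (up to sign convention)
        # the Hilbert transform of the horizontal derivative along the profile
        dTdx = np.gradient(T, x)
        dTdz = np.imag(hilbert(dTdx))
        return np.hypot(dTdx, dTdz)

    # synthetic line-of-dipoles anomaly centered at x0 = 0, depth z0 = 5
    x = np.linspace(-50.0, 50.0, 1001)
    x0, z0 = 0.0, 5.0
    T = (z0**2 - (x - x0)**2) / ((x - x0)**2 + z0**2)**2

    A = analytic_signal_amplitude(x, T)
    print("estimated horizontal location:", x[np.argmax(A)])   # peaks near x0
    ```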

  13. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

    A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for the lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and for angles of attack up to +/- 20 deg. The model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points the lift, drag, and pitching moment coefficients are made continuous, but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
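
    The break-point behavior described above (coefficients continuous in Mach number, derivatives not) is exactly what piecewise-linear interpolation produces; a sketch follows, in which the break points are taken from the abstract but the tabulated slopes are invented placeholders, not the model's actual coefficients.

    ```python
    import numpy as np

    # Mach break points from the abstract; the slope values are invented
    mach_breaks = np.array([0.3, 1.0, 1.4, 4.0, 6.0, 10.0])
    cl_alpha    = np.array([3.8, 4.8, 4.2, 2.4, 1.9, 1.5])   # hypothetical, 1/rad

    def lift_coefficient(mach, alpha_rad):
        # piecewise-linear in Mach: continuous values, discontinuous derivatives,
        # mirroring the break-point construction described above
        return np.interp(mach, mach_breaks, cl_alpha) * alpha_rad

    print(lift_coefficient(1.2, np.radians(5.0)))
    ```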

  14. Deviation of Long-Period Tides from Equilibrium: Kinematics and Geostrophy

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Ray, Richard D.

    2003-01-01

    New empirical estimates of the long-period fortnightly (Mf) tide obtained from TOPEX/Poseidon (T/P) altimeter data confirm significant basin-scale deviations from equilibrium. Elevations in the low-latitude Pacific have reduced amplitude and lag those in the Atlantic by 30 deg or more. These interbasin amplitude and phase variations are robust features that are reproduced by numerical solutions of the shallow-water equations, even for a constant-depth ocean with schematic interconnected rectangular basins. A simplified analytical model for cooscillating connected basins also reproduces the principal features observed in the empirical solutions. This simple model is largely kinematic. Zonally averaged elevations within a simple closed basin would be nearly in equilibrium with the gravitational potential, except for a constant offset required to conserve mass. With connected basins these offsets are mostly eliminated by interbasin mass flux. Because of rotation, this flux occurs mostly in a narrow boundary layer across the mouth and at the western edge of each basin, and geostrophic balance in this zone supports small residual offsets (and phase shifts) between basins. The simple model predicts that this effect should decrease roughly linearly with frequency, a result that is confirmed by numerical modeling and empirical T/P estimates of the monthly (Mm) tidal constituent. This model also explains some aspects of the anomalous nonisostatic response of the ocean to atmospheric pressure forcing at periods of around 5 days.

  15. Pencil graphite leads as simple amperometric sensors for microchip electrophoresis.

    PubMed

    Natiele Tiago da Silva, Eiva; Marques Petroni, Jacqueline; Gabriel Lucca, Bruno; Souza Ferreira, Valdir

    2017-11-01

    In this work we demonstrate, for the first time, the use of inexpensive commercial pencil graphite leads as simple amperometric sensors for microchip electrophoresis. A PDMS support containing one channel was fabricated through soft lithography, and sanded pencil graphite leads were inserted into this channel to be used as working electrodes. The electrochemical and morphological characterization of the sensor was carried out. The graphite electrode was coupled to PDMS microchips in an end-channel configuration, and electrophoretic experiments were performed using nitrite and ascorbate as probe analytes. The analytes were successfully separated and detected in well-defined peaks with satisfactory resolution using the proposed microfluidic platform. The repeatability of the pencil graphite electrode was satisfactory (RSD values of 1.6% for nitrite and 12.3% for ascorbate, regarding the peak currents), and its lifetime was estimated to be ca. 700 electrophoretic runs, at a cost of ca. $0.05 per electrode. The limits of detection achieved with this system were 2.8 μM for nitrite and 5.7 μM for ascorbate. As a proof of principle, the pencil graphite electrode was employed for the analysis of real well water samples, and nitrite was successfully quantified at levels below its maximum contaminant level established in Brazil and the US. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Simulation of fatigue crack growth under large scale yielding conditions

    NASA Astrophysics Data System (ADS)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

    A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with the results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
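
    A sketch of the mechanism-based growth law, da/dN proportional to ΔCTOD, integrated cycle by cycle; the ΔCTOD estimate below substitutes a small-scale-yielding ΔJ with a Shih-type relation for the paper's generalized J interpolation formula, and all material and loading numbers are hypothetical.

    ```python
    import numpy as np

    # hypothetical material and loading data
    E, sigma_y, d_n = 200e9, 600e6, 0.5    # Pa, Pa, Shih-type factor
    beta = 0.3                             # da/dN = beta * dCTOD
    dsigma, Y = 400e6, 1.12                # stress range (Pa), geometry factor

    def dctod(a):
        dJ = Y**2 * dsigma**2 * np.pi * a / E   # small-scale-yielding cyclic J estimate
        return d_n * dJ / (2.0 * sigma_y)       # Shih-type relation, cyclic form

    a = 20e-6                                   # initial microcrack length, m
    for n in range(1, 5001):
        a += beta * dctod(a)                    # crack growth increment per cycle
        if n % 1000 == 0:
            print(f"N = {n:5d}: a = {a*1e6:7.1f} um")
    ```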

  17. Intensity correction for multichannel hyperpolarized 13C imaging of the heart.

    PubMed

    Dominguez-Viqueira, William; Geraghty, Benjamin J; Lau, Justin Y C; Robb, Fraser J; Chen, Albert P; Cunningham, Charles H

    2016-02-01

    To develop and test an analytic method for correcting the signal intensity variation caused by the inhomogeneous reception profile of an eight-channel phased array for hyperpolarized 13C imaging. Fiducial markers visible in anatomical images were attached to the individual coils to provide three-dimensional localization of the receive hardware with respect to the image frame of reference. The coil locations and dimensions were used to numerically model the reception profile using the Biot-Savart law. The accuracy of the coil sensitivity estimation was validated with images derived from a homogeneous 13C phantom. Numerical coil sensitivity estimates were used to perform intensity correction of in vivo hyperpolarized 13C cardiac images in pigs. In comparison to the conventional sum-of-squares reconstruction, improved signal uniformity was observed in the corrected images. The analytical intensity correction scheme was shown to improve the uniformity of multichannel image reconstruction in hyperpolarized [1-13C]pyruvate and 13C-bicarbonate cardiac MRI. The method is independent of the pulse sequence used for 13C data acquisition, simple to implement, and does not require additional scan time, making it an attractive technique for multichannel hyperpolarized 13C MRI. © 2015 Wiley Periodicals, Inc.
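
    A minimal sketch of the Biot-Savart modeling step, assuming a single circular coil element discretized into straight segments; the element radius, placement, and evaluation plane are hypothetical, and the actual correction divides the reconstructed images by the sensitivity map assembled from all eight elements.

    ```python
    import numpy as np

    def loop_field(point, center, radius, current=1.0, n_seg=200):
        # Biot-Savart sum over straight segments of a circular coil in the z = 0
        # plane: B = (mu0 I / 4 pi) * sum dl x r / |r|^3
        mu0 = 4e-7 * np.pi
        t = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
        pts = center + radius * np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
        dl = np.roll(pts, -1, axis=0) - pts
        r = point - (pts + 0.5 * dl)            # midpoint of each segment to field point
        rn = np.linalg.norm(r, axis=1, keepdims=True)
        return (mu0 * current / (4.0 * np.pi) * np.cross(dl, r) / rn**3).sum(axis=0)

    # relative sensitivity of one 2.5 cm radius coil element across a plane 4 cm deep
    center = np.array([0.0, 0.0, 0.0])
    for x in np.linspace(-0.05, 0.05, 5):
        B = loop_field(np.array([x, 0.0, 0.04]), center, radius=0.025)
        print(f"x = {x:+.3f} m: |B| = {np.linalg.norm(B):.3e} T per ampere")
    ```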

  18. MOCCA-SURVEY Database. I. Eccentric Black Hole Mergers during Binary–Single Interactions in Globular Clusters

    NASA Astrophysics Data System (ADS)

    Samsing, Johan; Askar, Abbas; Giersz, Mirek

    2018-03-01

    We estimate the population of eccentric gravitational wave (GW) binary black hole (BBH) mergers forming during binary–single interactions in globular clusters (GCs), using ∼800 GC models that were evolved using the MOCCA code for star cluster simulations as part of the MOCCA-Survey Database I project. By re-simulating BH binary–single interactions extracted from this set of GC models using an N-body code that includes GW emission at the 2.5 post-Newtonian level, we find that ∼10% of all the BBHs assembled in our GC models that merge at the present time formed during chaotic binary–single interactions, and that about half of this sample has an eccentricity >0.1 at 10 Hz. We explicitly show that this derived rate of eccentric mergers is ∼100 times higher than one would find with a purely Newtonian N-body code. Furthermore, we demonstrate that the eccentric fraction can be accurately estimated using a simple analytical formalism when the interacting BHs are of similar mass, a result that serves as the first successful analytical description of eccentric GW mergers forming during three-body interactions in realistic GCs.

  19. Aquatic concentrations of chemical analytes compared to ...

    EPA Pesticide Factsheets

    We describe screening-level estimates of the potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of the conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes (triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline) had concentrations in excess of 1/10th of their respective EC, suggesting that more detailed characterization of these analytes is warranted.

  20. RP-HPLC method for simultaneous estimation of vigabatrin, gamma-aminobutyric acid and taurine in biological samples.

    PubMed

    Police, Anitha; Shankar, Vijay Kumar; Narasimha Murthy, S

    2018-02-15

    Vigabatrin is used as a first-line drug in the treatment of infantile spasms because its potential benefit outweighs the risk of causing permanent peripheral visual field defects and retinal damage. Chronic administration of vigabatrin in rats has demonstrated that these ocular events are the result of GABA accumulation and the depletion of taurine levels in retinal tissues. In vigabatrin clinical studies, the taurine plasma level is considered a biomarker for studying the structure and function of the retina. An analytical method is therefore needed to monitor taurine levels along with vigabatrin and GABA. A RP-HPLC method has been developed and validated for the simultaneous estimation of vigabatrin, GABA and taurine using a surrogate matrix. Analytes were extracted from human plasma, rat plasma, retina and brain by a simple protein precipitation method and derivatized with naphthalene-2,3-dicarboxaldehyde to produce stable, fluorescence-active isoindole derivatives. The chromatographic analysis was performed on a Zorbax Eclipse AAA column using a gradient elution profile, and the eluent was monitored using a fluorescence detector. Calibration curves were linear over the concentration ranges 64.6-6458, 51.5-5150 and 62.5-6258 ng/mL for vigabatrin, GABA and taurine, respectively, with r² ≥ 0.997 for all analytes. The method was successfully applied to estimating the levels of vigabatrin and its modulating effect on GABA and taurine levels in rat plasma, brain and retinal tissue. This RP-HPLC method can be applied in clinical and preclinical studies to explore the effect of taurine deficiency and to investigate novel approaches for alleviating vigabatrin-induced ocular toxicity. Copyright © 2018. Published by Elsevier B.V.

  1. Meta-Analysis of Rare Binary Adverse Event Data

    PubMed Central

    Bhaumik, Dulal K.; Amatya, Anup; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D.

    2013-01-01

    We examine the use of fixed-effects and random-effects moment-based meta-analytic methods for the analysis of binary adverse event data. Special attention is paid to the case of rare adverse events, which are commonly encountered in routine practice. We study estimation of model parameters and between-study heterogeneity. In addition, we examine traditional approaches to hypothesis testing of the average treatment effect and detection of the heterogeneity of treatment effect across studies. We derive three new methods: a simple (unweighted) average treatment effect estimator, a new heterogeneity estimator, and a parametric bootstrap test for heterogeneity. We then study the statistical properties of both the traditional and new methods via simulation. We find that, in general, moment-based estimators of combined treatment effects and heterogeneity are biased, and that the degree of bias is proportional to the rarity of the event under study. The new methods eliminate much, but not all, of this bias. The various estimators and hypothesis testing methods are then compared and contrasted using an example dataset on treatment of stable coronary artery disease. PMID:23734068
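
    A sketch of the flavor of the simple (unweighted) average treatment effect estimator, assuming study-level 2x2 counts and a 0.5 continuity correction for zero cells; the counts are invented, and the estimator in the paper may differ in detail.

    ```python
    import numpy as np

    # (events_trt, n_trt, events_ctl, n_ctl) per study; invented rare-event counts
    studies = [(1, 200, 0, 200), (0, 150, 2, 150), (3, 500, 1, 480), (0, 90, 1, 95)]

    def log_odds_ratio(et, nt, ec, nc, cc=0.5):
        if 0 in (et, nt - et, ec, nc - ec):     # 0.5 continuity correction for zero cells
            et, ec = et + cc, ec + cc
            nt, nc = nt + 2 * cc, nc + 2 * cc
        return np.log((et * (nc - ec)) / (ec * (nt - et)))

    y = np.array([log_odds_ratio(*s) for s in studies])
    print("unweighted average log-OR:", y.mean())
    print("naive SE of that average :", y.std(ddof=1) / np.sqrt(len(y)))
    ```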

  2. Microbial risk assessment in heterogeneous aquifers: 1. Pathogen transport

    NASA Astrophysics Data System (ADS)

    Molin, S.; Cvetkovic, V.

    2010-05-01

    Pathogen transport in heterogeneous aquifers is investigated for microbial risk assessment. A point source with time-dependent input of pathogens is assumed, exemplified by a simple on-site sanitation installation intermingled with water supply wells. Any pathogen transmission pathway (realization) to the receptor from a postulated infection hazard is viewed as a random event, with the hydraulic conductivity varying spatially. For aquifers where VAR[lnK] < 1 and the integral scale is finite, we provide relatively simple semianalytical expressions for pathogen transport that incorporate colloid filtration theory. We test a wide range of Damköhler numbers in order to assess the significance of rate limitations on the aquifer barrier function. Even slow immobile inactivation may notably affect the retention of pathogens. Analytical estimators for microbial peak discharge are evaluated and are shown to be applicable using parameters representative of rotavirus and hepatitis A with inputs of 10-20 days duration.

  3. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. The Hydrological Sensitivity to Global Warming and Solar Geoengineering Derived from Thermodynamic Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik

    2015-01-16

    We derive analytic expressions for the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model, as well as the steady-state hydrological sensitivities to greenhouse warming and solar geoengineering, are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.

  5. Introduction, comparison, and validation of Meta‐Essentials: A free and simple tool for meta‐analysis

    PubMed Central

    van Rhee, Henk; Hak, Tony

    2017-01-01

    We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
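
    Since the tool bases the overall confidence interval on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator, that combination is worth stating compactly; the sketch below uses invented effect sizes and variances and follows the standard textbook formulas rather than Meta-Essentials' spreadsheet internals.

    ```python
    import numpy as np
    from scipy import stats

    y = np.array([0.30, 0.10, 0.55, 0.25, 0.42])   # invented study effect sizes
    v = np.array([0.04, 0.09, 0.05, 0.06, 0.03])   # their within-study variances

    # DerSimonian-Laird estimate of the between-study variance tau^2
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # random-effects mean with the Knapp-Hartung adjusted standard error and t-CI
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    q = np.sum(w_star * (y - mu) ** 2) / (k - 1)
    se = np.sqrt(q / np.sum(w_star))
    t = stats.t.ppf(0.975, k - 1)
    print(f"mu = {mu:.3f}, tau^2 = {tau2:.3f}, 95% CI = ({mu - t*se:.3f}, {mu + t*se:.3f})")
    ```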

  6. Dual-domain mass-transfer parameters from electrical hysteresis: theory and analytical approach applied to laboratory, synthetic streambed, and groundwater experiments

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.

    2014-01-01

    Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated “effective” parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.

  7. Median of patient results as a tool for assessment of analytical stability.

    PubMed

    Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György

    2015-06-15

    In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor long-term analytical stability as a supplement to the internal control procedures is often needed. Patient data from daily internal control schemes were used for monthly appraisal of analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes measured on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and with the desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable source of material, as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of analytical practice. Copyright © 2015 Elsevier B.V. All rights reserved.
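
    A minimal sketch of the monthly patient-median check, assuming a long-run target median per analyte and a desirable bias specification derived from biological variation; the simulated results, column names, and limits are illustrative.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    results = pd.DataFrame({
        "date": pd.date_range("2024-01-01", periods=3000, freq="h"),
        "value": rng.normal(4.2, 0.5, 3000),    # e.g. potassium results, mmol/L
    })

    target_median = 4.2     # long-run median of patient results for this analyte
    allowable_bias = 1.8    # desirable bias specification in percent (illustrative)

    monthly = results.groupby(results["date"].dt.to_period("M"))["value"].median()
    bias_pct = 100.0 * (monthly - target_median) / target_median
    report = pd.DataFrame({"median": monthly.round(2),
                           "bias_%": bias_pct.round(2),
                           "out_of_spec": bias_pct.abs() > allowable_bias})
    print(report)
    ```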

  8. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
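
    The thermodynamic (power-posterior) estimate integrates the expected log-likelihood over the heating coefficient, ln p(D) = \int_0^1 E_beta[ln L] d(beta). The sketch below assumes a toy normal model whose tempered posteriors can be sampled directly, standing in for the multiple heated MCMC runs, and checks the result against the analytical marginal likelihood.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    y = rng.normal(0.5, 1.0, size=20)   # data with known unit variance
    s0 = 2.0                            # prior: theta ~ N(0, s0^2)

    def mean_loglike(beta, n_draw=20000):
        # the tempered posterior p_beta ~ L^beta * prior is normal for this toy model,
        # so we can sample it directly instead of running a heated MCMC chain
        prec = beta * len(y) + 1.0 / s0**2
        theta = rng.normal(beta * y.sum() / prec, 1.0 / np.sqrt(prec), n_draw)
        return stats.norm.logpdf(y[:, None], loc=theta).sum(axis=0).mean()

    betas = np.linspace(0.0, 1.0, 21) ** 3          # cluster temperatures near beta = 0
    E_ll = [mean_loglike(b) for b in betas]
    log_Z = sum(0.5 * (E_ll[i] + E_ll[i + 1]) * (betas[i + 1] - betas[i])
                for i in range(len(betas) - 1))     # trapezoidal rule over beta

    n = len(y)                                      # exact answer for this toy model
    exact = stats.multivariate_normal.logpdf(y, np.zeros(n),
                                             np.eye(n) + s0**2 * np.ones((n, n)))
    print(log_Z, exact)
    ```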

  9. An improved algorithm for balanced POD through an analytic treatment of impulse response tails

    NASA Astrophysics Data System (ADS)

    Tu, Jonathan H.; Rowley, Clarence W.

    2012-06-01

    We present a modification of the balanced proper orthogonal decomposition (balanced POD) algorithm for systems with simple impulse response tails. In this new method, we use dynamic mode decomposition (DMD) to estimate the slowly decaying eigenvectors that dominate the long-time behavior of the direct and adjoint impulse responses. This is done using a new, low-memory variant of the DMD algorithm, appropriate for large datasets. We then formulate analytic expressions for the contribution of these eigenvectors to the controllability and observability Gramians. These contributions can be accounted for in the balanced POD algorithm by simply appending the impulse response snapshot matrices (direct and adjoint, respectively) with particular linear combinations of the slow eigenvectors. Aside from these additions to the snapshot matrices, the algorithm remains unchanged. By treating the tails analytically, we eliminate the need to run long impulse response simulations, lowering storage requirements and speeding up ensuing computations. To demonstrate its effectiveness, we apply this method to two examples: the linearized, complex Ginzburg-Landau equation, and the two-dimensional fluid flow past a cylinder. As expected, reduced-order models computed using an analytic tail match or exceed the accuracy of those computed using the standard balanced POD procedure, at a fraction of the cost.
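
    A compact sketch of the exact-DMD step used to extract the slowly decaying eigenvectors from impulse-response snapshots; the snapshot matrix here is synthetic, and the paper's low-memory variant reorganizes these computations for large datasets rather than changing the mathematics.

    ```python
    import numpy as np

    def dmd_slow_modes(snapshots, r):
        # exact DMD: A_tilde = U^H Y V S^{-1}; eigenvalues with |lambda| near 1
        # are the slowly decaying tail modes
        X, Y = snapshots[:, :-1], snapshots[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        A_tilde = (U.conj().T @ Y @ Vh.conj().T) / s   # divide column j by s_j
        lam, W = np.linalg.eig(A_tilde)
        order = np.argsort(-np.abs(lam))               # slowest modes first
        return lam[order], (U @ W)[:, order]           # projected DMD modes

    # synthetic impulse-response tail: two decaying spatial modes plus noise
    t = np.arange(200)
    x = np.linspace(0.0, 1.0, 64)[:, None]
    data = (np.sin(3 * x) * 0.99**t + np.cos(7 * x) * 0.90**t
            + 1e-6 * np.random.default_rng(5).normal(size=(64, 200)))

    lam, modes = dmd_slow_modes(data, r=4)
    print(np.abs(lam))   # leading magnitudes close to 0.99 and 0.90
    ```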

  10. Superhydrophobic analyte concentration utilizing colloid-pillar array SERS substrates.

    PubMed

    Wallace, Ryan A; Charlton, Jennifer J; Kirchner, Teresa B; Lavrik, Nickolay V; Datskos, Panos G; Sepaniak, Michael J

    2014-12-02

    The ability to detect a few molecules present in a large sample is of great interest for the detection of trace components in both medicinal and environmental samples. Surface enhanced Raman spectroscopy (SERS) is a technique that can be utilized to detect molecules at very low absolute numbers. However, detection at trace concentration levels in real samples requires properly designed delivery and detection systems. The following work involves superhydrophobic surfaces that have as a framework deterministic or stochastic silicon pillar arrays formed by lithographic or metal-dewetting protocols, respectively. In order to generate the necessary plasmonic substrate for SERS detection, a simple and flow-stable Ag colloid was added to the functionalized pillar array system via soaking. Native pillars and pillars with hydrophobic modification are used. The pillars provide a means to concentrate analyte via superhydrophobic droplet evaporation effects. A ≥ 100-fold concentration of analyte was estimated, with a limit of detection of 2.9 × 10⁻¹² M for mitoxantrone dihydrochloride. Additionally, analytes were delivered to the surface via a multiplex approach in order to demonstrate an ability to control droplet size and placement for scaled-up uses in real-world applications. Finally, a concentration process involving transport and sequestration based on surface-treatment-selective wicking is demonstrated.

  11. Rapid determination of residues of pesticides in honey by µGC-ECD and GC-MS/MS: Method validation and estimation of measurement uncertainty according to document No. SANCO/12571/2013.

    PubMed

    Paoloni, Angela; Alunni, Sabrina; Pelliccia, Alessandro; Pecorelli, Ivan

    2016-01-01

    A simple and straightforward method for the simultaneous determination of residues of 13 pesticides from different classes in honey samples (acrinathrin, bifenthrin, bromopropylate, cyhalothrin-lambda, cypermethrin, chlorfenvinphos, chlorpyrifos, coumaphos, deltamethrin, fluvalinate-tau, malathion, permethrin and tetradifon) has been developed and validated. The analytical method involves dissolution of the honey in water and extraction of the pesticide residues with n-hexane, followed by clean-up on a Florisil SPE column. The extract was evaporated and taken up in a solution of an injection internal standard (I-IS), ethion, and finally analyzed by capillary gas chromatography with electron capture detection (GC-µECD). Identification for qualitative purposes was conducted by gas chromatography with a triple quadrupole mass spectrometer (GC-MS/MS). A matrix-matched calibration curve was constructed for quantitative purposes by plotting the area ratio (analyte/I-IS) against concentration using the GC-µECD instrument. According to document No. SANCO/12571/2013, the method was validated by testing the following parameters: linearity, matrix effect, specificity, precision, trueness (bias) and measurement uncertainty. The analytical process was validated by analyzing blank honey samples spiked at levels equal to and greater than 0.010 mg/kg (the limit of quantification). All parameters satisfied the values established by document No. SANCO/12571/2013. The analytical performance was verified by participating in eight multi-residue proficiency tests organized by BIPEA, obtaining satisfactory z-scores in all 70 determinations. Measurement uncertainty was estimated according to the top-down approaches described in Appendix C of the SANCO document, using the within-laboratory reproducibility relative standard deviation combined with the laboratory bias from the proficiency test data.

  12. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters, among which there may be functional interrelationships that lead to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e., initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and obtaining unique parameter estimates. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are used to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability where applicable. The derivation of the method is straightforward, and the algorithm can thus be easily implemented into a software package.
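
    The core of the method, checking linear dependence among columns of the output sensitivity matrix, can be sketched with an SVD; the toy model below, in which two parameters enter only through their product, is an invented stand-in for the compartment models in the paper.

    ```python
    import numpy as np

    # toy model y(t; p) = p1 * p2 * exp(-p3 t): p1 and p2 enter only as a product,
    # so their columns of the output sensitivity matrix are linearly dependent
    p = np.array([2.0, 0.5, 0.3])
    t = np.linspace(0.1, 10.0, 50)

    def output(p):
        return p[0] * p[1] * np.exp(-p[2] * t)

    eps = 1e-6   # finite-difference sensitivities S[i, j] = d y(t_i) / d p_j
    S = np.column_stack([(output(p + eps * np.eye(3)[j]) - output(p)) / eps
                         for j in range(3)])

    U, s, Vh = np.linalg.svd(S)
    print("singular values:", s)        # one near-zero value flags non-identifiability
    null_vec = Vh[-1]
    print("correlated combination:", null_vec / np.abs(null_vec).max())
    # expected: opposite-signed weights on p1 and p2, near-zero weight on p3
    ```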

  13. Simple analytical model of a thermal diode

    NASA Astrophysics Data System (ADS)

    Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul

    2018-05-01

    Recently, much attention has been given to the manipulation of heat by constructing thermal devices such as thermal diodes, transistors and logic gates. Many of the models proposed have an asymmetry which leads to the desired effect, and the presence of non-linear interactions among the particles is also essential. However, such models lack an analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly, the magnetic field is the only parameter required to obtain the heat rectification effect.

  14. A theoretical perspective on the accuracy of rotational resonance (R²)-based distance measurements in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Pandey, Manoj Kumar; Ramachandran, Ramesh

    2010-03-01

    The application of solid-state NMR methodology to bio-molecular structure determination requires the measurement of constraints in the form of 13C-13C and 13C-15N distances, torsion angles and, in some cases, correlations of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited by the lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to their biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine analytical two-spin models currently employed in the estimation of 13C-13C distances based on the rotational resonance (R²) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R² experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R² experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.

  15. Simple Numerical Modelling for Gasdynamic Design of Wave Rotors

    NASA Astrophysics Data System (ADS)

    Okamoto, Koji; Nagashima, Toshio

    The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism —gradual passage opening, wall friction and leakage— for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.

  16. Analysis of gas membrane ultra-high purification of small quantities of mono-isotopic silane

    DOE PAGES

    de Almeida, Valmor F.; Hart, Kevin J.

    2017-01-03

    A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method and to guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. In addition, analytic solutions are invaluable for verifying numerical solutions obtained from computer-aided methods. Hence, in this paper we provide new analytic solutions for the purification loops proposed. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are ethylene, diborane and ethane (in this order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level in a reasonable amount of time and at reasonable expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity, and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. We suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of the limited permeability data found in the open literature, and we provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using these materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Finally, a future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural silicon semiconductor processes.

  17. Description and comparison of selected models for hydrologic analysis of ground-water flow, St Joseph River basin, Indiana

    USGS Publications Warehouse

    Peters, J.G.

    1987-01-01

    The Indiana Department of Natural Resources (IDNR) is developing water-management policies designed to assess the effects of irrigation and other water uses on water supply in the basin. In support of this effort, the USGS, in cooperation with IDNR, began a study to evaluate appropriate methods for analyzing the effects of pumping on ground-water levels and streamflow in the basin's glacial aquifer systems. Four analytical models describe drawdown for a nonleaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and partially penetrating well; and an unconfined aquifer and partially penetrating well. Analytical equations, simplifying assumptions, and methods of application are described for each model. In addition to these four models, several other analytical models were used to predict the effects of ground-water pumping on water levels in the aquifer and on streamflow in local areas with up to two pumping wells. Analytical models for a variety of other hydrogeologic conditions are cited. A digital ground-water flow model was used to describe how a numerical model can be applied to a glacial aquifer system. The numerical model was used to predict the effects of six pumping plans in a 46.5 sq mi area with as many as 150 wells. Water budgets for the six pumping plans were used to estimate the effect of pumping on streamflow reduction. Results of the analytical and numerical models indicate that, in general, the glacial aquifers in the basin are highly permeable. Radial hydraulic conductivity calculated with the analytical models ranged from 280 to 600 ft/day, compared to the 210 and 360 ft/day used in the numerical model. Maximum seasonal pumping for irrigation produced a maximum calculated drawdown of only one-fourth of the available drawdown and reduced streamflow by as much as 21%. Analytical models are useful for estimating aquifer properties and predicting local effects of pumping in areas with simple lithology and boundary conditions and with few pumping wells. Numerical models are useful in regional areas with complex hydrogeology and many pumping wells; they provide detailed water budgets useful for estimating the sources of water in pumping simulations and are also useful for constructing flow nets. The choice of model type also depends on the nature and scope of the questions to be answered and on the degree of accuracy required. (Author's abstract)
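
    The first analytical model listed above (nonleaky, confined aquifer with a fully penetrating well) is the classical Theis solution, which is compact enough to sketch; the pumping rate and aquifer properties below are placeholders, not values from the report.

    ```python
    import numpy as np
    from scipy.special import exp1

    def theis_drawdown(r, t, Q, T, S):
        # Theis solution: s = Q / (4 pi T) * W(u), u = r^2 S / (4 T t),
        # where W(u) is the well function (exponential integral E1)
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # placeholder values: ~500 gpm (96,000 ft^3/day), T = 10,000 ft^2/day, S = 2e-4
    Q, T, S = 96000.0, 10000.0, 2.0e-4
    for r in (50.0, 500.0, 5000.0):    # radial distance, ft
        s = theis_drawdown(r, t=90.0, Q=Q, T=T, S=S)
        print(f"r = {r:6.0f} ft: drawdown = {s:5.2f} ft after 90 days")
    ```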

  18. The application of an atomistic J-integral to a ductile crack.

    PubMed

    Zimmerman, Jonathan A; Jones, Reese E

    2013-04-17

    In this work we apply a Lagrangian kernel-based estimator of continuum fields to atomic data to estimate the J-integral for the emission of dislocations from a crack tip. Face-centered cubic (fcc) gold and body-centered cubic (bcc) iron modeled with embedded atom method (EAM) potentials are used as example systems. The results for a single crack with a K-loading compare well to an analytical solution from anisotropic linear elastic fracture mechanics. We also discovered that after the emission of dislocations from the crack tip there is a loop-size-dependent contribution to the J-integral. For a system with a finite-width crack loaded in simple tension, finite-size effects for the systems that were feasible to compute prevented precise agreement with theory. However, our results indicate a trend towards convergence.

  1. DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION

    EPA Science Inventory

    A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed-form analytical solutions to generate a ground-water flow...

  2. Analytical Tools in School Finance Reform.

    ERIC Educational Resources Information Center

    Johns, R. L.

    This paper discusses the problem of analyzing variations in the educational opportunities provided by different school districts and describes how to assess the impact of school finance alternatives through use of various analytical tools. The author first examines relatively simple analytical methods, including calculation of per-pupil…

  3. Horizontal lifelines - review of regulations and simple design method considering anchorage rigidity.

    PubMed

    Galy, Bertrand; Lan, André

    2018-03-01

    Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that accounts for anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum difference of 2.1% from the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make rapid on-site verifications if needed.
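
    The basic statics behind such a design check can be sketched under simple assumptions: for a line of span L carrying a midspan arrest force F with total sag s (line stretch plus anchorage deflection), equilibrium gives T = F sqrt((L/2)^2 + s^2) / (2s). This is a generic textbook estimate, not the article's full method, and the numbers below are illustrative.

    ```python
    import math

    def line_tension(F, L, s):
        # static equilibrium at midspan: 2 T sin(theta) = F,
        # with sin(theta) = s / sqrt((L/2)^2 + s^2)
        return F * math.hypot(L / 2.0, s) / (2.0 * s)

    F = 6.0    # kN, assumed maximum arrest force
    L = 12.0   # m, span between anchorages
    for s in (0.3, 0.6, 1.2):   # m: stiffer anchorages mean less sag, higher tension
        print(f"sag {s:.1f} m -> line tension {line_tension(F, L, s):5.1f} kN")
    ```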

  4. An objective analysis of the dynamic nature of field capacity

    NASA Astrophysics Data System (ADS)

    Twarakavi, Navin K. C.; Sakai, Masaru; Šimůnek, Jirka

    2009-10-01

    Field capacity is one of the most commonly used, and yet poorly defined, soil hydraulic properties. Traditionally, field capacity has been defined as the amount of soil moisture after excess water has drained away and the rate of downward movement has materially decreased. Unfortunately, this qualitative definition does not lend itself to an unambiguous quantitative approach for estimation. Because of the vagueness in defining what constitutes "drainage of excess water" from a soil, the estimation of field capacity has often been based upon empirical guidelines. These empirical guidelines are either time, pressure, or flux based. In this paper, we developed a numerical approach to estimate field capacity using a flux-based definition. The resulting approach was implemented on the soil parameter data set used by Schaap et al. (2001), and the estimated field capacity was compared to traditional definitions of field capacity. The developed modeling approach was implemented using the HYDRUS-1D software with the capability of simultaneously estimating field capacity for multiple soils with soil hydraulic parameter data. The Richards equation was used in conjunction with the van Genuchten-Mualem model to simulate variably saturated flow in a soil. Using the modeling approach to estimate field capacity also resulted in additional information such as (1) the pressure head, at which field capacity is attained, and (2) the drainage time needed to reach field capacity from saturated conditions under nonevaporative conditions. We analyzed the applicability of the modeling-based approach to estimate field capacity on real-world soils data. We also used the developed method to create contour diagrams showing the variation of field capacity with texture. It was found that using benchmark pressure heads to estimate field capacity from the retention curve leads to inaccurate results. Finally, a simple analytical equation was developed to predict field capacity from soil hydraulic parameter information. The analytical equation was found to be effective in its ability to predict field capacities.
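
    The flux-based definition can be made concrete under a unit-gradient drainage assumption, where the drainage flux equals the hydraulic conductivity, so field capacity is the water content at which K falls to a chosen negligible-flux threshold; the van Genuchten-Mualem parameters and the 0.01 cm/day threshold below are illustrative assumptions, not the paper's exact analytical equation.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # van Genuchten-Mualem parameters for a loam-like soil (illustrative values)
    theta_r, theta_s, alpha, n, Ks, l = 0.078, 0.43, 0.036, 1.56, 24.96, 0.5
    m = 1.0 - 1.0 / n
    q_fc = 0.01    # cm/day: drainage flux treated as negligible

    def K_of_Se(Se):
        # Mualem-van Genuchten unsaturated conductivity vs. effective saturation
        return Ks * Se**l * (1.0 - (1.0 - Se**(1.0 / m))**m) ** 2

    # under unit-gradient drainage the flux equals K, so solve K(Se_fc) = q_fc
    Se_fc = brentq(lambda Se: K_of_Se(Se) - q_fc, 1e-6, 1.0 - 1e-9)
    theta_fc = theta_r + Se_fc * (theta_s - theta_r)
    print(f"field capacity ~ {theta_fc:.3f} cm3/cm3 (Se = {Se_fc:.3f})")
    ```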

  5. An analytical formula for the longitudinal resonance frequencies of a fluid-filled crack

    NASA Astrophysics Data System (ADS)

    Maeda, Y.; Kumagai, H.

    2013-12-01

    The fluid-filled crack model (Chouet, 1986, JGR) simulates the resonances of a rectangular crack filled with an inviscid fluid embedded in a homogeneous isotropic elastic medium. The model demonstrates the existence of a slow wave, known as the crack wave, that propagates along the solid-fluid interfaces. The wave velocity depends on the crack stiffness. The model has been used to interpret the peak frequencies of long-period (LP) and very long period (VLP) seismic events at various volcanoes (Chouet and Matoza, 2013, JVGR). Up to now, crack model simulations have been performed using the finite difference (Chouet, 1986) and boundary integral (Yamamoto and Kawakatsu, 2008, GJI) methods. These methods require computationally extensive procedures to estimate the complex frequencies of crack resonance modes. An easier way to calculate the frequencies of crack resonances would help understanding of the observed frequencies. In this presentation, we propose a simple analytical formula for the longitudinal resonance frequencies of a fluid-filled crack. We first evaluated the analytical expression proposed by Kumagai (2009, Encyc. Complex. Sys. Sci.) through a comparison of the expression with the peak frequencies computed by a 2D version of the FDM code of Chouet (1986). Our comparison revealed that the equation of Kumagai (2009) shows discrepancies with the resonant frequencies computed by the FDM. We then modified the formula as

        f_mL = (m - 1) a / [2L (1 + 2 ε_mL C)^(1/2)],   (1)

    where L is the crack length, a is the velocity of sound in the fluid, C is the crack stiffness, m is a positive integer defined such that the wavelength of the normal displacement on the crack surface is 2L/m, and ε_mL is a constant that depends on the longitudinal resonance mode. Excellent fits were obtained between the peak frequencies calculated by the FDM and by Eq. (1), suggesting that this equation is suitable for the resonant frequencies. We also performed 3D FDM computations of the longitudinal mode resonances. The peak frequencies computed by the FDM are again well fitted by Eq. (1). The best-fit ε_mL values differ from those in 2D and depend on W/L, where W is the crack width. Eq. (1) shows that f_mL is a simple analytical function of a/L and C given m and W/L. This enables simple and rapid interpretation of the source processes of LP events, including estimation of the fluid properties and crack geometries as well as identification of the resonance modes of the individual peak frequencies. LP events at volcanoes often exhibit peak frequency variations; in such cases, the frequency variations can be easily converted into variations in the fluid properties and crack geometries. We showed that Eq. (1) is consistent with the analytical solution for an infinite crack given by Ferrazzini and Aki (1987, JGR). Although a theoretical derivation of Eq. (1) has not yet been obtained, Eq. (1) is consistent with the frequencies expected from the wavelengths of the fluid pressure variation.
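
    Eq. (1) is simple enough to evaluate directly; in the sketch below the sound speed, crack length, stiffness, and mode constants are illustrative placeholders (in practice the ε_mL values come from the authors' fits to FDM results).

    ```python
    import numpy as np

    def f_mL(m, a, L, C, eps_mL):
        # Eq. (1): f_mL = (m - 1) a / [2 L sqrt(1 + 2 eps_mL C)]
        return (m - 1) * a / (2.0 * L * np.sqrt(1.0 + 2.0 * eps_mL * C))

    a = 1000.0    # m/s, sound speed in the fluid (assumed)
    L = 100.0     # m, crack length (assumed)
    C = 100.0     # crack stiffness (assumed)
    for m, eps in {2: 0.7, 3: 0.5, 4: 0.4}.items():   # hypothetical eps_mL values
        print(f"mode m = {m}: f = {f_mL(m, a, L, C, eps):.3f} Hz")
    ```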

  6. Methodological considerations in using complex survey data: an applied example with the Head Start Family and Child Experiences Survey.

    PubMed

    Hahs-Vaughn, Debbie L; McWayne, Christine M; Bulotsky-Shearer, Rebecca J; Wen, Xiaoli; Faria, Ann-Marie

    2011-06-01

    Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on addressing or ignoring the complex sampling issues. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results.

  7. Quantitative estimation of minimum offset for multichannel surface-wave survey with actively exciting source

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2006-01-01

    Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively estimating the minimum offset for a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys in near-surface applications.

  8. Raman spectroscopic investigation of thorium dioxide-uranium dioxide (ThO₂-UO₂) fuel materials.

    PubMed

    Rao, Rekha; Bhagat, R K; Salke, Nilesh P; Kumar, Arun

    2014-01-01

    Raman spectroscopic investigations were carried out on proposed nuclear fuel thorium dioxide-uranium dioxide (ThO2-UO2) solid solutions and simulated fuels based on ThO2-UO2. Raman spectra of ThO2-UO2 solid solutions exhibited two-mode behavior in the entire composition range. Variations in mode frequencies and relative intensities of Raman modes enabled estimation of composition, defects, and oxygen stoichiometry in these compounds that are essential for their application. The present study shows that Raman spectroscopy is a simple, promising analytical tool for nondestructive characterization of this important class of nuclear fuel materials.

  9. Estimates of Helicobacter pylori densities in the gastric mucus layer by PCR, histologic examination, and CLOtest.

    PubMed

    Nowak, J A; Forouzandeh, B; Nowak, J A

    1997-09-01

    Helicobacter pylori inhabits the gastric mucus layer of infected persons. A number of investigators have reported the feasibility of detecting H pylori in gastric mucus with polymerase chain reaction (PCR)-based methods. We have established the sensitivity of a simple PCR assay for detecting H pylori in gastric mucus samples and estimate that the density of H pylori organisms in the gastric mucus of untreated patients is approximately 10^7 to 10^8 organisms per milliliter. We have similarly estimated the analytic sensitivities of histologic examination and the CLOtest (TRI-MED Specialties, Overland Park, Kan) for detecting H pylori and calculate similar values for the numbers of organisms in the gastric mucus layer. Our data indicate that gastric mucus is a suitable specimen for the detection of H pylori in infected patients, and that PCR-based assays of gastric mucus are significantly more sensitive than histologic testing or the CLOtest for demonstration of H pylori infection.

  10. Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.

    PubMed

    Hussain, S A; Perrier, M; Tartakovsky, B

    2018-04-01

    Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
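
    The abstract does not give the circuit topology, so the following is only a plausible sketch of the estimation step: it assumes the off-transient voltage relaxes exponentially toward a baseline, fits that transient, and converts the fitted step and time constant into a resistance and a capacitance. The synthetic data, applied current, and names are all hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def off_transient(t, v_inf, dv, tau):
        """Assumed exponential voltage relaxation after the supply disconnects."""
        return v_inf + dv * np.exp(-t / tau)

    # Synthetic stand-in for logged (time [s], voltage [V]) "off" interval data
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 60.0, 200)
    v = off_transient(t, 0.25, 0.45, 12.0) + rng.normal(0.0, 0.005, t.size)

    (v_inf, dv, tau), _ = curve_fit(off_transient, t, v, p0=(0.2, 0.4, 10.0))

    i_on = 0.02                      # hypothetical current just before disconnection [A]
    r_total = dv / i_on              # resistance discharged through the transient
    capacitance = tau / r_total      # tau = R * C for the assumed parallel RC branch
    print(f"tau = {tau:.1f} s, R = {r_total:.1f} ohm, C = {capacitance:.2f} F")
    ```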

  11. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989); this extension is particularly valuable when the model error variance is small relative to the sampling error in the at-site estimator.

  12. DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION - PROJECT SUMMARY

    EPA Science Inventory

    A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...

  13. Analytical determination of space station response to crew motion and design of suspension system for microgravity experiments

    NASA Technical Reports Server (NTRS)

    Liu, F. C.

    1986-01-01

    The objective of this investigation is to analytically determine the acceleration produced by crew motion in an orbiting space station and to define design parameters for the suspension system of microgravity experiments. A simple structural model for simulation of the IOC space station is proposed. Mathematical formulation of this model provides engineers with a simple and direct tool for designing an effective suspension system.

  14. An Analytical State Transition Matrix for Orbits Perturbed by an Oblate Spheroid

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An analytical state transition matrix and its inverse, which include the short period and secular effects of the second zonal harmonic, were developed from the nonsingular PS satellite theory. The fact that the independent variable in the PS theory is not time is in no respect disadvantageous, since any explicit analytical solution must be expressed in the true or eccentric anomaly. This is shown to be the case for the simple conic matrix. The PS theory allows for a concise, accurate, and algorithmically simple state transition matrix. The improvement over the conic matrix ranges from 2 to 4 digits of accuracy.

  15. How Much Can We Learn from a Single Chromatographic Experiment? A Bayesian Perspective.

    PubMed

    Wiczling, Paweł; Kaliszan, Roman

    2016-01-05

    In this work, we proposed and investigated a Bayesian inference procedure to find the desired chromatographic conditions based on known analyte properties (lipophilicity, pKa, and polar surface area) using one preliminary experiment. A previously developed nonlinear mixed effect model was used to specify the prior information about a new analyte with known physicochemical properties. Further, the prior (no preliminary data) and posterior predictive distribution (prior + one experiment) were determined sequentially to search for the desired separation. The following isocratic high-performance reversed-phase liquid chromatographic conditions were sought: (1) retention time of a single analyte within the range of 4-6 min and (2) baseline separation of two analytes with retention times within the range of 4-10 min. The empirical posterior Bayesian distribution of parameters was estimated using the "slice sampling" Markov Chain Monte Carlo (MCMC) algorithm implemented in Matlab. The simulations with artificial analytes and experimental data of ketoprofen and papaverine were used to test the proposed methodology. The simulation experiment showed that for one and for two randomly selected analytes, the probability of obtaining a successful chromatogram using no or one preliminary experiment is 97% and 74%, respectively. The desired separation for ketoprofen and papaverine was established based on a single experiment. It was confirmed that the search for a desired separation rarely requires a large number of chromatographic analyses, at least for a simple optimization problem. The proposed Bayesian-based optimization scheme is a powerful method of finding a desired chromatographic separation based on a small number of preliminary experiments.
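
    A minimal sketch of the Bayesian updating idea under strong simplifications: a linear-solvent-strength retention model log10(k) = logkw - S*phi (an assumption; the paper's mixed-effects model is richer), a Gaussian prior standing in for the population model, and a grid posterior after one preliminary run. All numbers are invented.

    ```python
    import numpy as np

    t0 = 1.0                                   # column dead time [min], assumed
    logkw_grid = np.linspace(0.0, 4.0, 201)
    s_grid = np.linspace(1.0, 8.0, 201)
    LKW, S = np.meshgrid(logkw_grid, s_grid)

    # Prior from analyte properties: logkw ~ N(2.2, 0.5), S ~ N(4.5, 1.0)
    log_prior = -0.5 * (((LKW - 2.2) / 0.5) ** 2 + ((S - 4.5) / 1.0) ** 2)

    # One preliminary experiment: tR = 5.8 min at phi = 0.5, sigma = 0.3 min
    phi1, tr_obs, sigma = 0.5, 5.8, 0.3
    tr_pred = t0 * (1.0 + 10.0 ** (LKW - S * phi1))
    log_post = log_prior - 0.5 * ((tr_obs - tr_pred) / sigma) ** 2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    # Posterior probability that a candidate phi yields 4 min <= tR <= 6 min
    for phi in (0.4, 0.5, 0.6):
        tr = t0 * (1.0 + 10.0 ** (LKW - S * phi))
        p_ok = post[(tr >= 4.0) & (tr <= 6.0)].sum()
        print(f"phi = {phi:.1f}: P(4 <= tR <= 6) = {p_ok:.2f}")
    ```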

  16. Numerical modeling of simultaneous tracer release and piscicide treatment for invasive species control in the Chicago Sanitary and Ship Canal, Chicago, Illinois

    USGS Publications Warehouse

    Zhu, Zhenduo; Motta, Davide; Jackson, P. Ryan; Garcia, Marcelo H.

    2017-01-01

    In December 2009, during a piscicide treatment targeting the invasive Asian carp in the Chicago Sanitary and Ship Canal, Rhodamine WT dye was released to track and document the transport and dispersion of the piscicide. In this study, two modeling approaches are presented to reproduce the advection and dispersion of the dye tracer (and piscicide), a one-dimensional analytical solution and a three-dimensional numerical model. The two approaches were compared with field measurements of concentration and their applicability is discussed. Acoustic Doppler current profiler measurements were used to estimate the longitudinal dispersion coefficients at ten cross sections, which were taken as reference for calibrating the longitudinal dispersion coefficient in the one-dimensional analytical solution. While the analytical solution is fast, relatively simple, and can fairly accurately predict the core of the observed concentration time series at points downstream, it does not capture the tail of the breakthrough curves. These tails are well reproduced by the three-dimensional model, because it accounts for the effects of dead zones and a power plant which withdraws nearly 80 % of the water from the canal for cooling purposes before returning it back to the canal.
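
    The one-dimensional analytical solution referred to is presumably of the classic instantaneous-release (Taylor/Fischer) form; a sketch under that assumption follows, with illustrative rather than calibrated parameter values.

    ```python
    import numpy as np

    def slug_concentration(x, t, mass, area, u, d_l):
        """Classic 1-D instantaneous-release advection-dispersion solution:

            C(x, t) = M / (A * sqrt(4*pi*D_L*t)) * exp(-(x - U*t)**2 / (4*D_L*t))
        """
        return (mass / (area * np.sqrt(4.0 * np.pi * d_l * t))
                * np.exp(-(x - u * t) ** 2 / (4.0 * d_l * t)))

    # Illustrative values only (not the study's calibrated numbers)
    mass, area = 10.0, 150.0   # kg of dye, m^2 canal cross-section
    u, d_l = 0.3, 30.0         # m/s mean velocity, m^2/s longitudinal dispersion
    x = 5000.0                 # observation point 5 km downstream
    for t_hr in (2, 4, 6):
        c = slug_concentration(x, t_hr * 3600.0, mass, area, u, d_l)
        print(f"t = {t_hr} h: C = {c * 1e6:.2f} ug/L")  # kg/m^3 -> ug/L
    ```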

  17. Role of partial miscibility on pressure buildup due to constant rate injection of CO2 into closed and open brine aquifers

    NASA Astrophysics Data System (ADS)

    Mathias, Simon A.; Gluyas, Jon G.; GonzáLez MartíNez de Miguel, Gerardo J.; Hosseini, Seyyed A.

    2011-12-01

    This work extends an existing analytical solution for pressure buildup due to CO2 injection in brine aquifers by incorporating effects associated with partial miscibility. These include evaporation of water into the CO2-rich phase, dissolution of CO2 into brine, and salt precipitation. The resulting equations are closed-form, including the locations of the associated leading and trailing shock fronts. Derivation of the analytical solution involves making a number of simplifying assumptions, including vertical pressure equilibrium, negligible capillary pressure, and constant fluid properties. The analytical solution is compared to results from TOUGH2 and found to accurately approximate the extent of the dry-out zone around the well, the resulting permeability enhancement due to residual brine evaporation, the volumetric saturation of precipitated salt, and the vertically averaged pressure distribution in both space and time for the four scenarios studied. While brine evaporation is found to have a considerable effect on pressure, the effect of CO2 dissolution is found to be small. The resulting equations remain simple to evaluate in spreadsheet software and represent a significant improvement on current methods for estimating pressure-limited CO2 storage capacity.

  18. WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL (EPA/600/SR-94/210)

    EPA Science Inventory

    A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a groundwater flo...

  19. Offline solid-phase extraction for preconcentration of pharmaceuticals and personal care products in environmental water and their simultaneous determination using the reversed phase high-performance liquid chromatography method.

    PubMed

    G Archana; Dhodapkar, Rita; Kumar, Anupama

    2016-09-01

    The present study reports a precise and simple offline solid-phase extraction (SPE) method coupled with reversed-phase high-performance liquid chromatography (RP-HPLC) for the simultaneous determination of five representative and commonly occurring pharmaceuticals and personal care products (PPCPs), a class of emerging pollutants in the aquatic environment. The target analytes, ciprofloxacin, acetaminophen, caffeine, benzophenone, and irgasan, were separated by a simple HPLC method. The column was a reversed-phase C18 column, and the mobile phase was 1% acetic acid and methanol (20:80 v/v) under isocratic conditions at a flow rate of 1 mL min^-1. The analytes were separated and detected within 15 min using a photodiode array detector (PDA). The calibration curves were linear, with correlation coefficients of 0.98-0.99. The limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and ruggedness demonstrated the reproducibility, specificity, and sensitivity of the developed method. Prior to the analysis, SPE was performed using a C18 cartridge to preconcentrate the target analytes from the environmental water samples. The developed method was applied to evaluate and fingerprint PPCPs in sewage collected from a residential engineering college campus, in polluted water bodies such as the Nag and Pili rivers, and in the influent and effluent of a sewage treatment plant (STP) situated at Nagpur city, during the peak summer season. This method is useful for estimating pollutants present in microquantities in surface water bodies and treated sewage, complementing the nanogram-level detection achieved with mass spectrometry (MS) detectors.

  20. Consistent Yokoya-Chen Approximation to Beamstrahlung(LCC-0010)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peskin, M

    2004-04-22

    I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.

  1. Comparison of Gluten Extraction Protocols Assessed by LC-MS/MS Analysis.

    PubMed

    Fallahbaghery, Azadeh; Zou, Wei; Byrne, Keren; Howitt, Crispin A; Colgrave, Michelle L

    2017-04-05

    The efficiency of gluten extraction is of critical importance to the results derived from any analytical method for gluten detection and quantitation, whether it employs reagent-based technology (antibodies) or analytical instrumentation (mass spectrometry). If the target proteins are not efficiently extracted, the end result will be an underestimation of the gluten content, posing a health risk to people affected by conditions such as celiac disease (CD) and nonceliac gluten sensitivity (NCGS). Five different extraction protocols were investigated using LC-MRM-MS for their ability to efficiently and reproducibly extract gluten. The rapid and simple "IPA/DTT" protocol and the related "two-step" protocol were enriched for gluten proteins, at 55/86% (trypsin/chymotrypsin) and 41/68% of all protein identifications, respectively, with both methods showing high reproducibility (CV < 15%). When using multistep protocols, it was critical to examine all fractions, as coextraction of proteins occurred across fractions, with significant levels of proteins appearing in unexpected fractions and not all proteins within a particular gluten class behaving the same.

  2. Determination of nicotine by surface-enhanced Raman scattering (SERS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barber, T.E.; List, M.S.; Haas, J.W. III

    1994-11-01

    The analytical application of surface-enhanced Raman spectroscopy (SERS) to the determination of nicotine is demonstrated. A simple spectroelectrochemical method using a copper or silver electrode as the SERS substrate has been developed, consisting of three steps: polishing a working electrode to a mirror finish; roughening the electrode in an electrolyte solution; and, finally, depositing the nicotine analyte onto the roughened electrode after immersion in a sample solution. During the reduction cycle, a large enhancement in nicotine Raman scattering is observed at the electrode surface. The intensity of the SERS signal on a silver electrode is linear with concentration from 10 to 900 ppb, with an estimated detection limit of 7 ppb. The total analysis time per sample is approximately five minutes. This procedure has been used to analyze the extract from a cigarette side-stream smoke sample (environmental tobacco smoke); the SERS results agree well with those of conventional gas chromatographic analysis.

  3. Time constant determination for electrical equivalent of biological cells

    NASA Astrophysics Data System (ADS)

    Dubey, Ashutosh Kumar; Dutta-Gupta, Shourya; Kumar, Ravi; Tewari, Abhishek; Basu, Bikramjit

    2009-04-01

    The electric field interactions with biological cells are of significant interest in various biophysical and biomedical applications. To study this important aspect, it is necessary to evaluate the time constant that sets the response time of living cells in an electric field (E-field). In the present study, the time constant is evaluated by treating spherically shaped cells as their electrical-circuit analogs and assuming realistic values for the capacitance and resistivity of the cell/nuclear membrane, cytoplasm, and nucleus. In addition, the resistance of the cytoplasm and nucleoplasm was computed based on simple geometrical considerations. Importantly, the first-principles analysis shows that the average time constant would be around 2-3 μs, assuming the theoretical capacitance values and the analytically computed resistance values. The implications of our analytical solution are discussed in reference to cellular adaptation processes such as atrophy/hypertrophy, as well as the variation in electrical transport properties of the cellular membrane/cytoplasm/nuclear membrane/nucleoplasm.
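
    An order-of-magnitude sketch of the τ = RC estimate for a spherical cell, using textbook-style values (≈1 μF/cm² membrane capacitance, ≈1 Ω·m cytoplasm resistivity) rather than the paper's layered membrane/cytoplasm/nucleus parameters; with the authors' assumptions the estimate rises to the quoted 2-3 μs.

    ```python
    import math

    r_cell = 10e-6   # cell radius [m], assumed
    c_m    = 1e-2    # membrane capacitance per unit area [F/m^2] (~1 uF/cm^2)
    rho    = 1.0     # cytoplasm resistivity [ohm m], assumed

    area        = 4.0 * math.pi * r_cell ** 2      # membrane area
    capacitance = c_m * area                       # total membrane capacitance
    resistance  = rho / (4.0 * math.pi * r_cell)   # spreading resistance to a sphere

    tau = resistance * capacitance                 # reduces to rho * c_m * r_cell
    print(f"tau ~ {tau * 1e6:.2f} us")             # sub-microsecond for these numbers
    ```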

  4. Estimating true evolutionary distances under the DCJ model.

    PubMed

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.

  5. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy to apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in a severely nonlinear system case.

  6. Analytical Phase Equilibrium Function for Mixtures Obeying Raoult's and Henry's Laws

    NASA Astrophysics Data System (ADS)

    Hayes, Robert

    When a mixture of two substances exists in both the liquid and gas phase at equilibrium, Raoult's and Henry's laws (the ideal solution and ideal dilute solution approximations) can be used to estimate the gas and liquid mole fractions at the extremes of either very little solute or very little solvent. By assuming that a cubic polynomial can reasonably approximate the values intermediate to these extremes as a function of mole fraction, the cubic polynomial is solved and presented. A closed-form equation approximating the pressure dependence on the mole fractions of the constituents is thereby obtained. As a first approximation, this is a very simple and potentially useful means to estimate gas and liquid mole fractions of equilibrium mixtures. Mixtures with an azeotrope require additional attention if this type of approach is to be utilized. This work was supported in part by federal Grant NRC-HQ-84-14-G-0059.
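
    A minimal sketch of one natural reading of the construction: a cubic p(x) pinned to Henry's law at x -> 0 (slope K_H) and to Raoult's law at x -> 1 (value and slope P*). The boundary conditions and the numerical constants are assumptions for illustration, not the paper's exact formulation.

    ```python
    def partial_pressure_cubic(x, p_star, k_h):
        """Cubic interpolation between Henry's and Raoult's laws.

        Conditions: p(0) = 0, p'(0) = K_H, p(1) = P*, p'(1) = P*.
        With p(x) = b*x + c*x**2 + d*x**3, solving gives
        b = K_H, c = 2*(P* - K_H), d = -(P* - K_H).
        """
        delta = p_star - k_h
        return k_h * x + 2.0 * delta * x ** 2 - delta * x ** 3

    # Hypothetical constants for one volatile component [kPa]
    p_star, k_h = 70.0, 150.0
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f}: p = {partial_pressure_cubic(x, p_star, k_h):.1f} kPa")
    ```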

  7. Dipolar filtered magic-sandwich-echoes as a tool for probing molecular motions using time domain NMR

    NASA Astrophysics Data System (ADS)

    Filgueiras, Jefferson G.; da Silva, Uilson B.; Paro, Giovanni; d'Eurydice, Marcel N.; Cobo, Márcio F.; deAzevedo, Eduardo R.

    2017-12-01

    We present a simple 1H NMR approach for characterizing intermediate- to fast-regime molecular motions using 1H time-domain NMR at low magnetic field. The method is based on a Goldman-Shen dipolar filter (DF) followed by a mixed magic-sandwich echo (MSE). The dipolar filter suppresses the signals arising from molecular segments with sub-kHz mobility, so only signals from mobile segments are detected. Thus, the temperature dependence of the signal intensities directly evidences the onset of molecular motions with rates higher than kHz. The DF-MSE signal intensity is described by an analytical function based on the Anderson-Weiss theory, from which parameters related to the molecular motion (e.g. correlation times and activation energy) can be estimated when performing experiments as a function of temperature. Furthermore, we propose the use of Tikhonov regularization for estimating the width of the distribution of correlation times.

  8. On the CCN (de)activation nonlinearities

    NASA Astrophysics Data System (ADS)

    Arabas, Sylwester; Shima, Shin-ichiro

    2017-09-01

    We take into consideration the evolution of particle size in a monodisperse aerosol population during activation and deactivation of cloud condensation nuclei (CCN). Our analysis reveals that the system undergoes a saddle-node bifurcation and a cusp catastrophe. The control parameters chosen for the analysis are the relative humidity and the particle concentration. An analytical estimate of the activation timescale is derived through estimation of the time spent in the saddle-node bifurcation bottleneck. Numerical integration of the system coupled with a simple air-parcel cloud model portrays two types of activation/deactivation hystereses: one associated with the kinetic limitations on droplet growth when the system is far from equilibrium, and one occurring close to equilibrium and associated with the cusp catastrophe. We discuss the presented analyses in the context of the development of particle-based models of aerosol-cloud interactions, in which activation and deactivation impose stringent time-resolution constraints on numerical integration.

  9. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  10. Polyamide as an efficient sorbent for simultaneous interface-free determination of three Sudan dyes in saffron and urine using high-performance liquid chromatography-ultra violet detection.

    PubMed

    Saeidi, Iman; Barfi, Behruz; Payrovi, Moazameh; Feizy, Javid; Sheibani, Hojat A; Miri, Mina; Ghollasi Moud, Farahnaz

    2015-01-01

    Polyamide (PA) was used as an efficient sorbent for solid-phase extraction (SPE) of Sudan dyes II, III, and Red 7B from saffron and urine, followed by their determination by HPLC. The optimum conditions for SPE were achieved using 7 mL methanol/water (1:9, v/v, pH 7) as the washing solvent and 3 mL tetrahydrofuran for elution. Good clean-up and high (above 90%) recoveries were observed for all the analytes. The optimized mobile phase composition for HPLC analysis of these compounds was methanol-water (70:30, v/v). The SPE parameters, such as the maximum loading capacity and breakthrough volume, were also determined for each analyte. The limits of detection (LODs), limits of quantification (LOQs), linear ranges, and recoveries for the analytes were 4.6-6.6 μg/L, 13.0-19.8 μg/L, 13.0-5000 μg/L (r^2 > 0.99), and 92.5%-113.4%, respectively. The precisions (RSDs) of the overall analytical procedure, estimated by five replicate measurements for Sudan II, III, and Red 7B in saffron and urine samples, were 2.3%, 1.8%, and 3.6%, respectively. The developed method is simple and was successfully applied to the determination of Sudan dyes in saffron and urine samples by HPLC coupled with UV detection.

  11. Analytic H I-to-H2 Photodissociation Transition Profiles

    NASA Astrophysics Data System (ADS)

    Bialy, Shmuel; Sternberg, Amiel

    2016-05-01

    We present a simple analytic procedure for generating atomic (H I) to molecular (H2) density profiles for optically thick hydrogen gas clouds illuminated by far-ultraviolet radiation fields. Our procedure is based on the analytic theory for the structure of one-dimensional H I/H2 photon-dominated regions, presented by Sternberg et al. Depth-dependent atomic and molecular density fractions may be computed for arbitrary gas density, far-ultraviolet field intensity, and the metallicity-dependent H2 formation rate coefficient and dust absorption cross section in the Lyman-Werner photodissociation band. We use our procedure to generate a set of H I-to-H2 transition profiles for a wide range of conditions, from the weak- to strong-field limits, and from super-solar down to low metallicities. We show that if presented as functions of dust optical depth, the H I and H2 density profiles depend primarily on the Sternberg "αG parameter" (dimensionless) that determines the dust optical depth associated with the total photodissociated H I column. We derive a universal analytic formula for the H I-to-H2 transition points as a function of just αG. Our formula will be useful for interpreting emission-line observations of H I/H2 interfaces, for estimating star formation thresholds, and for sub-grid components in hydrodynamics simulations.

  12. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    PubMed

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that, contrary to prediction, strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures.

  13. The Role of Nanoparticle Design in Determining Analytical Performance of Lateral Flow Immunoassays.

    PubMed

    Zhan, Li; Guo, Shuang-Zhuang; Song, Fayi; Gong, Yan; Xu, Feng; Boulware, David R; McAlpine, Michael C; Chan, Warren C W; Bischof, John C

    2017-12-13

    Rapid, simple, and cost-effective diagnostics are needed to improve healthcare at the point of care (POC). However, the most widely used POC diagnostic, the lateral flow immunoassay (LFA), is ~1000 times less sensitive and has a smaller analytical range than laboratory tests, requiring a confirmatory test to establish truly negative results. Here, a rational and systematic strategy is used to design the LFA contrast label (i.e., gold nanoparticles) to improve the analytical sensitivity, analytical detection range, and antigen quantification of LFAs. Specifically, we discovered that the size (30, 60, or 100 nm) of the gold nanoparticles is a main contributor to the LFA analytical performance through both the degree of receptor interaction and the ultimate visual or thermal contrast signals. Using the optimal LFA design, we demonstrated the ability to improve the analytical sensitivity by 256-fold and expand the analytical detection range from 3 log10 to 6 log10 for diagnosing patients with inflammatory conditions by measuring C-reactive protein. This work demonstrates that, with appropriate design of the contrast label, a simple and commonly used diagnostic technology can compete with more expensive state-of-the-art laboratory tests.

  14. Tungsten devices in analytical atomic spectrometry

    NASA Astrophysics Data System (ADS)

    Hou, Xiandeng; Jones, Bradley T.

    2002-04-01

    Tungsten devices have been employed in analytical atomic spectrometry for approximately 30 years. Most of these atomizers can be electrically heated up to 3000 °C at very high heating rates, with a simple power supply. Usually, a tungsten device is employed in one of two modes: as an electrothermal atomizer with which the sample vapor is probed directly, or as an electrothermal vaporizer, which produces a sample aerosol that is then carried to a separate atomizer for analysis. Tungsten devices may take various physical shapes: tubes, cups, boats, ribbons, wires, filaments, coils and loops. Most of these orientations have been applied to many analytical techniques, such as atomic absorption spectrometry, atomic emission spectrometry, atomic fluorescence spectrometry, laser excited atomic fluorescence spectrometry, metastable transfer emission spectroscopy, inductively coupled plasma optical emission spectrometry, inductively coupled plasma mass spectrometry and microwave plasma atomic spectrometry. The analytical figures of merit and the practical applications reported for these techniques are reviewed. Atomization mechanisms reported for tungsten atomizers are also briefly summarized. In addition, less common applications of tungsten devices are discussed, including analyte preconcentration by adsorption or electrodeposition and electrothermal separation of analytes prior to analysis. Tungsten atomization devices continue to provide simple, versatile alternatives for analytical atomic spectrometry.

  15. Forgetfulness can help you win games.

    PubMed

    Burridge, James; Gao, Yu; Mao, Yong

    2015-09-01

    We present a simple game model where agents with different memory lengths compete for finite resources. We show by simulation and analytically that an instability exists at a critical memory length, and as a result, different memory lengths can compete and coexist in a dynamical equilibrium. Our analytical formulation makes a connection to statistical urn models, and we show that temperature is mirrored by the agent's memory. Our simple model of memory may be incorporated into other game models with implications that we briefly discuss.

  16. Simple functionalization method for single conical pores with a polydopamine layer

    NASA Astrophysics Data System (ADS)

    Horiguchi, Yukichi; Goda, Tatsuro; Miyahara, Yuji

    2018-04-01

    Resistive pulse sensing (RPS) is an interesting analytical system in which micro- to nanosized pores are used to evaluate particles or small analytes. Recently, molecular immobilization techniques to improve the performance of RPS have been reported. The problem in functionalization for RPS is that molecular immobilization by chemical reaction is restricted by the pore material type. Herein, a simple functionalization is performed using mussel-inspired polydopamine as an intermediate layer to connect the pore material with functional molecules.

  17. Adaptive behaviors in multi-agent source localization using passive sensing.

    PubMed

    Shaukat, Mansoor; Chitre, Mandar

    2016-12-01

    In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. An agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient in localizing the source. Source localization is achieved as an emergent property through the agents' adaptive interactions with their neighbors and the environment. Since a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based-adaptation and connectivity-based-adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated against the agents' sensor and actuator noise, strong multi-path interference due to environment variability, initialization distance sensitivity, and loss of source signal.

  18. Electric field tomography for contactless imaging of resistivity in biomedical applications.

    PubMed

    Korjenevsky, A V

    2004-02-01

    The technique of contactless imaging of resistivity distribution inside conductive objects, which can be applied in medical diagnostics, has been suggested and analyzed. The method exploits the interaction of a high-frequency electric field with a conductive medium. Unlike electrical impedance tomography, no electric current is injected into the medium from outside. The interaction is accompanied by excitation of high-frequency currents and redistribution of free charges inside the medium, leading to strong and irregular perturbation of the field's magnitude outside and inside the object. Along with this, the considered interaction also leads to small and regular phase shifts of the field in the area surrounding the object. Measuring these phase shifts using a set of electrodes placed around the object enables us to reconstruct the internal structure of the medium. The basics of this technique, which we name electric field tomography (EFT), are described; simple analytical estimates are made, and requirements for the measuring equipment are formulated. The realizability of the technique is verified by numerical simulations based on the finite element method. The results of the simulations confirm the initial estimates and show that, in the case of EFT, even a comparatively simple filtered backprojection algorithm can be used to reconstruct the static resistivity distribution in biological tissues.

  19. Describing Site Amplification for Surface Waves in Realistic Basins

    NASA Astrophysics Data System (ADS)

    Bowden, D. C.; Tsai, V. C.

    2017-12-01

    Standard characterizations of site response assume a vertically incident shear wave; given a 1D velocity profile, amplification and resonances can be calculated based on conservation of energy. A similar approach can be applied to surface waves, resulting in an estimate of amplification relative to a hard-rock site that differs in both the amount of amplification and the frequencies at which it occurs. This prediction of surface-wave site amplification has been well validated through simple simulations, and in this presentation we explore the extent to which a 1D profile can explain observed amplifications in more realistic scenarios. Comparisons of various simple 2D and 3D simulations, for example, allow us to explore the effect of different basin shapes and the relative importance of effects such as focusing, conversion of wave types, and lateral surface-wave resonances. Additionally, the 1D estimates for vertically incident shear waves and for surface waves are compared to spectral ratios of historic events in deep sedimentary basins to demonstrate the appropriateness of the two different predictions. The difference in amplification responses between the wave types implies that a single measurement of site response, whether analytically calculated from 1D models or empirically observed, is insufficient for regions where surface waves play a strong role.

  20. Simple models of the hydrofracture process

    NASA Astrophysics Data System (ADS)

    Marder, M.; Chen, Chih-Hung; Patzek, T.

    2015-12-01

    Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. Second, we develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.

  1. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    PubMed

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students.
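
    For reference, a compact sketch of the pooled estimate the abstract describes: the DerSimonian-Laird τ² estimator combined with a Knapp-Hartung adjusted, t-based confidence interval. The formulas are the standard ones; the toy effect sizes and variances are made up.

    ```python
    import numpy as np
    from scipy import stats

    def dl_kh_meta(y, v, alpha=0.05):
        """Random-effects pooled estimate: DerSimonian-Laird tau^2 with the
        Knapp-Hartung adjusted confidence interval.

        y : per-study effect sizes
        v : per-study sampling variances
        """
        y, v = np.asarray(y, float), np.asarray(v, float)
        k = y.size
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)                 # DL estimator
        w_star = 1.0 / (v + tau2)
        mu = np.sum(w_star * y) / np.sum(w_star)           # pooled effect
        # Knapp-Hartung variance and t-based interval
        var_kh = np.sum(w_star * (y - mu) ** 2) / ((k - 1) * np.sum(w_star))
        half = stats.t.ppf(1 - alpha / 2, k - 1) * np.sqrt(var_kh)
        return mu, tau2, (mu - half, mu + half)

    # Toy data: five studies with invented effect sizes and variances
    mu, tau2, ci = dl_kh_meta([0.3, 0.5, 0.1, 0.4, 0.6],
                              [0.04, 0.02, 0.05, 0.03, 0.06])
    print(f"pooled = {mu:.3f}, tau^2 = {tau2:.3f}, "
          f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
    ```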

  2. Numerical Simulations of Mechanical Erosion from below by Creep on Rate-State Faults

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Rubin, A. M.

    2012-04-01

    The aim of this study is to increase our understanding of how earthquakes nucleate on frictionally-locked fault patches that are loaded by the growing stress concentrations at their boundaries due to aseismic creep. Such mechanical erosion from below of locked patches has previously been invoked by Gillard et al. (1996) to explain accelerating seismicity and increases in maximum earthquake magnitude on a strike-slip streak (a narrow ribbon of tightly clustered seismicity) in Kilauea's East rift, and it might also play a role in the loading of major locked strike-slip faults by creep from below the seismogenic zone. Gillard et al. (1996) provided simple analytical estimates of the size of and moment release within the eroding edge of the locked zone that matched the observed seismicity in Kilauea's East rift. However, an obvious, similar signal has not consistently been found before major strike-slip earthquakes. Here, we use simulations to determine to what extent the simple estimates by Gillard et al. survive a wider range of geometric configurations and slip histories. The boundary between the locked and creeping sections at the base of the seismogenic zone is modeled as a gradual, continuous transition between steady-state velocity-strengthening at greater depth to velocity-weakening surroundings at shallow depth, qualitatively consistent with laboratory estimates of the temperature dependence of (a-b). The goal is to expand the range of possible outcomes to broaden our range of expectations for the behavior of the eroding edge of the locked zones.

  3. Experimental Validation of Lightning-Induced Electromagnetic (Indirect) Coupling to Short Monopole Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crull, E W; Brown Jr., C G; Perkins, M P

    2008-07-30

    For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions for the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside of their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analyticalmore » models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model using laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside of their region of validity. First, the antenna is insulated and not a bare wire and there are perhaps fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5-inch (refer to Figure 5).« less

  4. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons.

    PubMed

    Bernardi, Davide; Lindner, Benjamin

    2017-06-30

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  5. Structure of the screening layer near a plane isolated body in the deep vacuum. Part 2. Monoenergetic isotropic flow

    NASA Astrophysics Data System (ADS)

    Gunko, Yuri F.; Gunko, Natalia A.

    2018-05-01

    In this paper we consider the problem of determining the structure of the electric field near the surface of a flat insulated body under conditions of a deep vacuum. It is assumed that the emitted particles are electrons leaving the body surface under the influence of ionizing radiation whose velocities distribution near the surface is isotropic. It is estimated the thickness of the screening layer under conditions of stationary emission from a flat surface. The solutio of the problem of determining a stationary self-consistent electric field near the surface is found in a simple analytical form. The thickness of the screening layer is calculated from this formula.

  6. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    NASA Astrophysics Data System (ADS)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  7. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    PubMed

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications.
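
    The sigma-metric mentioned is conventionally computed as sigma = (TEa - |bias|) / CV at a medical decision concentration, with all quantities in percent; a one-line sketch with illustrative numbers (the quality requirement below is an assumed example, not from the paper):

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma-metric at a medical decision concentration:
        sigma = (allowable total error - |bias|) / CV, all in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Example: an assumed 6% total-error requirement, 1% bias, 1.5% CV
    print(f"sigma = {sigma_metric(tea_pct=6.0, bias_pct=1.0, cv_pct=1.5):.1f}")
    ```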

  8. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
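
    A minimal sketch of the model's two steps as described: convert an assumed share of the detonation energy into a launch speed via the kinetic-energy balance, then integrate the particle trajectory under quadratic drag and gravity (buoyancy omitted). All parameter values and names are invented for illustration.

    ```python
    import numpy as np

    m, e_kinetic = 0.005, 10.0              # particle mass [kg], energy share [J]
    rho_air, cd, radius = 1.2, 0.47, 0.01   # air density, sphere drag coeff., size [m]
    area = np.pi * radius ** 2
    k = 0.5 * rho_air * cd * area / m       # drag acceleration coefficient [1/m]

    angle = 0.8                             # launch angle [rad], arbitrary
    v = np.sqrt(2.0 * e_kinetic / m) * np.array([np.cos(angle), np.sin(angle)])
    x = np.zeros(2)
    g = np.array([0.0, -9.81])

    dt, t = 1e-3, 0.0
    while x[1] >= 0.0:                      # integrate until fallback to the ground
        a = g - k * np.linalg.norm(v) * v   # quadratic drag opposes the motion
        v += a * dt
        x += v * dt
        t += dt
    print(f"range ~ {x[0]:.1f} m after {t:.2f} s of flight")
    ```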

  9. Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow.

    PubMed

    Bette, Henrik M; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael

    2017-01-01

    We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.
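
    Since the Nagel-Schreckenberg update rules are standard (accelerate, brake to the headway gap, random slowdown with probability p, advance), the quantity P(v=0) analyzed here is straightforward to reproduce in simulation; a compact sketch, with lattice size, p, and densities chosen arbitrarily:

    ```python
    import numpy as np

    def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=None):
        """One parallel update of the Nagel-Schreckenberg cellular automaton."""
        if rng is None:
            rng = np.random.default_rng()
        order = np.argsort(pos)
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells ahead
        vel = np.minimum(vel + 1, v_max)                 # 1. acceleration
        vel = np.minimum(vel, gaps)                      # 2. braking to the gap
        vel = np.where(rng.random(vel.size) < p_slow,
                       np.maximum(vel - 1, 0), vel)      # 3. random slowdown
        pos = (pos + vel) % road_len                     # 4. movement
        return pos, vel

    # Fraction of stopped cars P(v=0) versus density, as analyzed in the paper
    rng = np.random.default_rng(1)
    road_len = 1000
    for density in (0.05, 0.10, 0.20, 0.40):
        n = int(density * road_len)
        pos = rng.choice(road_len, size=n, replace=False)
        vel = np.zeros(n, dtype=int)
        stopped = []
        for step in range(2000):
            pos, vel = nasch_step(pos, vel, road_len, rng=rng)
            if step >= 1000:                             # discard the transient
                stopped.append(np.mean(vel == 0))
        print(f"density = {density:.2f}: P(v=0) = {np.mean(stopped):.3f}")
    ```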

  10. Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow

    NASA Astrophysics Data System (ADS)

    Bette, Henrik M.; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael

    2017-01-01

    We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.

  11. Evolution of complex dynamics

    NASA Astrophysics Data System (ADS)

    Wilds, Roy; Kauffman, Stuart A.; Glass, Leon

    2008-09-01

    We study the evolution of complex dynamics in a model of a genetic regulatory network. The fitness is associated with the topological entropy in a class of piecewise linear equations, and the mutations are associated with changes in the logical structure of the network. We compare hill climbing evolution, in which only mutations that increase the fitness are allowed, with neutral evolution, in which mutations that leave the fitness unchanged are allowed. The simple structure of the fitness landscape enables us to estimate analytically the rates of hill climbing and neutral evolution. In this model, allowing neutral mutations accelerates the rate of evolutionary advancement for low mutation frequencies. These results are applicable to evolution in natural and technological systems.

  12. Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach

    NASA Astrophysics Data System (ADS)

    Reznichenko, A. V.; Terekhov, I. S.

    2018-04-01

    We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimates at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the earlier calculated capacity for a nondispersive nonlinear optical fiber channel in the intermediate power region. Also, for the small-dispersion case we find analytical expressions for simple correlators of the output signals in our noisy channel.

  13. Statistical methods for astronomical data with upper limits. I - Univariate distributions

    NASA Technical Reports Server (NTRS)

    Feigelson, E. D.; Nelson, P. I.

    1985-01-01

    The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
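
    For readers wanting to reproduce the estimator under discussion, here is a minimal sketch of the Kaplan-Meier product-limit estimate, S(t) = Π over t_i ≤ t of (1 − d_i/n_i). The sample data are invented, ties are handled one at a time, and the reflection trick needed to turn astronomical upper limits into the right-censored data assumed by the standard formalism is left out for brevity.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit estimate S(t) = prod_{t_i <= t} (1 - d_i / n_i).

    times    : measurement values
    observed : True if detected, False if censored (a limit)
    Ties are handled one point at a time in this sketch.
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    n = len(times)
    surv, s = [], 1.0
    for i, (t, obs) in enumerate(zip(times, observed)):
        at_risk = n - i                 # n_i: points with value >= t
        if obs:
            s *= 1.0 - 1.0 / at_risk    # d_i = 1 detection at this point
        surv.append((t, s))
    return surv

# invented example: detections (True) and censored points (False) mixed
data = [(1.2, True), (1.9, False), (2.4, True), (3.1, True), (4.0, False)]
for t, s in kaplan_meier(*zip(*data)):
    print(f"t={t:.1f}  S(t)={s:.3f}")
```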

  14. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of drag from the surrounding air, gravity, and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903

  15. [Developments in preparation and experimental method of solid phase microextraction fibers].

    PubMed

    Yi, Xu; Fu, Yujie

    2004-09-01

    Solid phase microextraction (SPME) is a simple and effective adsorption and desorption technique, which concentrates volatile or nonvolatile compounds from liquid samples or the headspace of samples. SPME is compatible with analyte separation and detection by gas chromatography, high performance liquid chromatography, and other instrumental methods. It offers many advantages, such as a wide linear range, low solvent and sample consumption, short analysis times, low detection limits, and simple apparatus. The theory of SPME is introduced, covering both equilibrium and non-equilibrium theory. Novel developments in fiber preparation methods and related experimental techniques are discussed. In addition to commercial fiber preparation, newly developed fabrication techniques, such as sol-gel, electrodeposition, carbon-based adsorption, and high-temperature epoxy immobilization, are presented. Effects of extraction modes, selection of fiber coating, optimization of operating conditions, method sensitivity and precision, and systematic automation are taken into consideration in the analytical process of SPME. A brief outlook on SPME is given in closing.

  16. Polyelectrolyte mediated nano hybrid particle as a nano-sensor with outstandingly amplified specificity and sensitivity for enzyme free estimation of cholesterol.

    PubMed

    Chebl, Mazhar; Moussa, Zeinab; Peurla, Markus; Patra, Digambara

    2017-07-01

    As a proof of concept, it is established here that curcumin-integrated chitosan oligosaccharide lactate (COL) self-assembles on a silica nanoparticle surface to form nano hybrid particles (NHPs). These NHPs have sizes in the range of 25-35 nm, with a silica nanoparticle core and a curcumin-COL outer layer 4-8 nm thick. The fluorescence intensity of these NHPs is quenched and the emission maximum is ~50 nm red-shifted compared to free curcumin, implying an inner filter effect and/or homo-FRET between curcumin molecules present on the surface of an individual nano hybrid particle. Although the fluorescence of free curcumin is remarkably quenched by Hg²⁺/Cu²⁺ ions due to chelation through the keto-enol form, the fluorescence of NHPs is unaffected by Hg²⁺/Cu²⁺ ions, which boosts analytical selectivity. The fluorescence intensity is strongly enhanced in the presence of cholesterol but is not influenced by ascorbic acid, uric acid, glucose, albumin, lipid, and other potential interfering substances that either obstruct the enzymatic reaction or affect the fluorescence of free curcumin. Thus, NHPs markedly improve analytical specificity, selectivity, and sensitivity for cholesterol estimation compared to free curcumin. The interaction between cholesterol and NHPs is found to be a combination of ground state electrostatic interaction through the free hydroxyl group of cholesterol, hydrophobic interaction between NHPs and cholesterol, and excited state interaction. The proposed cholesterol biosensor exhibits a wide linear dynamic range, 0.002-10 mmol L⁻¹ (the upper limit is set by the solubility of cholesterol), suitable for biomedical application and better than values reported for enzymatic methods. In addition, the NHPs are found to be photostable, potentially making them suitable for simple, quick, and cost-effective cholesterol estimation and opening an alternative, non-enzymatic approach that uses a nano hybrid structure to tune the analytical specificity, selectivity, and sensitivity of a probe molecule. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Vitamin K

    USDA-ARS?s Scientific Manuscript database

    A wide range of analytical techniques are available for the detection, quantitation, and evaluation of vitamin K in foods. The methods vary from simple to complex depending on extraction, separation, identification and detection of the analyte. Among the extraction methods applied for vitamin K anal...

  18. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
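
    A minimal sketch of the calculation described here, assuming the usual four-parameter logistic (4PL) standard curve y = d + (a − d)/(1 + (x/c)^b): concentration is recovered by inverting the curve, and its coefficient of variation is propagated from the response uncertainty by a first-order (delta-method) approximation. The parameter values and the response uncertainty are illustrative, not taken from the paper.

```python
import numpy as np

def logistic4(x, a, b, c, d):
    """4PL standard curve: response y as a function of concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse4(y, a, b, c, d):
    """Concentration x recovered from response y (requires y between d and a)."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def conc_cv(y, sigma_y, a, b, c, d, eps=1e-6):
    """Delta-method CV of x: |dx/dy| * sigma_y / x, via numerical derivative."""
    x = inverse4(y, a, b, c, d)
    dxdy = (inverse4(y + eps, a, b, c, d) - inverse4(y - eps, a, b, c, d)) / (2 * eps)
    return abs(dxdy) * sigma_y / x

# illustrative parameters: asymptotes a and d, steepness b, mid-point c
a, b, c, d = 2.0, 1.3, 50.0, 0.05
for y in (0.2, 0.5, 1.0, 1.5, 1.9):
    x = inverse4(y, a, b, c, d)
    # note how the CV blows up toward the asymptotes, as the abstract says
    print(f"y={y:.2f}  x={x:8.2f}  CV={conc_cv(y, 0.02, a, b, c, d):6.3f}")
```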

  19. A simple framework for relating variations in runoff to variations in climatic conditions and catchment properties

    NASA Astrophysics Data System (ADS)

    Roderick, Michael L.; Farquhar, Graham D.

    2011-12-01

    We use the Budyko framework to calculate catchment-scale evapotranspiration (E) and runoff (Q) as a function of two climatic factors, precipitation (P) and evaporative demand (Eo = 0.75 times the pan evaporation rate), and a third parameter that encodes the catchment properties (n) and modifies how P is partitioned between E and Q. This simple theory accurately predicted the long-term evapotranspiration (E) and runoff (Q) for the Murray-Darling Basin (MDB) in southeast Australia. We extend the theory by developing a simple and novel analytical expression for the effects on E and Q of small perturbations in P, Eo, and n. The theory predicts that a 10% change in P, with all else constant, would result in a 26% change in Q in the MDB. Future climate scenarios (2070-2099) derived using Intergovernmental Panel on Climate Change AR4 climate model output highlight the diversity of projections for P (±30%) with a correspondingly large range in projections for Q (±80%) in the MDB. We conclude with a qualitative description about the impact of changes in catchment properties on water availability and focus on the interaction between vegetation change, increasing atmospheric [CO2], and fire frequency. We conclude that the modern version of the Budyko framework is a useful tool for making simple and transparent estimates of changes in water availability.
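
    The quoted sensitivity (a 10% change in P giving roughly a 26% change in Q) can be checked numerically. The sketch below uses Fu's form of the Budyko curve, E = P + E0 − (P^n + E0^n)^(1/n), hence Q = (P^n + E0^n)^(1/n) − E0, and differentiates Q with respect to P; the P, E0 and n values are illustrative stand-ins, not the calibrated MDB values.

```python
import numpy as np

def runoff(P, E0, n):
    """Fu/Budyko partitioning: Q = (P^n + E0^n)^(1/n) - E0."""
    return (P**n + E0**n) ** (1.0 / n) - E0

def elasticity(P, E0, n, h=1e-3):
    """Relative change in Q per relative change in P, by central difference."""
    Q = runoff(P, E0, n)
    dQdP = (runoff(P * (1 + h), E0, n) - runoff(P * (1 - h), E0, n)) / (2 * h * P)
    return dQdP * P / Q

P, E0, n = 460.0, 1100.0, 2.5   # mm/yr scale, illustrative stand-ins only
eps = elasticity(P, E0, n)
print(f"elasticity ~ {eps:.2f}: a 10% change in P gives a ~{10 * eps:.0f}% change in Q")
```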

  20. Development of an analytical solution for the Budyko watershed parameter in terms of catchment physical features

    NASA Astrophysics Data System (ADS)

    Reaver, N.; Kaplan, D. A.; Jawitz, J. W.

    2017-12-01

    The Budyko hypothesis states that a catchment's long-term water and energy balances are dependent on two relatively easy to measure quantities: rainfall depth and potential evaporation. This hypothesis is expressed as a simple function, the Budyko equation, which allows for the prediction of a catchment's actual evapotranspiration and discharge from measured rainfall depth and potential evaporation, data which are widely available. However, the two main analytically derived forms of the Budyko equation contain a single unknown watershed parameter, whose value varies across catchments; variation in this parameter has been used to explain the hydrological behavior of different catchments. The watershed parameter is generally thought of as a lumped quantity that represents the influence of all catchment biophysical features (e.g. soil type and depth, vegetation type, timing of rainfall, etc). Previous work has shown that the parameter is statistically correlated with catchment properties, but an explicit expression has been elusive. While the watershed parameter can be determined empirically by fitting the Budyko equation to measured data in gauged catchments where actual evapotranspiration can be estimated, this limits the utility of the framework for predicting impacts to catchment hydrology due to changing climate and land use. In this study, we developed an analytical solution for the lumped catchment parameter for both forms of the Budyko equation. We combined these solutions with a statistical soil moisture model to obtain analytical solutions for the Budyko equation parameter as a function of measurable catchment physical features, including rooting depth, soil porosity, and soil wilting point. We tested the predictive power of these solutions using the U.S. catchments in the MOPEX database. We also compared the Budyko equation parameter estimates generated from our analytical solutions (i.e. predicted parameters) with those obtained through the calibration of the Budyko equation to discharge data (i.e. empirical parameters), and found good agreement. These results suggest that it is possible to predict the Budyko equation watershed parameter directly from physical features, even for ungauged catchments.

  1. Robotic voltammetry with carbon nanotube-based sensors: a superb blend for convenient high-quality antimicrobial trace analysis.

    PubMed

    Theanponkrang, Somjai; Suginta, Wipa; Weingart, Helge; Winterhalter, Mathias; Schulte, Albert

    2015-01-01

    A new automated pharmacoanalytical technique for convenient quantification of redox-active antibiotics has been established by combining the benefits of a carbon nanotube (CNT) sensor modification, with electrocatalytic activity for analyte detection, with the merits of a robotic electrochemical device capable of sequential nonmanual sample measurements in 24-well microtiter plates. Norfloxacin (NFX) and ciprofloxacin (CFX), two standard fluoroquinolone antibiotics, were used in automated calibration measurements by differential pulse voltammetry (DPV), and linear ranges of 1-10 μM and 2-100 μM were achieved for NFX and CFX, respectively. The lowest detectable levels were estimated to be 0.3±0.1 μM (n=7) for NFX and 1.6±0.1 μM (n=7) for CFX. In standard solutions or tablet samples of known content, both analytes could be quantified with the robotic DPV microtiter plate assay, with recoveries within ±4% of 100%. Recoveries were equally good when NFX was evaluated in human serum samples spiked with NFX. The simple instrumentation, convenience in execution, and high effectiveness in analyte quantitation recommend the combination of automated microtiter plate voltammetry and CNT-supported electrochemical drug detection as a novel methodology for antibiotic testing in pharmaceutical and clinical research and in quality control laboratories.
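
    The calibration arithmetic behind figures of merit like these can be sketched in a few lines. The data points below are invented, and the 3·s/slope detection-limit criterion is a common convention assumed here, not necessarily the authors' exact procedure.

```python
import numpy as np

# invented calibration data: concentration (uM) vs DPV peak current (uA)
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
peak = np.array([0.21, 0.39, 0.83, 1.20, 1.58, 2.02])

slope, intercept = np.polyfit(conc, peak, 1)
residuals = peak - (slope * conc + intercept)
s_res = residuals.std(ddof=2)          # residual standard deviation of the fit

lod = 3.0 * s_res / slope              # assumed 3-sigma detection-limit criterion
print(f"slope={slope:.3f} uA/uM  intercept={intercept:.3f} uA  LOD~{lod:.2f} uM")

# quantify an unknown sample from its measured peak current
unknown_peak = 0.95
print(f"estimated concentration: {(unknown_peak - intercept) / slope:.2f} uM")
```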

  2. Monitoring of Cr, Cu, Pb, V and Zn in polluted soils by laser induced breakdown spectroscopy (LIBS).

    PubMed

    Dell'Aglio, Marcella; Gaudiuso, Rosalba; Senesi, Giorgio S; De Giacomo, Alessandro; Zaccone, Claudio; Miano, Teodoro M; De Pascale, Olga

    2011-05-01

    Laser Induced Breakdown Spectroscopy (LIBS) is a fast and multi-elemental analytical technique particularly suitable for the qualitative and quantitative analysis of heavy metals in solid samples, including environmental ones. Although LIBS is often described in the literature as a well-established analytical technique, reported results on the quantitative analysis of elements in chemically complex matrices such as soils are quite contradictory. In this work, soil samples of various origins were analyzed by LIBS and the data compared to those obtained by Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). The emission intensities of one selected line for each of the five analytes (i.e., Cr, Cu, Pb, V, and Zn) were normalized to the background signal and plotted as a function of the concentration values previously determined by ICP-OES. The data showed good linearity for all calibration lines drawn, and the correlation between ICP-OES and LIBS was confirmed by the satisfactory agreement between the corresponding values. Consequently, the LIBS method can be used at least for metal monitoring in soils. In this respect, a simple method for estimating the degree of soil pollution by heavy metals, based on the determination of an anthropogenic index, was proposed and applied to Cr and Zn.

  3. Heuristic extraction of rules in pruned artificial neural networks models used for quantifying highly overlapping chromatographic peaks.

    PubMed

    Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva

    2004-01-01

    The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs), pruned by a regularization method and with architectures designed by evolutionary computation, for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as the detection system for chromatographic analysis. Straightforward network topologies (one- and two-output models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1, with average standard errors of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after applying heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to obtain quantitative analytical information on the effect of both analytes on the characteristics of the chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
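
    As a sketch of the input-generation step, the code below fits a four-parameter Weibull-shaped band profile to a noisy synthetic peak with the Levenberg-Marquardt algorithm (scipy's implementation). The particular parameterization (height H, onset t0, scale b, shape k) is an assumption for illustration; the abstract does not spell out the exact form used.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_peak(t, H, t0, b, k):
    """Assumed four-parameter Weibull-shaped band: zero before onset t0."""
    u = (t - t0) / b
    out = np.zeros_like(u)
    m = u > 0
    out[m] = H * u[m] ** (k - 1.0) * np.exp(-u[m] ** k)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
true = (2.0, 1.5, 2.0, 1.8)                      # H, t0, b, k (invented)
y = weibull_peak(t, *true) + rng.normal(0, 0.02, t.size)

popt, pcov = curve_fit(weibull_peak, t, y, p0=(1.0, 1.0, 1.0, 1.5),
                       method="lm", maxfev=10000)
print("fitted H, t0, b, k:", np.round(popt, 3))
```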

  4. Stress Analysis of Beams with Shear Deformation of the Flanges

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul

    1937-01-01

    This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.

  5. Laser welding of polymers: phenomenological model for a quick and reliable process quality estimation considering beam shape influences

    NASA Astrophysics Data System (ADS)

    Timpe, Nathalie F.; Stuch, Julia; Scholl, Marcus; Russek, Ulrich A.

    2016-03-01

    This contribution presents a phenomenological, analytical model for laser welding of polymers which is suited for a quick process quality estimation by the practitioner. Besides material properties of the polymer and processing parameters like welding pressure, feed rate, and laser power, the model is based on a simple few-parameter description of the size and shape of the laser power density distribution (PDD) in the processing zone. The model allows an estimation of the weld seam tensile strength. It is based on energy balance considerations within a thin sheet with the thickness of the optical penetration depth on the surface of the absorbing welding partner. The joining process itself is modelled by a phenomenological approach. The model reproduces the experimentally known process windows for the main process parameters correctly. Using the parameters describing the shape of the laser PDD, the critical dependence of the process windows on the PDD shape is predicted and compared with experiments. The adaption of the model to other laser manufacturing processes, where the PDD influence can be modelled comparably, is also discussed.

  6. Joint Bayesian Component Separation and CMB Power Spectrum Estimation

    NASA Technical Reports Server (NTRS)

    Eriksen, H. K.; Jewell, J. B.; Dickinson, C.; Banday, A. J.; Gorski, K. M.; Lawrence, C. R.

    2008-01-01

    We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are (1) conditional sampling of foreground spectral parameters and (2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution and, therefore, all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multiresolution observations. To verify the method, we analyze simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3 yr WMAP data, downgraded to a common resolution of 3 deg FWHM. The results from the actual 3 yr WMAP temperature analysis are presented in a companion Letter.

  7. Meta-analysis of multiple outcomes: a multilevel approach.

    PubMed

    Van den Noortgate, Wim; López-López, José Antonio; Marín-Martínez, Fulgencio; Sánchez-Meca, Julio

    2015-12-01

    In meta-analysis, dependent effect sizes are very common. An example is where in one or more studies the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca Behavior Research Methods, 45, 576-594 (2013) by allowing for a variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample size of the studies. At the same time, we explore the performance of the approach if the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.

  8. A Backward-Lagrangian-Stochastic Footprint Model for the Urban Environment

    NASA Astrophysics Data System (ADS)

    Wang, Chenghao; Wang, Zhi-Hua; Yang, Jiachuan; Li, Qi

    2018-02-01

    Built terrains, with their complexity in morphology, high heterogeneity, and anthropogenic impact, impose substantial challenges in Earth-system modelling. In particular, estimation of the source areas and footprints of atmospheric measurements in cities requires realistic representation of the landscape characteristics and flow physics in urban areas, but has hitherto been heavily reliant on large-eddy simulations. In this study, we developed physical parametrization schemes for estimating urban footprints based on the backward-Lagrangian-stochastic algorithm, with the built environment represented by street canyons. The vertical profile of mean streamwise velocity is parametrized for the urban canopy and boundary layer. Flux footprints estimated by the proposed model show reasonable agreement with analytical predictions over flat surfaces without roughness elements, and with experimental observations over sparse plant canopies. Furthermore, comparisons of canyon flow and turbulence profiles and the subsequent footprints were made between the proposed model and large-eddy simulation data. The results suggest that the parametrized canyon wind and turbulence statistics, based on the simple similarity theory used, need to be further improved to yield more realistic urban footprint modelling.

  9. A new multistage groundwater transport inverse method: presentation, evaluation, and implications

    USGS Publications Warehouse

    Anderman, Evan R.; Hill, Mary C.

    1999-01-01

    More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.

  10. Analytical estimation of annual runoff distribution in ungauged seasonally dry basins based on a first order Taylor expansion of the Fu's equation

    NASA Astrophysics Data System (ADS)

    Caracciolo, D.; Deidda, R.; Viola, F.

    2017-11-01

    The assessment of the mean annual runoff and its interannual variability in a basin is the first and fundamental task for several activities related to water resources management and water quality analysis. The scarcity of observed runoff data is a common problem worldwide, so that runoff estimation in ungauged basins is still an open question. In this context, the main aim of this work is to propose and test a simple tool able to estimate the probability distribution of annual surface runoff in ungauged river basins in arid and semi-arid areas, using a simplified Fu parameterization of the Budyko curve at regional scale. Starting from a method recently developed to derive the distribution of annual runoff under the assumption of negligible inter-annual change in basin water storage, we here generalize the application to any catchment where the parameter of Fu's curve is known. Specifically, we provide a closed-form expression of the annual runoff distribution as a function of the mean and standard deviation of annual rainfall and potential evapotranspiration, and the Fu parameter. The proposed method is based on a first-order Taylor expansion of Fu's equation and allows the probability density function of annual runoff to be calculated in seasonally dry arid and semi-arid contexts around the world, taking advantage of simple, easy-to-find climatic data and the many studies estimating Fu's parameter worldwide. The computational simplicity of the proposed tool makes it valuable for water resources assessment by practitioners, regional agencies, and authorities.
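
    The core of the expansion can be written down in two lines. Under the simplifying assumption, stated here only for illustration, that annual P and E0 are independent, a first-order Taylor expansion of Q = Q(P, E0) about the climatological means gives:

```latex
% first-order expansion of annual runoff Q = Q(P, E_0) about the means
Q \;\approx\; Q(\mu_P, \mu_{E_0})
   + \left.\frac{\partial Q}{\partial P}\right|_{(\mu_P,\mu_{E_0})} (P - \mu_P)
   + \left.\frac{\partial Q}{\partial E_0}\right|_{(\mu_P,\mu_{E_0})} (E_0 - \mu_{E_0})
% hence, treating P and E_0 as independent random variables,
\mu_Q \approx Q(\mu_P, \mu_{E_0}), \qquad
\sigma_Q^2 \;\approx\;
   \left(\frac{\partial Q}{\partial P}\right)^{\!2} \sigma_P^2
   + \left(\frac{\partial Q}{\partial E_0}\right)^{\!2} \sigma_{E_0}^2
```

    To first order, the annual-runoff density thus inherits the shape of the rainfall and potential-evapotranspiration distributions with these rescaled moments.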

  11. Laboratory Experiments and Modeling of Pooled NAPL Dissolution in Porous Media

    NASA Astrophysics Data System (ADS)

    Copty, N. K.; Sarikurt, D. A.; Gokdemir, C.

    2017-12-01

    The dissolution of non-aqueous phase liquids (NAPLs) entrapped in porous media is commonly modeled at the continuum scale as the product of a chemical potential and an interphase mass transfer coefficient, the latter expressed in terms of Sherwood correlations that are related to flow and porous media properties. Because of the lack of precise estimates of the interface area separating the NAPL and aqueous phases, numerous studies have lumped the interfacial area into the interphase mass transfer coefficient. In this paper, controlled dissolution experiments from a pooled NAPL were conducted. The immobile NAPL mass is placed at the bottom of a flow cell filled with porous media, with water flowing on top. Effluent aqueous phase concentrations were measured for a wide range of aqueous phase velocities and for two types of porous media. To interpret the experimental results, a two-dimensional pore network model of the NAPL dissolution was developed. The well-defined geometry of the NAPL-water interface and the observed effluent concentrations were used to compute best-fit mass transfer coefficients and non-lumped Sherwood correlations. Comparing the concentrations predicted with the pore network model to simple, previously used one-dimensional analytic solutions indicates that the analytic model, which ignores transverse dispersion, can lead to overestimation of the mass transfer coefficient. The predicted Sherwood correlations are also compared to previously published data, and implications for NAPL remediation strategies are discussed.

  12. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  13. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    PubMed

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses, and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of the different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and the effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthén's limited-information estimator were compared with those of Mplus using full-information maximum likelihood regarding the effectiveness of different estimation methods.

  14. Non-imaging ray-tracing for sputtering simulation with apodization

    NASA Astrophysics Data System (ADS)

    Ou, Chung-Jen

    2018-04-01

    Although apodization patterns have been adopted for the analysis of sputtering sources, analytical solutions for the film thickness equations are still limited to simple conditions. Because empirical formulations for thin-film sputtering lack the flexibility to deal with multi-substrate conditions, a suitable cost-effective procedure is required to estimate the film thickness distribution. This study reports a cross-discipline simulation program, based on discrete-particle Monte Carlo methods, which has been successfully applied to a non-imaging design to solve problems associated with sputtering uniformity. The robustness of the present method is first proved by comparison with a typical analytical solution. The report then investigates the overall effects caused by the sizes of the deposited substrates, so that the target-substrate distance and the apodization index can be fully determined. This verifies the capability of the proposed method for solving sputtering film thickness problems. The benefit is that an optical thin film engineer can, using the same optical software, design a specific optical component and consider the possible coating qualities, with thickness tolerance, during the design stage.

  15. A graphical approach to electric sail mission design with radial thrust

    NASA Astrophysics Data System (ADS)

    Mengali, Giovanni; Quarta, Alessandro A.; Aliasi, Generoso

    2013-02-01

    This paper describes a semi-analytical approach to electric sail mission analysis under the assumption that the spacecraft experiences a purely radial, outward propulsive acceleration. The problem is tackled by means of the potential well concept, a very effective idea originally introduced by Prussing and Coverstone in 1998. Unlike a classical procedure that requires the numerical integration of the equations of motion, the proposed method provides an estimate of the main spacecraft trajectory parameters, such as its maximum and minimum attainable distances from the Sun, through simple analytical relationships and elementary graphs. A number of mission scenarios clearly show the effectiveness of the proposed approach. In particular, when the spacecraft parking orbit is either circular or elliptic it is possible to find the optimal performance required to reach an escape condition or a given distance from the Sun. Another example is the optimal strategy required to reach a heliocentric Keplerian orbit of prescribed orbital period. Finally, the graphical approach is applied to the preliminary design of a nodal mission towards a Near Earth Asteroid.
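
    The potential-well idea can be sketched directly. With a purely radial outward acceleration a, the orbital angular momentum h is conserved, so radial motion is governed by the effective potential W(r) = −μ/r + h²/(2r²) − a·r; starting from a circular orbit at r0, the maximum attainable distance is the first turning point W(r) = W(r0) beyond r0, and the absence of such a root means escape. Canonical units (μ = r0 = 1) and the bracketing search below are illustrative choices, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import brentq

def max_radius(a, mu=1.0, r0=1.0):
    """First turning point of radial motion started from a circular orbit at r0."""
    h2 = mu * r0                                   # circular orbit: h^2 = mu * r0
    W = lambda r: -mu / r + h2 / (2 * r**2) - a * r
    f = lambda r: W(r) - W(r0)                     # zero at turning points
    r = r0 * 1.001
    while f(r) < 0:                                # march outward to bracket a root
        r *= 1.5
        if r > 1e6 * r0:
            return np.inf                          # no turning point: escape
    return brentq(f, r0 * 1.001, r)

# the bounded/escape transition sits near a = mu / (8 r0^2) = 0.125,
# the classical constant-radial-thrust escape threshold
for a in (0.05, 0.10, 0.12, 0.13):                 # thrust acceleration, canonical units
    print(f"a={a:.2f}  r_max={max_radius(a):.3f}")
```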

  16. Fast and direct analysis of Cr, Cd and Pb in brown sugar by GF AAS.

    PubMed

    Dos Santos, Jeferson M; Quináia, Sueli P; Felsner, Maria L

    2018-09-15

    A simple and fast analytical method for the determination of Cr, Pb and Cd in brown sugar by GF AAS using slurry sampling was developed and in-house validated for the first time. Analytical curves were prepared by external standardization for Cr, and by matrix simulation for Pb and Cd, and they were linear. Low limits of quantification were found for Cr (32.8 ng g⁻¹), Pb (49.3 ng g⁻¹) and Cd (4.5 ng g⁻¹). Repeatability and intermediate precision estimates (<10% and <15%, respectively) and recovery rates (95-103%) demonstrated good precision and accuracy. The levels in brown sugar samples ranged from <32.8 to 160 ng g⁻¹ for Cr, from <49.3 to 211.0 ng g⁻¹ for Pb and from <4.5 to 7.0 ng g⁻¹ for Cd; they may be assigned to anthropogenic activities and the adoption of inadequate practices of production and processing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Concentration history during pumping from a leaky aquifer with stratified initial concentration

    USGS Publications Warehouse

    Goode, Daniel J.; Hsieh, Paul A.; Shapiro, Allen M.; Wood, Warren W.; Kraemer, Thomas F.

    1993-01-01

    Analytical and numerical solutions are employed to examine the concentration history of a dissolved substance in water pumped from a leaky aquifer. Many aquifer systems are characterized by stratification, for example, a sandy layer overlain by a clay layer. To obtain information about separate hydrogeologic units, aquifer pumping tests are often conducted with a well penetrating only one of the layers. When the initial concentration distribution is also stratified (the concentration varies with elevation only), the concentration breakthrough in the pumped well may be interpreted to provide information on aquifer hydraulic and transport properties. To facilitate this interpretation, we present some simple analytical and numerical solutions for limiting cases and illustrate their application to a fractured bedrock/glacial drift aquifer system where the solute of interest is dissolved radon gas. In addition to qualitative information on water source, this method may yield estimates of effective porosity and saturated thickness (or fracture transport aperture) from a single-hole test. Little information about dispersivity is obtained because the measured concentration is not significantly affected by dispersion in the aquifer.

  18. Mathematical model to estimate risk of calcium-containing renal stones

    NASA Technical Reports Server (NTRS)

    Pietrzyk, R. A.; Feiveson, A. H.; Whitson, P. A.

    1999-01-01

    BACKGROUND/AIMS: Astronauts exposed to microgravity during the course of spaceflight undergo physiologic changes that alter the urinary environment so as to increase the risk of renal stone formation. This study was undertaken to identify a simple method with which to evaluate the potential risk of renal stone development during spaceflight. METHOD: We used a large database of urinary risk factors obtained from 323 astronauts before and after spaceflight to generate a mathematical model with which to predict the urinary supersaturation of calcium stone forming salts. RESULT: This model, which involves the fewest possible analytical variables (urinary calcium, citrate, oxalate, phosphorus, and total volume), reliably and accurately predicted the urinary supersaturation of the calcium stone forming salts when compared to results obtained from a group of 6 astronauts who collected urine during flight. CONCLUSIONS: The use of this model will simplify both routine medical monitoring during spaceflight as well as the evaluation of countermeasures designed to minimize renal stone development. This model also can be used for Earth-based applications in which access to analytical resources is limited.

  20. Joint nonlinearity effects in the design of a flexible truss structure control system

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1986-01-01

    Nonlinear effects are introduced in the dynamics of large space truss structures by the connecting joints which are designed with rather important tolerances to facilitate the assembly of the structures in space. The purpose was to develop means to investigate the nonlinear dynamics of the structures, particularly the limit cycles that might occur when active control is applied to the structures. An analytical method was sought and derived to predict the occurrence of limit cycles and to determine their stability. This method is mainly based on the quasi-linearization of every joint using describing functions. This approach was proven successful when simple dynamical systems were tested. Its applicability to larger systems depends on the amount of computations it requires, and estimates of the computational task tend to indicate that the number of individual sources of nonlinearity should be limited. Alternate analytical approaches, which do not account for every single nonlinearity, or the simulation of a simplified model of the dynamical system should, therefore, be investigated to determine a more effective way to predict limit cycles in large dynamical systems with an important number of distributed nonlinearities.

  1. Uncertainty evaluation of nuclear reaction model parameters using integral and microscopic measurements. Covariances evaluation with CONRAD code

    NASA Astrophysics Data System (ADS)

    de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.

    2010-10-01

    In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on nuclear reaction models with adjustable parameters. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs have been requested by nuclear reactor physicists to assign proper uncertainties for use in applications. In this paper, mathematical methods developed in the CONRAD code[2] are presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and their propagation to nuclear reaction model parameters or cross sections. The marginalization procedure is presented, using analytical or Monte-Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account sufficiently early in the evaluation process to remove discrepancies. In this paper, we describe a mathematical framework to take this kind of information into account properly.

  2. Vortex Core Size in the Rotor Near-Wake

    NASA Technical Reports Server (NTRS)

    Young, Larry A.

    2003-01-01

    Using a kinetic energy conservation approach, a number of simple analytic expressions are derived for estimating the core size of tip vortices in the near-wake of rotors in hover and axial-flow flight. The influence of thrust, induced power losses, advance ratio, and vortex structure on rotor vortex core size is assessed. Experimental data from the literature are compared to the analytical results derived in this paper. In general, three conclusions can be drawn from the work in this paper. First, the greater the rotor thrust, the larger the vortex core size in the rotor near-wake. Second, the more efficient a rotor is with respect to induced power losses, the smaller the resulting vortex core size. Third, and lastly, vortex core size initially decreases for low axial-flow advance ratios, but for large advance ratios core size asymptotically increases to a nominal upper limit. Insights gained from this work should enable improved modeling of rotary-wing aerodynamics, as well as provide a framework for improved experimental investigations of rotor and advanced propeller wakes.

  3. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.

  4. Quasiclassical calculations of blackbody-radiation-induced depopulation rates and effective lifetimes of Rydberg nS, nP, and nD alkali-metal atoms with n ≤ 80

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beterov, I. I.; Ryabtsev, I. I.; Tretyakov, D. B.

    2009-05-15

    Rates of depopulation by blackbody radiation (BBR) and effective lifetimes of alkali-metal nS, nP, and nD Rydberg states have been calculated in a wide range of principal quantum numbers n ≤ 80 at ambient temperatures of 77, 300, and 600 K. Quasiclassical formulas were used to calculate the radial matrix elements of the dipole transitions from Rydberg states. Good agreement of our numerical results with the available theoretical and experimental data has been found. We have also obtained simple analytical formulas for estimates of effective lifetimes and BBR-induced depopulation rates, which agree well with the numerical data.

  5. The one-dimensional minesweeper game: What are your chances of winning?

    NASA Astrophysics Data System (ADS)

    Rodríguez-Achach, M.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Huerta-Quintanilla, R.; Canto-Lugo, E.

    2016-04-01

    Minesweeper is a famous computer game usually played on a two-dimensional lattice, where cells can be empty or mined and gamers are required to locate the mines without dying. Even though minesweeper seems to be a very simple system, it has some complex and interesting properties, such as NP-completeness. In this paper, for the one-dimensional case, given a lattice of n cells and m mines, we calculate the winning probability. This probability is also estimated by numerical simulations. We also find, by means of these simulations, that there exists a critical density of mines that minimizes the probability of winning the game. Analytical results and simulations are compared, showing very good agreement.

  6. Backscattering and absorption coefficients for electrons: Solutions of invariant embedding transport equations using a method of convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, C.; Brizuela, H.; Heluani, S. P.

    2014-05-21

    The backscattering coefficient is a magnitude whose measurement is fundamental for the characterization of materials with techniques that make use of particle beams, particularly when performing microanalysis. In this work, we report the results of an analytic method to calculate the backscattering and absorption coefficients of electrons under conditions similar to those of electron probe microanalysis. Starting from a five-level-states ladder model in 3D, we deduced a set of coupled integro-differential equations for the coefficients with a method known as invariant embedding. By means of a procedure proposed by the authors, called the method of convergence, two types of approximate solutions for the set of equations, namely complete and simple solutions, can be obtained. Although the simple solutions were initially proposed as auxiliary forms to solve higher-rank equations, they turned out to be also useful for the estimation of the aforementioned coefficients. In previous reports, we have presented results obtained with the complete solutions. In this paper, we present results obtained with the simple solutions of the coefficients, which exhibit a good degree of fit with the experimental data. Both the model and the calculation method presented here can be generalized to other techniques that make use of different sorts of particle beams.

  7. A Model-independent Photometric Redshift Estimator for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Wang, Yun

    2007-01-01

    The use of Type Ia supernovae (SNe Ia) as cosmological standard candles is fundamental in modern observational cosmology. In this Letter, we derive a simple empirical photometric redshift estimator for SNe Ia using a training set of SNe Ia with multiband (griz) light curves and spectroscopic redshifts obtained by the Supernova Legacy Survey (SNLS). This estimator is analytical and model-independent; it does not use spectral templates. We use all the available SNe Ia from SNLS with near-maximum photometry in griz (a total of 40 SNe Ia) to train and test our photometric redshift estimator. The difference between the estimated redshifts zphot and the spectroscopic redshifts zspec, (zphot-zspec)/(1+zspec), has rms dispersions of 0.031 for the 20 SNe Ia used in the training set and 0.050 for the 20 SNe Ia not used in the training set. The dispersion is of the same order of magnitude as the flux uncertainties at peak brightness for the SNe Ia. There are no outliers. This photometric redshift estimator should significantly enhance the ability of observers to accurately target high-redshift SNe Ia for spectroscopy in ongoing surveys. It will also dramatically boost the cosmological impact of very large future supernova surveys, such as those planned for the Advanced Liquid-mirror Probe for Astrophysics, Cosmology, and Asteroids (ALPACA) and the Large Synoptic Survey Telescope (LSST).

  8. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
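
    A toy version of the idea, with invented geometry: if the failure set F is known to lie inside a set B whose probability is available analytically (here F = {x + y > 5} under independent standard normal parameters, contained in B = {x > 2.5} ∪ {y > 2.5}), then P(F) = P(F|B)·P(B) and only the conditional factor needs sampling, done entirely within B.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
failure = lambda x, y: x + y > 5.0           # invented failure set F
c = 2.5                                      # F is contained in {x>c} U {y>c}

q = norm.sf(c)                               # P(x > c) for a standard normal
pB1 = q                                      # stratum B1 = {x > c}
pB2 = (1 - q) * q                            # stratum B2 = {x <= c, y > c}
pB = pB1 + pB2                               # analytic P(B), B1 and B2 disjoint

def sample_truncated(n, lower=None, upper=None):
    """Standard normal truncated to (lower, upper) via inverse-CDF sampling."""
    lo = 0.0 if lower is None else norm.cdf(lower)
    hi = 1.0 if upper is None else norm.cdf(upper)
    return norm.ppf(lo + rng.random(n) * (hi - lo))

N = 100_000
n1 = int(N * pB1 / pB); n2 = N - n1          # proportional allocation over strata
x1, y1 = sample_truncated(n1, lower=c), rng.standard_normal(n1)
x2, y2 = sample_truncated(n2, upper=c), sample_truncated(n2, lower=c)
hits = failure(x1, y1).sum() + failure(x2, y2).sum()

p_fail = (hits / N) * pB                     # P(F) = P(F|B) * P(B)
print(f"conditional estimate {p_fail:.2e}  exact {norm.sf(5 / np.sqrt(2)):.2e}")
```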

  9. Self-organized dynamics in local load-sharing fiber bundle models.

    PubMed

    Biswas, Soumyajyoti; Chakrabarti, Bikas K

    2013-10-01

    We study the dynamics of a local load-sharing fiber bundle model in two dimensions under an external load (which increases with time at a fixed slow rate) applied at a single point. Due to the local load-sharing nature, the redistributed load remains localized along the boundary of the broken patch. The system then reaches a self-organized state with a stationary average value of load per fiber along the (increasing) boundary of the broken patch (damaged region), and scale-free distributions of avalanche sizes and other related quantities are observed. In particular, when the load redistribution is only among the nearest surviving fiber(s), the numerical estimates of the exponent values are comparable with those of the Manna model. When the load redistribution is uniform along the patch boundary, the model shows a simple mean-field limit of this self-organizing critical behavior, for which we give analytical estimates of the saturation value of the load per fiber and of the avalanche size distribution exponent. These are in good agreement with numerical simulation results.

  10. Hardwall acoustical characteristics and measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel

    NASA Technical Reports Server (NTRS)

    Rentz, P. E.

    1976-01-01

    Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omni-directional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.

  11. How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.

    PubMed

    Hittner, James B; May, Kim

    2012-01-01

    The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
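
    The approximation itself is a one-liner, r ≈ Z/√N, with Z the standard normal deviate matching the reported one-tailed p-value. A minimal Monte Carlo check in the spirit of the study (sample size, effect size, and trial count invented) is:

```python
import numpy as np
from scipy.stats import pearsonr, norm

rng = np.random.default_rng(42)

def r_from_z(p_one_tailed, n_total):
    """Pearson r-from-Z approximation: r ~ Z / sqrt(N)."""
    return norm.isf(p_one_tailed) / np.sqrt(n_total)

N, rho, trials = 40, 0.3, 2000                 # invented simulation settings
errors = []
for _ in range(trials):
    x = rng.standard_normal(N)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(N)
    r, p_two = pearsonr(x, y)
    if r > 0:                                  # one-tailed p in the effect direction
        errors.append(r_from_z(p_two / 2.0, N) - r)

print(f"mean error {np.mean(errors):+.4f}, sd {np.std(errors):.4f}")
```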

  12. Recovering Galaxy Properties Using Gaussian Process SED Fitting

    NASA Astrophysics Data System (ADS)

    Iyer, Kartheik; Awan, Humna

    2018-01-01

    Information about physical quantities like the stellar mass, star formation rates, and ages for distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys like SDSS, CANDELS, LSST, etc. However, noise in the photometric observations often is a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of different methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in the cases where the distribution of the training data is not identical to the target data, which can be extremely useful when generalized to more subtle galaxy properties.
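
    A minimal sketch of this regression setup, using scikit-learn rather than the authors' code and synthetic numbers standing in for the semi-analytic-model photometry: a Gaussian process with an RBF kernel plus a white-noise term predicts a stand-in "log stellar mass" together with a per-object uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# synthetic stand-in data: 4-band "photometry" X and a "log stellar mass" y
n = 300
X = rng.normal(size=(n, 4))
y = 10.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3]
y_noisy = y + rng.normal(0, 0.1, n)            # noise on the training targets

kernel = 1.0 * RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X[:200], y_noisy[:200])                 # train on the first 200 objects

pred, sigma = gp.predict(X[200:], return_std=True)
print(f"rms error {np.sqrt(np.mean((pred - y[200:])**2)):.3f}")
print(f"typical predicted uncertainty {sigma.mean():.3f}")
```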

  13. The influence of strain rate and the effect of friction on the forging load in simple upsetting and closed die forging

    NASA Astrophysics Data System (ADS)

    Klemz, Francis B.

    Forging provides an elegant solution to the problem of producing complicated shapes from heated metal. This study attempts to relate some of the important parameters involved when considering simple upsetting, closed die forging and extrusion forging. A literature survey showed some of the empirical, graphical and statistical methods of load prediction, together with analytical methods of estimating load and energy. Investigations of the effects of high strain rate and temperature on the stress-strain properties of materials are also evident. In the present study special equipment, including an experimental drop hammer and various die-sets, has been designed and manufactured. Instrumentation to measure load/time and displacement/time behaviour of the deformed metal has been incorporated and calibrated. A high speed camera was used to record the behaviour mode of test pieces used in the simple upsetting tests. Dynamic and quasi-static material properties for the test materials, lead and aluminium alloy, were measured using the drop hammer and a compression-test machine. Analytically, two separate mathematical solutions have been developed: a numerical technique using a lumped-mass model for the analysis of simple upsetting and closed-die forging and, for extrusion forging, an analysis which equates the shear and compression energy requirements to the work done by the forging load. Cylindrical test pieces were used for all the experiments and both dry and lubricated test conditions were investigated. The static and dynamic tests provide data on load, energy and the profile of the deformed billet. In addition, for the extrusion forging, both single-ended and double-ended tests were conducted. Material dependency was also examined by a further series of tests on aluminium and copper. Comparison of the experimental and theoretical results was made, which shows clearly the effects of friction and high strain rate on the load and energy requirements and on the deformation mode of the billet. For the axisymmetric shapes considered, it was found that the load, energy requirement and profile could be predicted with reasonable accuracy.

  14. On the analytical form of the Earth's magnetic attraction expressed as a function of time

    NASA Technical Reports Server (NTRS)

    Carlheim-Gyllenskold, V.

    1983-01-01

    An attempt is made to express the Earth's magnetic attraction in simple analytical form using observations during the 16th to 19th centuries. Observations of the magnetic inclination in the 16th and 17th centuries are discussed.

  15. Single Particle-Inductively Coupled Plasma Mass Spectroscopy Analysis of Metallic Nanoparticles in Environmental Samples with Large Dissolved Analyte Fractions.

    PubMed

    Schwertfeger, D M; Velicogna, Jessica R; Jesmer, Alexander H; Scroggins, Richard P; Princz, Juliska I

    2016-10-18

    There is increasing interest in using single particle-inductively coupled plasma mass spectroscopy (SP-ICPMS) to help quantify exposure to engineered nanoparticles, and their transformation products, released into the environment. Hindering the use of this analytical technique for environmental samples is the presence of high levels of dissolved analyte, which impedes resolution of the particle signal from the dissolved one. While sample dilution is often necessary to achieve the low analyte concentrations required for SP-ICPMS analysis, and to reduce the occurrence of matrix effects on the analyte signal, it is used here also to reduce the dissolved signal relative to the particulate one, while maintaining a matrix chemistry that promotes particle stability. We propose a simple, systematic dilution series approach whereby the first dilution is used to quantify the dissolved analyte, the second is used to optimize the particle signal, and the third is used as an analytical quality control. Using simple suspensions of well-characterized Au and Ag nanoparticles spiked with the dissolved analyte form, as well as suspensions of complex environmental media (i.e., extracts from soils previously contaminated with engineered silver nanoparticles), we show how this dilution series technique improves resolution of the particle signal, which in turn improves the accuracy of particle counts, quantification of particulate mass and determination of particle size. The technique proposed here is meant to offer a systematic and reproducible approach to the SP-ICPMS analysis of environmental samples and to improve the quality and consistency of data generated from this relatively new analytical tool.

  16. Discussion and revision of the mathematical modeling tool described in the previously published article "Modeling HIV Transmission risk among Mozambicans prior to their initiating highly active antiretroviral therapy".

    PubMed

    Cassels, Susan; Pearson, Cynthia R; Kurth, Ann E; Martin, Diane P; Simoni, Jane M; Matediana, Eduardo; Gloyd, Stephen

    2009-07-01

    Mathematical models are increasingly used in social and behavioral studies of HIV transmission; however, model structures must be chosen carefully to best answer the question at hand and conclusions must be interpreted cautiously. In Pearson et al. (2007), we presented a simple analytically tractable deterministic model to estimate the number of secondary HIV infections stemming from a population of HIV-positive Mozambicans and to evaluate how the estimate would change under different treatment and behavioral scenarios. In a subsequent application of the model with a different data set, we observed that the model produced an unduly conservative estimate of the number of new HIV-1 infections. In this brief report, our first aim is to describe a revision of the model to correct for this underestimation. Specifically, we recommend adjusting the population-level sexually transmitted infection (STI) parameters to be applicable to the individual-level model specification by accounting for the proportion of individuals uninfected with an STI. In applying the revised model to the original data, we noted an estimated 40 infections/1000 HIV-positive persons per year (versus the original 23 infections/1000 HIV-positive persons per year). In addition, the revised model estimated that highly active antiretroviral therapy (HAART) along with syphilis and herpes simplex virus type 2 (HSV-2) treatments combined could reduce HIV-1 transmission by 72% (versus 86% according to the original model). The second aim of this report is to discuss the advantages and disadvantages of mathematical models in the field and the implications of model interpretation. We caution that simple models should be used for heuristic purposes only. Since these models do not account for heterogeneity in the population and significantly simplify HIV transmission dynamics, they should be used to describe general characteristics of the epidemic and demonstrate the importance or sensitivity of parameters in the model.
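
    The following toy sketch illustrates the kind of individual-level STI adjustment described above, applying the cofactor only to the fraction of partnerships in which an STI is actually present; it is not the authors' model, and every parameter value is purely illustrative.

```python
# Toy sketch (not the authors' model): expected secondary HIV infections per
# index case per year, with an STI cofactor applied only to the fraction of
# partnerships in which an STI is present. All numbers are illustrative.
def secondary_infections(n_partners, acts_per_partner, p_act,
                         sti_prevalence, sti_cofactor):
    # Per-act probability, mixing STI-present and STI-free partnerships
    # instead of applying a population-level cofactor to everyone.
    p_mix = (sti_prevalence * min(1.0, p_act * sti_cofactor)
             + (1.0 - sti_prevalence) * p_act)
    # Per-partnership transmission probability over all acts.
    p_partner = 1.0 - (1.0 - p_mix) ** acts_per_partner
    return n_partners * p_partner

baseline = secondary_infections(2, 60, 0.001, 0.30, 4.0)
treated = secondary_infections(2, 60, 0.001 * 0.1, 0.05, 4.0)  # HAART + STI treatment
print(f"{baseline:.3f} vs {treated:.3f} infections per index case per year")
```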

  17. Theoretical aspects of tidal and planetary wave propagation at thermospheric heights

    NASA Technical Reports Server (NTRS)

    Volland, H.; Mayr, H. G.

    1977-01-01

    A simple semiquantitative model is presented which allows analytic solutions of tidal and planetary wave propagation at thermospheric heights. This model is based on perturbation approximation and mode separation. The effects of viscosity and heat conduction are parameterized by Rayleigh friction and Newtonian cooling. Because of this simplicity, one gains a clear physical insight into basic features of atmospheric wave propagation. In particular, we discuss the meridional structures of pressure and horizontal wind (the solutions of Laplace's equation) and their modification due to dissipative effects at thermospheric heights. Furthermore, we solve the equations governing the height structure of the wave modes and arrive at a very simple asymptotic solution valid in the upper part of the thermosphere. That 'system transfer function' of the thermosphere allows one to estimate immediately the reaction of the thermospheric wave mode parameters such as pressure, temperature, and winds to an external heat source of arbitrary temporal and spatial distribution. Finally, the diffusion effects of the minor constituents due to the global wind circulation are discussed, and some results of numerical calculations are presented.

  18. Analytical detection and method development of anticancer drug Gemcitabine HCl using gold nanoparticles.

    PubMed

    Menon, Shobhana K; Mistry, Bhoomika R; Joshi, Kuldeep V; Sutariya, Pinkesh G; Patel, Ravindra V

    2012-08-01

    A simple, rapid, cost-effective and extractive UV spectrophotometric method was developed for the determination of Gemcitabine HCl (GMCT) in bulk drug and pharmaceutical formulation. It is based on UV spectrophotometric measurements in which the drug reacts with gold nanoparticles (AuNPs), changing their original colour and forming a dark blue coloured solution which exhibits an absorption maximum at 688 nm. The apparent molar absorptivity and Sandell's sensitivity coefficient were found to be 3.95×10⁻⁵ L mol⁻¹ cm⁻¹ and 0.060 μg cm⁻², respectively. Beer's law was obeyed in the concentration range of 2.0-40 μg mL⁻¹. The method was tested and validated for various parameters according to ICH guidelines. The proposed method was successfully applied for the determination of GMCT in a pharmaceutical (parenteral) formulation. The results demonstrated that the procedure is accurate, precise and reproducible (relative standard deviation <2%). As it is simple, cheap and less time-consuming, it can be suitably applied for the estimation of GMCT in dosage forms. Copyright © 2012 Elsevier B.V. All rights reserved.
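
    A minimal sketch of the calibration step underlying such a method: a Beer's-law straight-line fit of absorbance at 688 nm against concentration, inverted to estimate unknowns. The data points below are synthetic, not the published values.

```python
# Sketch of a Beer's-law calibration: linear fit of absorbance at 688 nm
# versus GMCT concentration, then inversion of the line (illustrative data).
import numpy as np

conc = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 40.0])       # μg/mL
absorb = np.array([0.05, 0.12, 0.24, 0.49, 0.73, 0.98])   # AU, synthetic

slope, intercept = np.polyfit(conc, absorb, 1)

def concentration(a):
    """Invert the calibration line to estimate concentration from absorbance."""
    return (a - intercept) / slope

print(f"A = {slope:.4f}*C + {intercept:.4f}; 0.5 AU -> {concentration(0.5):.1f} ug/mL")
```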

  19. Facile fabrication of CNT-based chemical sensor operating at room temperature

    NASA Astrophysics Data System (ADS)

    Sheng, Jiadong; Zeng, Xian; Zhu, Qi; Yang, Zhaohui; Zhang, Xiaohua

    2017-12-01

    This paper describes a simple, low-cost and effective route to fabricate CNT-based chemical sensors which operate at room temperature. First, the incorporation of silk fibroin in vertically aligned CNT arrays (CNTAs) obtained through a thermal chemical vapor deposition (CVD) method makes it feasible to remove the CNT arrays directly from their substrates without any rigorous acid or sonication treatment. Through a simple one-step in situ polymerization of anilines, the functionalization of the CNT arrays with polyaniline (PANI) significantly improves the sensing performance of CNT-based chemical sensors in detecting ammonia (NH3) and hydrogen chloride (HCl) vapors. Chemically modified CNT arrays also show responses to organic vapors such as menthol, ethyl acetate and acetone. Although the detection limits of chemically modified CNT-based chemical sensors are of the same order of magnitude as those reported in previous studies, these sensors offer simplicity, low cost and energy efficiency in the preparation and fabrication of devices. Additionally, a linear relationship between the relative sensitivity and the analyte concentration makes precise estimation of the concentrations of trace chemical vapors possible.

  20. Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2006-04-01

    We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nanoscale devices. Useful and accurate approximations are developed that provide both (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.

  1. Analytical evaluation of current starch methods used in the international sugar industry: Part I

    USDA-ARS?s Scientific Manuscript database

    Several analytical starch methods currently exist in the international sugar industry that are used to prevent or mitigate starch-related processing challenges as well as assess the quality of traded end-products. These methods use simple iodometric chemistry, mostly potato starch standards, and uti...

  2. 10 CFR 436.23 - Estimated simple payback time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...

  3. 10 CFR 436.23 - Estimated simple payback time.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...

  4. 10 CFR 436.23 - Estimated simple payback time.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...

  5. 10 CFR 436.23 - Estimated simple payback time.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...

  6. 10 CFR 436.23 - Estimated simple payback time.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
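
    The definition above reduces to a simple cumulative-sum calculation. A minimal sketch, with illustrative cash flows:

```python
# Sketch of the § 436.23 definition: count the years until cumulative
# (savings - non-fuel costs) reaches the initial investment (illustrative values).
def simple_payback_years(investment, annual_savings, annual_nonfuel_costs):
    cumulative, years = 0.0, 0
    while cumulative < investment:
        years += 1
        cumulative += annual_savings[years - 1] - annual_nonfuel_costs[years - 1]
        if years == len(annual_savings) and cumulative < investment:
            return None  # never pays back within the analysis horizon
    return years

savings = [1200.0] * 25   # $/yr energy or water cost savings
upkeep = [200.0] * 25     # $/yr future non-fuel, non-water costs
print(simple_payback_years(8000.0, savings, upkeep))  # -> 8
```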

  7. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static nonlinear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static nonlinearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static nonlinearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
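
    A minimal sketch of the LN cascade itself (not the parameter-free filters derived in the paper): a toy temporal filter followed by a static rectifying nonlinearity, with illustrative gain and threshold values.

```python
# Minimal LN-cascade sketch: convolve the input with a temporal filter, then
# pass the result through a static nonlinearity to obtain a firing rate.
import numpy as np

dt = 0.001                                   # time step, s
t = np.arange(0.0, 0.2, dt)
filt = (t / 0.01) * np.exp(-t / 0.01)        # toy alpha-function temporal filter
filt /= np.trapz(filt, t)                    # normalize the filter to unit area

def ln_rate(stimulus, gain=40.0, threshold=0.5):
    """LN cascade: linear temporal filtering, then a static rectification."""
    drive = np.convolve(stimulus, filt, mode="full")[: len(stimulus)] * dt
    return gain * np.maximum(drive - threshold, 0.0)    # firing rate, Hz

stim = np.random.default_rng(1).normal(1.0, 0.3, size=5_000)  # input (a.u.)
print(ln_rate(stim).mean())                  # ~20 Hz for these toy parameters
```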

  8. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static nonlinear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static nonlinearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static nonlinearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.

  9. Concept design theory and model for multi-use space facilities: Analysis of key system design parameters through variance of mission requirements

    NASA Astrophysics Data System (ADS)

    Reynerson, Charles Martin

    This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue-producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates the potential return on investment, the initial investment required, and the number of years to recoup the initial investment. Example cases are analyzed for both performance- and cost-driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability of multiple space business park markets.

  10. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    PubMed

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
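
    The following sketch fits a toy version of such a mixture by maximum likelihood rather than the authors' Bayesian approach: significant p values are modeled as a mixture of a uniform H₀ component and, purely for illustration, a truncated-exponential H₁ component.

```python
# Toy two-component mixture for significant p values (not the authors' Bayesian
# model): H0 gives p ~ Uniform(0, .05]; H1 is modeled, for illustration only,
# as an exponential truncated to (0, .05]. Fit (w, lam) by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, p):
    w, lam = theta
    f0 = 20.0                                               # Uniform(0, .05] density
    f1 = lam * np.exp(-lam * p) / (1.0 - np.exp(-0.05 * lam))
    return -np.sum(np.log(w * f0 + (1.0 - w) * f1))

rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(0, 0.05, 40),               # 40 results from H0
                    rng.exponential(1 / 150.0, 200)])       # 200 from a sharp H1
p = p[p <= 0.05]                                            # keep significant values

res = minimize(neg_log_lik, x0=[0.5, 100.0], args=(p,),
               bounds=[(1e-3, 1 - 1e-3), (1e-3, 1e4)])
w_hat, lam_hat = res.x
# Probability that each individual p value stems from H0 (posterior weight).
f1_hat = lam_hat * np.exp(-lam_hat * p) / (1 - np.exp(-0.05 * lam_hat))
post_h0 = w_hat * 20.0 / (w_hat * 20.0 + (1 - w_hat) * f1_hat)
print(f"estimated proportion of significant results from H0: {w_hat:.2f}")
```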

  11. A green method for the quantification of plastics-derived endocrine disruptors in beverages by chemometrics-assisted liquid chromatography with simultaneous diode array and fluorescent detection.

    PubMed

    Vidal, Rocío B Pellegrino; Ibañez, Gabriela A; Escandar, Graciela M

    2016-10-01

    The aim of this study was to develop a novel analytical method for the determination of bisphenol A, nonylphenol, octylphenol, diethyl phthalate, dibutyl phthalate and diethylhexyl phthalate, compounds known for their endocrine-disruptor properties, based on liquid chromatography with simultaneous diode array and fluorescent detection. Following the principles of green analytical chemistry, solvent consumption and chromatographic run time were minimized. To deal with the resulting incomplete resolution in the chromatograms, a second-order calibration was proposed. Second-order data (elution time-absorbance wavelength and elution time-fluorescence emission wavelength matrices) were obtained and processed by multivariate curve resolution-alternating least-squares (MCR-ALS). Applying MCR-ALS allowed quantification of the analytes even in the presence of partially overlapping chromatographic and spectral bands among these compounds and potential interferents. The results obtained from the analysis of beer, wine, soda, juice, water and distilled beverage samples were compared with those from gas chromatography-mass spectrometry (GC-MS). Limits of detection (LODs) in the range 0.04-0.38 ng mL⁻¹ were estimated in real samples after a very simple solid-phase extraction. All the samples were found to contain at least three of these endocrine disruptors, in concentrations as high as 334 ng mL⁻¹. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.

    PubMed

    Feo, M L; Eljarrat, E; Barceló, D

    2010-04-09

    A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium. Chloroform was used as the extraction solvent in the UAEE technique. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L⁻¹ with RSD values ≤3-25% (n=5). The coefficients of determination of the calibration curves obtained following the proposed methodology were ≥0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied to the analysis of river water samples. Cypermethrin was detected at concentration levels ranging from 4.94 to 30.5 ng L⁻¹. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Determination of Glyphosate, Maleic Hydrazide, Fosetyl Aluminum, and Ethephon in Grapes by Liquid Chromatography/Tandem Mass Spectrometry.

    PubMed

    Chamkasem, Narong

    2017-08-30

    A simple high-throughput liquid chromatography/tandem mass spectrometry (LC-MS-MS) method was developed for the determination of maleic hydrazide, glyphosate, fosetyl aluminum, and ethephon in grapes using a reversed-phase column with weak anion-exchange and cation-exchange mixed mode. A 5 g test portion was shaken with 50 mM HOAc and 10 mM Na₂EDTA in 1/3 (v/v) MeOH/H₂O for 10 min. After centrifugation, the extract was passed through an Oasis HLB cartridge to retain suspended particulates and nonpolar interferences. The final solution was injected and directly analyzed in 17 min by LC-MS-MS. Two MS-MS transitions were monitored in the method for each target compound to achieve true positive identification. Four isotopically labeled internal standards corresponding to each analyte were used to correct for matrix suppression effects and/or instrument signal drift. The linearity of the detector response was demonstrated in the range from 10 to 1000 ng/mL for each analyte with a coefficient of determination (R²) of ≥0.995. The average recovery for all analytes at 100, 500, and 2000 ng/g (n=5) ranged from 87 to 111%, with a relative standard deviation of less than 17%. The estimated LOQs for maleic hydrazide, glyphosate, fosetyl-Al, and ethephon were 38, 19, 29, and 34 ng/g, respectively.

  14. An Analysis Model for Water Cone Subsidence in Bottom Water Drive Reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Jianjun; Xu, Hui; Wu, Shucheng; Yang, Chao; Kong, Lingxiao; Zeng, Baoquan; Xu, Haixia; Qu, Tailai

    2017-12-01

    Water coning in bottom water drive reservoirs, which results in earlier water breakthrough, a rapid increase in water cut and a low recovery level, has drawn tremendous attention in the petroleum engineering field. As a simple and effective method to inhibit bottom water coning, shut-in coning control is usually preferred in the oilfield to control the water cone and thereby enhance economic performance. However, most water coning research has investigated the behavior of the cone as it grows; reported studies of water cone subsidence are very scarce. The goal of this work is to present an analytical model for analyzing the subsidence of the water cone when the well is shut in. Based on the Dupuit critical oil production rate formula, an analytical model is developed to estimate the initial water cone shape at the point of critical drawdown. Then, with the initial water cone shape equation, we propose an analysis model for water cone subsidence in bottom water drive reservoirs. Model analysis and several sensitivity studies are conducted. This work presents an accurate and fast analytical model of water cone subsidence in bottom water drive reservoirs. Given the recent interest in the development of bottom water drive reservoirs, our approach provides a promising technique for better understanding the subsidence of the water cone.

  15. Robotic voltammetry with carbon nanotube-based sensors: a superb blend for convenient high-quality antimicrobial trace analysis

    PubMed Central

    Theanponkrang, Somjai; Suginta, Wipa; Weingart, Helge; Winterhalter, Mathias; Schulte, Albert

    2015-01-01

    A new automated pharmacoanalytical technique for convenient quantification of redox-active antibiotics has been established by combining the benefits of a carbon nanotube (CNT) sensor modification with electrocatalytic activity for analyte detection with the merits of a robotic electrochemical device that is capable of sequential nonmanual sample measurements in 24-well microtiter plates. Norfloxacin (NFX) and ciprofloxacin (CFX), two standard fluoroquinolone antibiotics, were used in automated calibration measurements by differential pulse voltammetry (DPV), and linear ranges of 1-10 μM and 2-100 μM were accomplished for NFX and CFX, respectively. The lowest detectable levels were estimated to be 0.3±0.1 μM (n=7) for NFX and 1.6±0.1 μM (n=7) for CFX. In standard solutions or tablet samples of known content, both analytes could be quantified with the robotic DPV microtiter plate assay, with recoveries within ±4% of 100%. Recoveries were equally good when NFX was evaluated in human serum samples with added NFX. The use of simple instrumentation, convenience in execution, and high effectiveness in analyte quantitation suggest the combination of automated microtiter plate voltammetry and CNT-supported electrochemical drug detection as a novel methodology for antibiotic testing in pharmaceutical and clinical research and quality control laboratories. PMID:25670899

  16. Is the Jeffreys' scale a reliable tool for Bayesian model comparison in cosmology?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesseris, Savvas; García-Bellido, Juan

    2013-08-01

    We are entering an era where progress in cosmology is driven by data, and alternative models will have to be compared and ruled out according to some consistent criterion. The most conservative and widely used approach is Bayesian model comparison. In this paper we explicitly calculate the Bayes factors for all models that are linear with respect to their parameters. We do this in order to test the so-called Jeffreys' scale and determine analytically how accurate its predictions are in a simple case where we fully understand and can calculate everything analytically. We also discuss the case of nested models, e.g. one with M₁ parameters and another with M₂ ⊃ M₁ parameters, and we derive analytic expressions for both the Bayes factor and the Figure of Merit, defined as the inverse area of the model parameters' confidence contours. With all this machinery and the use of an explicit example, we demonstrate that the threshold nature of Jeffreys' scale is not a 'one size fits all' reliable tool for model comparison and that it may lead to biased conclusions. Furthermore, we discuss the importance of choosing the right basis in the context of models that are linear with respect to their parameters and how that basis affects the parameter estimation and the derived constraints.
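
    As a rough companion to the discussion above, the following sketch approximates the Bayes factor of two nested linear models via BIC, not the paper's exact analytic calculation; the weak quadratic signal and all values are illustrative.

```python
# Rough sketch (not the paper's analytic result): BIC-based approximation of
# the Bayes factor for nested linear models, to be read against Jeffreys' scale.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 0.1 * x**2 + rng.normal(0, 0.2, n)     # data with a weak quadratic term

def bic(design, y):
    """Gaussian-likelihood BIC of a least-squares fit to the given design matrix."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    return n * np.log(rss / n) + design.shape[1] * np.log(n)

X1 = np.column_stack([np.ones(n), x])            # M1: constant + linear
X2 = np.column_stack([np.ones(n), x, x**2])      # M2 ⊃ M1: adds quadratic term
ln_bf_21 = (bic(X1, y) - bic(X2, y)) / 2.0       # approximate ln BF of M2 over M1
print(f"ln BF(M2/M1) ~ {ln_bf_21:.2f}")          # compare with Jeffreys' thresholds
```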

  17. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD [The Energy-Energy Correlation at Next-to-Leading Order in QCD, Analytically]

    DOE PAGES

    Dixon, Lance J.; Luo, Ming-xing; Shtabovenko, Vladyslav; ...

    2018-03-09

    Here, the energy-energy correlation (EEC) between two detectors in e⁺e⁻ annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.

  18. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD [The Energy-Energy Correlation at Next-to-Leading Order in QCD, Analytically]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Lance J.; Luo, Ming-xing; Shtabovenko, Vladyslav

    Here, the energy-energy correlation (EEC) between two detectors in e⁺e⁻ annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.

  19. Retention prediction and separation optimization under multilinear gradient elution in liquid chromatography with Microsoft Excel macros.

    PubMed

    Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P

    2015-05-22

    A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available, (b) the retention times estimated using the Nikitas-Pappa approach, (c) the stepwise approximation, and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms: the fitting platform and the optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that, using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
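
    A minimal sketch of approach (d), assuming the linear solvent strength model ln k = ln k_w − Sφ and a linear gradient; the integral of 1/k(φ(t)) is accumulated with the trapezoid rule until it reaches the dead time t₀. All parameter values are illustrative.

```python
# Sketch of method (d): integrate the fundamental equation of gradient elution,
# ∫ dt / k(φ(t)) = t0, with the trapezoid rule, using the linear solvent
# strength (LSS) retention model ln k = ln kw - S*φ. Values are illustrative.
import numpy as np

t0 = 1.5                  # column dead time, min
kw, S = 200.0, 10.0       # LSS parameters of the analyte

def phi(t):
    """Linear gradient: 5% to 95% organic modifier over 20 min, then hold."""
    return np.minimum(0.05 + (0.90 / 20.0) * t, 0.95)

def retention_time(dt=0.001, t_max=60.0):
    t = np.arange(0.0, t_max, dt)
    integrand = 1.0 / (kw * np.exp(-S * phi(t)))     # 1 / k(φ(t))
    cumulative = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0 * dt)))
    idx = np.searchsorted(cumulative, t0)            # integral first reaches t0
    return t[idx] + t0                               # add back the dead time

print(f"predicted retention time: {retention_time():.2f} min")
```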

  20. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentration of an analyte within complex biological matrices (e.g., serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification against an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
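
    A minimal numerical sketch of the idea: scan candidate endogenous concentrations c₀ and pick the one that makes log(signal) versus log(c₀ + spike) most linear. A synthetic power-law response stands in for the near-linear portion of the immunoassay sigmoid; the four wells and all values are illustrative.

```python
# Sketch of the numerical step described above: choose the endogenous
# concentration c0 that makes the log-log spike-response plot most linear.
import numpy as np
from scipy.optimize import minimize_scalar

spikes = np.array([0.0, 5.0, 10.0, 20.0])        # added analyte, ng/mL (4 wells)
c0_true = 8.0                                    # unknown endogenous level
signal = 0.8 * (c0_true + spikes) ** 0.7         # synthetic near-linear response

def nonlinearity(c0):
    """Residual sum of squares of a straight-line fit on the log-log plot."""
    x = np.log(c0 + spikes)
    y = np.log(signal)
    coef = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coef, x)) ** 2)

best = minimize_scalar(nonlinearity, bounds=(0.1, 100.0), method="bounded")
print(f"estimated endogenous concentration: {best.x:.1f} ng/mL")  # ~8.0
```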

  1. Methodology for estimating human perception to tremors in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.

  2. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on the steady, quasi-one-dimensional governing equations, including viscous and heat transfer terms (estimated by the Reynolds analogy). For preliminary verification, REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary-layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from the more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These methods may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  3. The two-dimensional Monte Carlo: a new methodologic paradigm for dose reconstruction for epidemiological studies.

    PubMed

    Simon, Steven L; Hoffman, F Owen; Hofer, Eduard

    2015-01-01

    Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace, and there is now an increased understanding of what is required of models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude, and interrelationships of the uncertainties of the model assumptions, parameters, and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error) and uncertainty arising from a lack of knowledge about parameter values that are shared, to varying degrees, by subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, owing to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than the single set that emerges when each individual's dose is estimated independently of other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships, such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, is the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets and uncertainties that are individual-specific and therefore unshared.
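
    A minimal sketch of the 2DMC bookkeeping, under the simplifying assumption of one multiplicative shared error and one multiplicative unshared error (real dose models are far more detailed):

```python
# Minimal 2DMC sketch: shared (cohort-level) parameters are drawn once per
# realization; unshared (subject-level) errors are drawn per subject. Each
# realization is an internally consistent, possibly true cohort dose vector.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_realizations = 1_000, 500
intake = rng.lognormal(mean=1.0, sigma=0.5, size=n_subjects)  # subject intake data

dose_vectors = np.empty((n_realizations, n_subjects))
for k in range(n_realizations):
    # Shared error: one dose-conversion factor per realization, common to all
    # subjects, so its uncertainty shifts the whole cohort vector together.
    conversion = rng.lognormal(mean=0.0, sigma=0.3)
    # Unshared error: independent subject-specific measurement uncertainty.
    unshared = rng.lognormal(mean=0.0, sigma=0.2, size=n_subjects)
    dose_vectors[k] = intake * conversion * unshared

# Spread across rows reflects shared uncertainty; spread within a row
# reflects inter-individual variability of the possibly true doses.
print(dose_vectors.mean(), dose_vectors.std(axis=0).mean())
```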

  4. A simple formula for the effective complex conductivity of periodic fibrous composites with interfacial impedance and applications to biological tissues

    NASA Astrophysics Data System (ADS)

    Bisegna, Paolo; Caselli, Federica

    2008-06-01

    This paper presents a simple analytical expression for the effective complex conductivity of a periodic hexagonal arrangement of conductive circular cylinders embedded in a conductive matrix, with interfaces exhibiting a capacitive impedance. This composite material may be regarded as an idealized model of a biological tissue comprising tubular cells, such as skeletal muscle. The asymptotic homogenization method is adopted, and the corresponding local problem is solved by resorting to Weierstrass elliptic functions. The effectiveness of the present analytical result is proved by convergence analysis and comparison with finite-element solutions and existing models.

  5. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  6. Bias and precision of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1984

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1987-01-01

    The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double-blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate the analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride, and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measurements performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)

  7. A simple analytical thermo-mechanical model for liquid crystal elastomer bilayer structures

    NASA Astrophysics Data System (ADS)

    Cui, Yun; Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Li, Yuhang; Xing, Yufeng; Yu, Cunjiang; Song, Jizhou

    2018-02-01

    The bilayer structure, consisting of a thermally responsive liquid crystal elastomer (LCE) and another polymer material with stretchable heaters, has attracted much attention in applications of soft actuators and soft robots due to its ability to generate large deformations when subjected to heat stimuli. A simple analytical thermo-mechanical model, accounting for the non-uniform temperature/strain distribution along the thickness direction, is established for this type of bilayer structure. The analytical predictions of the temperature and bending curvature radius agree well with finite element analysis and experiments. The influences of the LCE thickness and the heat generation power on the bending deformation of the bilayer structure are fully investigated. It is shown that a thinner LCE layer and a higher heat generation power yield more bending deformation. These results may aid the design of soft actuators and soft robots involving thermally responsive LCEs.

  8. Statistical Properties of Real-Time Amplitude Estimate of Harmonics Affected by Frequency Instability

    NASA Astrophysics Data System (ADS)

    Bellan, Diego; Pignari, Sergio A.

    2016-07-01

    This work deals with the statistical characterization of real-time digital measurement of the amplitude of harmonics affected by frequency instability. In modern power systems, both the presence of harmonics and frequency instability are well-known and widespread phenomena, due mainly to nonlinear loads and distributed generation, respectively. As a result, real-time monitoring of voltage/current frequency spectra is of paramount importance as far as power quality issues are concerned. Within this framework, a key point is that in many cases real-time continuous monitoring prevents the application of sophisticated algorithms to extract all the information from the digitized waveforms because of the required computational burden. In those cases only simple evaluations, such as a peak search of the discrete Fourier transform, are implemented. It is well known, however, that a slight change in waveform frequency results in a loss of sampling synchronism and uncertainty in the amplitude estimate. The impact of this phenomenon increases with the order of the harmonic to be measured. In this paper an approximate analytical approach is proposed in order to describe the statistical properties of the measured magnitude of harmonics affected by frequency instability. By providing a simplified description of the frequency behavior of the windows used against spectral leakage, analytical expressions for the mean value, variance, cumulative distribution function, and probability density function of the measured harmonic magnitude are derived in closed form as functions of the waveform frequency, treated as a random variable.
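
    A small Monte Carlo sketch of the underlying effect: when the harmonic frequency drifts off an exact DFT bin, the windowed peak-search amplitude estimate becomes a random variable whose mean and spread can be examined directly. The window choice and all values are illustrative.

```python
# Sketch: a drifting waveform frequency de-synchronizes the sampling window,
# so the DFT peak-search magnitude of a harmonic becomes a random variable.
import numpy as np

fs, N = 10_000.0, 2_000          # sampling rate (Hz) and record length
h, f0 = 5, 50.0                  # 5th harmonic of a nominal 50 Hz system
rng = np.random.default_rng(4)
window = np.hanning(N)           # Hann window against spectral leakage
t = np.arange(N) / fs

estimates = []
for _ in range(5_000):
    f = f0 + rng.uniform(-0.05, 0.05)       # random frequency instability
    x = np.sin(2 * np.pi * h * f * t)       # unit-amplitude harmonic
    X = np.fft.rfft(x * window)
    # Simple real-time evaluation: DFT peak search, scaled by the window's
    # coherent gain (sum of window samples divided by 2).
    estimates.append(np.abs(X).max() / (window.sum() / 2))

est = np.array(estimates)
print(f"mean = {est.mean():.4f}, std = {est.std():.4f}")  # vs true amplitude 1.0
```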

  9. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches, ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters, are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
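
    For reference, the traditional analytical baseline named above is the Theis solution, which is easy to evaluate with SciPy's exponential integral; the aquifer parameters below are illustrative, not the Chalk aquifer values.

```python
# Sketch of the Theis (1935) drawdown solution: s = Q/(4*pi*T) * W(u), with
# u = r^2*S/(4*T*t) and the well function W(u) = E1(u). Values illustrative.
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q=0.02, T=5e-4, S=1e-4):
    """Drawdown (m) at radius r (m) and time t (s); Q m^3/s, T m^2/s, S (-)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t = np.logspace(2, 6, 5)                 # 100 s to roughly 11.6 days
print(theis_drawdown(r=50.0, t=t))       # drawdown growing with time
```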

  10. QbD-Based Development and Validation of a Stability-Indicating HPLC Method for Estimating Ketoprofen in Bulk Drug and Proniosomal Vesicular System.

    PubMed

    Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju

    2016-03-01

    The current studies entail the systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for the estimation of ketoprofen. The analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished with isocratic, reversed-phase chromatography using a C-18 column, phosphate buffer (pH 6.8)-methanol (50:50 v/v) as the mobile phase at a flow rate of 1.0 mL/min, and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated as per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, with linearity ranging between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated with a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions and developing a highly sensitive liquid chromatographic method for ketoprofen. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Imaging through atmospheric turbulence for laser based C-RAM systems: an analytical approach

    NASA Astrophysics Data System (ADS)

    Buske, Ivo; Riede, Wolfgang; Zoz, Jürgen

    2013-10-01

    High-energy laser (HEL) weapons have unique attributes which distinguish them from the limitations of kinetic energy weapons. The HEL weapon engagement process typically starts with identifying the target and selecting the aim point on the target through a high-magnification telescope. One scenario for such a HEL system is the countermeasure against rocket, artillery or mortar (RAM) objects to protect ships, camps or other infrastructure from terrorist attacks. For target identification, and especially to resolve the aim point, it is essential to ensure high-resolution imaging of RAM objects. Throughout the ballistic flight phase, knowledge of the achievable imaging quality is important for estimating and evaluating the performance of the countermeasure system. Image quality is mainly limited by unavoidable atmospheric turbulence. Analytical calculations have been undertaken to analyze and evaluate image quality parameters during the approach of a RAM object. Kolmogorov turbulence theory was used to determine the atmospheric coherence length and the isoplanatic angle. The image acquisition distinguishes between long and short exposure times to characterize tip/tilt image shift and the impact of higher-order turbulence fluctuations. Two different observer positions are considered to show the influence of the selected sensor site, and two different turbulence strengths are investigated to show the effect of climate or weather conditions. It is well known that atmospheric turbulence degrades image sharpness and creates blurred images. Investigations are carried out to estimate the effectiveness of simple tip/tilt systems or low-order adaptive optics for laser-based C-RAM systems.
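
    A minimal sketch of the Kolmogorov-theory quantities mentioned above, computing Fried's coherence length r₀ and the isoplanatic angle θ₀ by integrating an assumed Cn² profile over a slant path; the profile shape and all values are illustrative.

```python
# Sketch: Fried parameter r0 and isoplanatic angle theta0 from a Cn^2 profile,
# using the standard Kolmogorov-theory integrals (illustrative profile).
import numpy as np

lam = 1.064e-6                           # wavelength, m
k = 2.0 * np.pi / lam
h = np.linspace(1.0, 20_000.0, 4_000)    # height above ground, m

def cn2(h):
    """Illustrative Cn^2 profile: strong near the ground, decaying with height."""
    return 1.7e-14 * np.exp(-h / 100.0) + 2.7e-17 * np.exp(-h / 1500.0)

zenith = np.deg2rad(60.0)                # slant path, 60 deg from zenith
sec_z = 1.0 / np.cos(zenith)

r0 = (0.423 * k**2 * sec_z * np.trapz(cn2(h), h)) ** (-3.0 / 5.0)
theta0 = (2.914 * k**2 * sec_z ** (8.0 / 3.0)
          * np.trapz(cn2(h) * h ** (5.0 / 3.0), h)) ** (-3.0 / 5.0)

print(f"r0 = {r0 * 100:.1f} cm, theta0 = {np.rad2deg(theta0) * 3600:.1f} arcsec")
```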

  12. Analytical Model for Mean Flow and Fluxes of Momentum and Energy in Very Large Wind Farms

    NASA Astrophysics Data System (ADS)

    Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando

    2018-01-01

    As wind-turbine arrays continue to be installed and array sizes continue to grow, there is an increasing need to represent very large wind-turbine arrays in numerical weather prediction models, for wind-farm optimization, and for environmental assessment. We propose a simple analytical model for boundary-layer flow in fully developed wind-turbine arrays, based on the concept of sparsely obstructed shear flows. In describing the vertical distribution of the mean wind speed and shear stress within wind farms, our model estimates the mean kinetic energy harvested from the atmospheric boundary layer and determines the partitioning between the wind power captured by the wind turbines and that absorbed by the underlying land or water. A length scale based on the turbine geometry, spacing, and performance characteristics is able to estimate the asymptotic limit for the fully developed flow through wind-turbine arrays, and thereby determine whether the wind-farm flow is fully developed for very large turbine arrays. Our model is validated using data collected in controlled wind-tunnel experiments, and its usefulness for the prediction of wind-farm performance and the optimization of turbine-array spacing is described. Our model may also be useful for assessing the extent to which the extraction of wind power affects the land-atmosphere coupling or air-water exchange of momentum, with implications for the transport of heat, moisture, trace gases such as carbon dioxide, methane, and nitrous oxide, and ecologically important oxygen.

  13. The Lyapunov dimension and its estimation via the Leonov method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. V.

    2016-06-01

    Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds on the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds on the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
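
    For the Kaplan-Yorke side of the story, the Lyapunov dimension of an attractor can be computed directly from an ordered Lyapunov spectrum; a minimal sketch, using the classic Lorenz-63 exponents as an example:

```python
# Sketch of the Kaplan-Yorke (Lyapunov) dimension: find the largest j whose
# partial sum of ordered exponents is non-negative, then interpolate.
import numpy as np

def kaplan_yorke_dimension(exponents):
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]  # descending order
    sums = np.cumsum(lam)
    j = np.max(np.where(sums >= 0.0)[0]) + 1 if sums[0] >= 0.0 else 0
    if j == 0:
        return 0.0                 # all exponents negative: dimension zero
    if j == len(lam):
        return float(len(lam))     # sums never go negative: full dimension
    return j + sums[j - 1] / abs(lam[j])

print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))  # ~2.062 for Lorenz-63
```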

  14. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    PubMed

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
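
    The record's single-model EMC framework is not reproduced here; as background, a minimal sketch of the classical product-of-coefficients mediation estimate with its delta-method (Sobel) variance, the routinely used approximation the abstract revisits, on synthetic data:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)                       # exposure
        m = 0.5 * x + rng.normal(size=n)             # mediator
        y = 0.3 * m + 0.2 * x + rng.normal(size=n)   # outcome

        def ols(X, y):
            """Coefficients and covariance for y ~ X (intercept added)."""
            X = np.column_stack([np.ones(len(X)), X])
            beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = resid @ resid / (len(y) - X.shape[1])
            return beta, sigma2 * np.linalg.inv(X.T @ X)

        bm, cm = ols(x.reshape(-1, 1), m)            # m ~ x: a = bm[1]
        by, cy = ols(np.column_stack([x, m]), y)     # y ~ x + m: b = by[2]
        a, b = bm[1], by[2]
        var_ab = a**2 * cy[2, 2] + b**2 * cm[1, 1]   # delta-method (Sobel) variance of a*b
        print("indirect effect a*b =", a * b, "SE =", np.sqrt(var_ab))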

  15. Using Presentation Software to Flip an Undergraduate Analytical Chemistry Course

    ERIC Educational Resources Information Center

    Fitzgerald, Neil; Li, Luisa

    2015-01-01

    An undergraduate analytical chemistry course has been adapted to a flipped course format. Course content was provided by video clips, text, graphics, audio, and simple animations organized as concept maps using the cloud-based presentation platform, Prezi. The advantages of using Prezi to present course content in a flipped course format are…

  16. Data Acquisition Programming (LabVIEW): An Aid to Teaching Instrumental Analytical Chemistry.

    ERIC Educational Resources Information Center

    Gostowski, Rudy

    A course was developed at Austin Peay State University (Tennessee) which offered an opportunity for hands-on experience with the essential components of modern analytical instruments. The course aimed to provide college students with the skills necessary to construct a simple model instrument, including the design and fabrication of electronic…

  17. Operational Environmental Assessment

    DTIC Science & Technology

    1988-09-01

    Chemistry Branch - Physical Chemistry Branch - Analytical Research Division - Analytical Systems Branch - Methodology Research Branch - Spectroscopy Branch...electromagnetic frequency spectrum and includes radio frequencies, infrared, visible light, ultraviolet, X-rays and gamma rays (in ascending order of...Verruculogen Aflatrem Picrotoxin Ciguatoxin Mycotoxins Simple Trichothecenes T-2 Toxin T-2 Tetraol Neosolaniol Nivalenol Deoxynivalenol Verrucarol

  18. Numerical Simulation of the Perrin-Like Experiments

    ERIC Educational Resources Information Center

    Mazur, Zygmunt; Grech, Dariusz

    2008-01-01

    A simple model of the random Brownian walk of a spherical mesoscopic particle in viscous liquids is proposed. The model can be solved analytically and simulated numerically. The analytic solution gives the known Einstein-Smoluchowski diffusion law r² = 2Dt, where the diffusion constant D is expressed by the mass and geometry of a…

  19. Quantitative Ultrasound-Assisted Extraction for Trace-Metal Determination: An Experiment for Analytical Chemistry

    ERIC Educational Resources Information Center

    Lavilla, Isela; Costas, Marta; Pena-Pereira, Francisco; Gil, Sandra; Bendicho, Carlos

    2011-01-01

    Ultrasound-assisted extraction (UAE) is introduced to upper-level analytical chemistry students as a simple strategy focused on sample preparation for trace-metal determination in biological tissues. Nickel extraction in seafood samples and quantification by electrothermal atomic absorption spectrometry (ETAAS) are carried out by a team of four…

  20. Boundary condition determined wave functions for the ground states of one- and two-electron homonuclear molecules

    NASA Astrophysics Data System (ADS)

    Patil, S. H.; Tang, K. T.; Toennies, J. P.

    1999-10-01

    Simple analytical wave functions satisfying appropriate boundary conditions are constructed for the ground states of one- and two-electron homonuclear molecules. Both the asymptotic condition when one electron is far away and the cusp condition when the electron coalesces with a nucleus are satisfied by the proposed wave function. For H2+, the resulting wave function is almost identical to the Guillemin-Zener wave function, which is known to give very good energies. For the two-electron systems H2 and He2++, the additional electron-electron cusp condition is rigorously accounted for by a simple analytic correlation function which has the correct behavior not only for r12→0 and r12→∞ but also for R→0 and R→∞, where r12 is the interelectronic distance and R the internuclear distance. Energies obtained from these simple wave functions agree to within 2×10^-3 a.u. with the results of the most sophisticated variational calculations for all R and for all systems studied. This demonstrates that rather simple physical considerations can be used to derive very accurate wave functions for simple molecules, thereby avoiding laborious numerical variational calculations.

  1. Long Term Evolution of Planetary Systems with a Terrestrial Planet and a Giant Planet

    NASA Technical Reports Server (NTRS)

    Georgakarakos, Nikolaos; Dobbs-Dixon, Ian; Way, Michael J.

    2016-01-01

    We study the long-term orbital evolution of a terrestrial planet under the gravitational perturbations of a giant planet. In particular, we are interested in situations where the two planets are in the same plane and relatively close. We examine both possible configurations: the giant planet's orbit lying either outside or inside the orbit of the smaller planet. The perturbing potential is expanded to high orders and an analytical solution for the terrestrial planet's orbit is derived. The analytical estimates are then compared against results from numerical integration of the full equations of motion, and we find that the analytical solution works reasonably well. An interesting finding is that the new analytical estimates greatly improve the predictions for the timescales of the orbital evolution of the terrestrial planet compared to an octupole-order expansion. Finally, we briefly discuss possible applications of the analytical estimates in astrophysical problems.

  2. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    PubMed

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance depends on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a single value laboratorians can use to evaluate a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical performance specifications were defined for the assays following the rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point, and the Sigma-metric was then estimated for each assay and plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa − |%bias|)/%CV. The Sigma-metrics and normalized method decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma, and none performed below three Sigma. Sigma-metrics plotted on normalized method decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had Sigma values >5, so laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
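
    A direct transcription of the Sigma-metric equation quoted above; the %TEa, %bias and %CV inputs are illustrative, not Alinity assay data:

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            """Sigma-metric = (%TEa - |%bias|) / %CV, as defined in the abstract."""
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Example: a 10% total allowable error, 1.5% bias and 1.4% CV give ~6.1 Sigma.
        print(sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=1.4))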

  3. All-organic microelectromechanical systems integrating specific molecular recognition--a new generation of chemical sensors.

    PubMed

    Ayela, Cédric; Dubourg, Georges; Pellet, Claude; Haupt, Karsten

    2014-09-03

    Cantilever-type all-organic microelectromechanical systems based on molecularly imprinted polymers for specific analyte recognition are used as chemical sensors. They are produced by a simple spray-coating-shadow-masking process. Analyte binding to the cantilever generates a measurable change in its resonance frequency. This allows label-free detection by direct mass sensing of low-molecular-weight analytes at nanomolar concentrations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A portable gas sensor based on cataluminescence.

    PubMed

    Kang, C; Tang, F; Liu, Y; Wu, Y; Wang, X

    2013-01-01

    We describe a portable gas sensor based on cataluminescence. Miniaturization of the gas sensor was achieved by using a miniature photomultiplier tube, a miniature gas pump and a simple light seal. The signal-to-noise ratio (SNR) was used as the evaluation criterion for the design and testing of the sensor. The main source of noise was thermal background. The optimal working temperature and flow rate were determined experimentally from the viewpoint of improving the SNR. A series of parameters related to analytical performance was estimated. The limit of detection of the sensor was 7 ppm (SNR = 3) for ethanol and 10 ppm (SNR = 3) for hydrogen sulphide. Zirconia and barium carbonate were selected as nano-sized catalysts for ethanol and hydrogen sulphide, respectively. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Laser control of electronic transitions of wave packet by using quadratically chirped pulses.

    PubMed

    Zou, Shiyang; Kondorskiy, Alexey; Mil'nikov, Gennady; Nakamura, Hiroki

    2005-02-22

    An effective scheme is proposed for the laser control of wave packet dynamics. It is demonstrated that by using specially designed quadratically chirped pulses, fast and nearly complete excitation of the wave packet can be achieved without significant distortion of its shape. The parameters of the laser pulse can be estimated analytically from the Zhu-Nakamura theory of nonadiabatic transitions. If the wave packet is neither too narrow nor too broad, the scheme is expected to be utilizable for multidimensional systems. The scheme is applicable to various processes such as simple electronic excitation, pump-dump, and selective bond breaking, and it is numerically demonstrated to work well, taking diatomic and triatomic molecules (LiH, NaK, H(2)O) as examples.

  6. Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons.

    PubMed

    Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J; Mitter, Sanjoy K

    2014-10-01

    We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum-work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state, and operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamics arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.

  7. Laser control of electronic transitions of wave packet by using quadratically chirped pulses

    NASA Astrophysics Data System (ADS)

    Zou, Shiyang; Kondorskiy, Alexey; Mil'nikov, Gennady; Nakamura, Hiroki

    2005-02-01

    An effective scheme is proposed for the laser control of wave packet dynamics. It is demonstrated that by using specially designed quadratically chirped pulses, fast and nearly complete excitation of the wave packet can be achieved without significant distortion of its shape. The parameters of the laser pulse can be estimated analytically from the Zhu-Nakamura theory of nonadiabatic transitions. If the wave packet is neither too narrow nor too broad, the scheme is expected to be utilizable for multidimensional systems. The scheme is applicable to various processes such as simple electronic excitation, pump-dump, and selective bond breaking, and it is numerically demonstrated to work well, taking diatomic and triatomic molecules (LiH, NaK, H2O) as examples.

  8. Thermal resistance of etched-pillar vertical-cavity surface-emitting laser diodes

    NASA Astrophysics Data System (ADS)

    Wipiejewski, Torsten; Peters, Matthew G.; Young, D. Bruce; Thibeault, Brian; Fish, Gregory A.; Coldren, Larry A.

    1996-03-01

    We discuss our measurements of the thermal impedance and thermal crosstalk of etched-pillar vertical-cavity lasers and laser arrays. The average thermal conductivity of AlAs-GaAs Bragg reflectors is estimated to be 0.28 W/(cm K) and 0.35 W/(cm K) for the transverse and lateral directions, respectively. Lasers with a Au-plated heat-spreading layer exhibit a 50% lower thermal impedance compared to standard etched-pillar devices, resulting in a significant increase in maximum output power. For an unmounted laser of 64 μm diameter we obtain an improvement in output power from 20 mW to 42 mW. The experimental results are compared with a simple analytical model showing the importance of heat sinking for maximizing the output power of vertical-cavity lasers.
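
    For context, a minimal sketch of the classical analytic estimate for the thermal resistance of a circular heat source of diameter D on a semi-infinite medium, R = 1/(2*kappa*D). This is a standard textbook spreading-resistance model, not necessarily the paper's exact model, and the inputs below are illustrative:

        def disk_source_thermal_resistance(kappa, diameter):
            """Spreading resistance of a disk source on a half-space: R = 1/(2*kappa*D), in K/W."""
            return 1.0 / (2.0 * kappa * diameter)

        kappa = 0.3   # W/(cm K), between the measured transverse and lateral DBR values
        D = 64e-4     # cm, the 64 um device quoted in the abstract
        print(disk_source_thermal_resistance(kappa, D), "K/W")  # ~260 K/W, i.e. ~0.26 K/mW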

  9. Numerical prediction of the energy efficiency of the three-dimensional fish school using the discretized Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Lin, Yinwei

    2018-06-01

    A three-dimensional model of a fish school, based on a modified Adomian decomposition method (ADM) discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, owing to the expensive cost of numerical computing and the tedium of three-dimensional data analysis. Here, we propose a simple model based on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the school is then studied. In addition, a complete error analysis for this method is presented.

  10. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    PubMed

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the rate-limiting adsorption kinetics, which are diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling microliter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: the cytokine Tumor Necrosis Factor (TNF)-α and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides continuous, sensitive and specific near real-time monitoring of analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude, and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most of them are scattered over a disputed, sensitive area of interest. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, through the relationship between spectral information and water depth. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, the semi-analytical optimization model of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model; meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization. One island of the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm gives good results in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water area is acceptable. The model thus solves the problem of bathymetry estimation without water-depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  12. Correlating locations in ipsilateral breast tomosynthesis views using an analytical hemispherical compression model

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Snoeren, Peter; Samulski, Maurice; Leifland, Karin; Wallis, Matthew G.; Karssemeijer, Nico

    2011-08-01

    To improve cancer detection in mammography, breast examinations usually consist of two views per breast. In order to combine information from both views, corresponding regions in the views need to be matched. In 3D digital breast tomosynthesis (DBT), this may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. For multiview computer-aided detection (CAD) systems, matching corresponding regions is an essential step that needs to be automated. In this study, we developed an automatic method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a spatial transformation. First we match a model of a compressed breast to the tomosynthesis view containing a point of interest. Then we estimate the location of the corresponding point in the ipsilateral view by assuming that this model was decompressed, rotated and compressed again. In this study, we use a relatively simple, elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. We investigate three different methods to match the compression model to the data by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation, we annotated 208 landmarks in both views of a total of 146 imaged breasts of 109 different patients and applied our method to each location. The best results are obtained by using the centre of gravity of the breast to define the central axis of the model, around which the breast is assumed to rotate between views. Results show a median 3D distance between the actual location and the estimated location of 14.6 mm, a good starting point for a registration method or a feature-based local search method to link suspicious regions in a multiview CAD system. Approximately half of the estimated locations are at most one slice away from the actual location, which makes the method useful as a mammographic workstation tool for radiologists to interactively find corresponding locations in ipsilateral tomosynthesis views.

  13. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  14. Automation effects in a multiloop manual control system

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1986-01-01

    An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation systems stabilized vehicle responses from attitude to velocity to position, and also provided display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time-optimal manual control behavior was established for these tasks, and the role which internal models may play in establishing human-machine performance was discussed.
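
    The "simple crossover model" mentioned above is commonly written in McRuer's form, Yp*Yc = omega_c * exp(-tau_e*s)/s near the crossover frequency; a minimal sketch evaluating this open-loop response, with illustrative crossover frequency and effective time delay (not values from the paper):

        import numpy as np

        def crossover_open_loop(omega, wc=4.0, tau_e=0.2):
            """McRuer crossover model Yp*Yc = wc * exp(-tau_e*s) / s, with s = j*omega."""
            s = 1j * omega
            return wc * np.exp(-tau_e * s) / s

        for wi in np.logspace(-1, 1, 5):
            g = crossover_open_loop(np.array([wi]))[0]
            print(f"omega={wi:6.2f} rad/s  |YpYc|={abs(g):7.2f}  phase={np.degrees(np.angle(g)):7.1f} deg")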

  15. Spectral properties of thermal fluctuations on simple liquid surfaces below shot-noise levels.

    PubMed

    Aoki, Kenichiro; Mitsui, Takahisa

    2012-07-01

    We study the spectral properties of thermal fluctuations on simple liquid surfaces, sometimes called ripplons. Analytical properties of the spectral function are investigated and shown to be composed of regions with simple analytic behavior with respect to the frequency or the wave number. The derived expressions are compared to spectral measurements performed orders of magnitude below shot-noise levels, achieved using a novel noise reduction method. The agreement between the theory of thermal surface fluctuations and the experiment is found to be excellent, elucidating the spectral properties of the surface fluctuations. The measurement method requires only a relatively small sample, both spatially (a few μm) and temporally (~20 s), and relatively weak light power (~0.5 mW), so it has a broad range of applicability, including local measurements, investigations of time-dependent phenomena, and noninvasive measurements.

  16. Experimental evaluation of expendable supersonic nozzle concepts

    NASA Technical Reports Server (NTRS)

    Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.

    1990-01-01

    Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.

  17. Implementing a Matrix-free Analytical Jacobian to Handle Nonlinearities in Models of 3D Lithospheric Deformation

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Popov, A.

    2015-12-01

    The analytical expression for the Jacobian is a key component in achieving fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this in practice often requires a significant algebraic effort, so it is quite common to use a cheap alternative instead, for example approximating the Jacobian with a finite difference estimation. Despite its simplicity, this is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the perturbation parameter selection; unfortunately, no universal rule can provide both a robust scaling and a robust perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations, together with an elasto-visco-plastic rheology and a marker-in-cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered-grid finite difference discretization in space, and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with a Galerkin geometric multigrid. Because of the inherent problems of finite difference Jacobian estimation, this approach does not always result in stable convergence. In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically-derived expressions, and we compare results with those obtained with a finite difference approximation of the Jacobian. This project is funded by ERC Starting Grant 258830 and computer facilities were provided by the Jülich supercomputer center (Germany).
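
    A minimal sketch of the two Jacobian-vector products being contrasted, applied to a toy nonlinear residual (not LaMEM's equations): the finite-difference JFNK approximation and an analytically derived matvec:

        import numpy as np

        A = np.diag([2.0, 3.0])

        def residual(u):
            """Toy nonlinear residual F(u) = A u + u^3 standing in for the discretized PDE."""
            return A @ u + u**3

        def jv_finite_difference(F, u, v, eps=1e-7):
            """JFNK-style approximation: J v ~ (F(u + eps*v) - F(u)) / eps."""
            return (F(u + eps * v) - F(u)) / eps

        def jv_analytical(u, v):
            """Exact matvec for the toy residual: J = A + diag(3 u^2)."""
            return A @ v + 3.0 * u**2 * v

        u = np.array([1.0, -2.0])
        v = np.array([0.5, 1.0])
        print(jv_finite_difference(residual, u, v))  # approximate, eps-sensitive
        print(jv_analytical(u, v))                   # exact, no perturbation parameter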

  18. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    NASA Astrophysics Data System (ADS)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system, which describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey-atmosphere radiative-convective (RC) model as well as with Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression, like the RC model, can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches; the RC model, however, shows a lower bias. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression, while the comparison with the RC model indicates a way to improve the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as does their sensitivity to surface warming. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
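
    A minimal numerical sketch of the maximum-power argument, assuming a linearized surface energy balance with illustrative coefficients (not the paper's model): the convective flux J both powers the heat engine and reduces the surface-atmosphere temperature difference, so the Carnot-like power J*dT/Ts peaks at an intermediate J, near half the absorbed solar radiation in this linear case:

        import numpy as np

        Rs = 160.0   # W m^-2, absorbed solar radiation at the surface (assumed)
        kr = 5.0     # W m^-2 K^-1, linearized radiative exchange coefficient (assumed)
        Ts = 288.0   # K, reference surface temperature (assumed)

        J = np.linspace(0.0, Rs, 1001)          # candidate convective heat fluxes
        dT = (Rs - J) / kr                      # energy balance: larger J -> smaller dT
        P = J * dT / Ts                         # Carnot-like power of the convective heat engine
        print("optimal J =", J[np.argmax(P)])   # maximum power near J = Rs/2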

  19. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD

    NASA Astrophysics Data System (ADS)

    Dixon, Lance J.; Luo, Ming-xing; Shtabovenko, Vladyslav; Yang, Tong-Zhi; Zhu, Hua Xing

    2018-03-01

    The energy-energy correlation (EEC) between two detectors in e+e- annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.

  20. 3-D discrete analytical ridgelet transform.

    PubMed

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform within discrete analytical geometry theory, by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin, defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.

  1. Galaxy–galaxy lensing estimators and their covariance properties

    DOE PAGES

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...

    2017-07-21

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  2. Galaxy–galaxy lensing estimators and their covariance properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  3. Galaxy-galaxy lensing estimators and their covariance properties

    NASA Astrophysics Data System (ADS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
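
    A schematic numpy illustration of the subtraction emphasized above, with toy stacked signals: the measurement around random points picks up the additive term that the lens-minus-random estimator removes. All arrays are synthetic placeholders, not survey data:

        import numpy as np

        rng = np.random.default_rng(1)
        n_bins = 8
        # Toy radially binned tangential-shear signals: true lensing profile plus an
        # additive systematic, and the same systematic measured around random points.
        signal_lens = 0.1 / np.arange(1, n_bins + 1) + 0.02 + 0.01 * rng.normal(size=n_bins)
        signal_rand = 0.02 + 0.01 * rng.normal(size=n_bins)

        # Lens-minus-random: measures against the overdensity field, removing the
        # additive term (and, in the real estimator, the extra covariance it carries).
        delta_sigma = signal_lens - signal_rand
        print(delta_sigma)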

  4. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  5. Transport of a decay chain in homogenous porous media: analytical solutions.

    PubMed

    Bauer, P; Attinger, S; Kinzelbach, W

    2001-06-01

    With the aid of integral transforms, analytical solutions for the transport of a decay chain in homogenous porous media are derived. Unidirectional steady-state flow and radial steady-state flow in single and multiple porosity media are considered. At least in Laplace domain, all solutions can be written in closed analytical formulae. Partly, the solutions can also be inverted analytically. If not, analytical calculation of the steady-state concentration distributions, evaluation of temporal moments and numerical inversion are still possible. Formulae for several simple boundary conditions are given and visualized in this paper. The derived novel solutions are widely applicable and are very useful for the validation of numerical transport codes.
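
    Closely related to such solutions, the classical Bateman formula gives the chain members' abundances under pure radioactive decay (no advection or dispersion), assuming only the first member is present initially; a minimal sketch with illustrative decay constants:

        import numpy as np

        def bateman(n0, lambdas, t):
            """Atoms of each chain member at time t, with only member 0 present at t = 0."""
            lam = np.asarray(lambdas, dtype=float)
            n = np.zeros(len(lam))
            for i in range(len(lam)):
                prod = np.prod(lam[:i])    # lambda_0 * ... * lambda_{i-1} (empty product = 1)
                s = 0.0
                for j in range(i + 1):
                    denom = np.prod([lam[k] - lam[j] for k in range(i + 1) if k != j])
                    s += np.exp(-lam[j] * t) / denom
                n[i] = n0 * prod * s
            return n

        # Illustrative three-member chain (decay constants in 1/s, time in s):
        print(bateman(1e6, [1e-3, 5e-4, 2e-4], t=1000.0))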

  6. Relativistic Light Sails

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipping, David, E-mail: dkipping@astro.columbia.edu

    One proposed method for spacecraft to reach nearby stars is by accelerating sails using either solar radiation pressure or directed energy. This idea constitutes the thesis behind the Breakthrough Starshot project, which aims to accelerate a gram-mass spacecraft up to one-fifth the speed of light toward Proxima Centauri. For such a case, the combination of the sail’s low mass and relativistic velocity renders previous treatments incorrect at the 10% level, including that of Einstein himself in his seminal 1905 paper introducing special relativity. To address this, we present formulae for a sail’s acceleration, first in response to a single photon and then extended to an ensemble. We show how the sail’s motion in response to an ensemble of incident photons is equivalent to that of a single photon of energy equal to that of the ensemble. We use this principle of ensemble equivalence for both perfect and imperfect mirrors, enabling a simple analytic prediction of the sail’s velocity curve. Using our results and adopting putative parameters for Starshot, we estimate that previous relativistic treatments underestimate the spacecraft’s terminal velocity by ∼10% for the same incident energy. Additionally, we use a simple model to predict the sail’s temperature and diffraction beam losses during the laser firing period; this allows us to estimate that, for firing times of a few minutes and operating temperatures below 300°C (573 K), Starshot will require a sail that absorbs less than one in 260,000 photons.

  7. Relativistic Light Sails

    NASA Astrophysics Data System (ADS)

    Kipping, David

    2017-06-01

    One proposed method for spacecraft to reach nearby stars is by accelerating sails using either solar radiation pressure or directed energy. This idea constitutes the thesis behind the Breakthrough Starshot project, which aims to accelerate a gram-mass spacecraft up to one-fifth the speed of light toward Proxima Centauri. For such a case, the combination of the sail’s low mass and relativistic velocity renders previous treatments incorrect at the 10% level, including that of Einstein himself in his seminal 1905 paper introducing special relativity. To address this, we present formulae for a sail’s acceleration, first in response to a single photon and then extended to an ensemble. We show how the sail’s motion in response to an ensemble of incident photons is equivalent to that of a single photon of energy equal to that of the ensemble. We use this principle of ensemble equivalence for both perfect and imperfect mirrors, enabling a simple analytic prediction of the sail’s velocity curve. Using our results and adopting putative parameters for Starshot, we estimate that previous relativistic treatments underestimate the spacecraft’s terminal velocity by ∼10% for the same incident energy. Additionally, we use a simple model to predict the sail’s temperature and diffraction beam losses during the laser firing period; this allows us to estimate that, for firing times of a few minutes and operating temperatures below 300°C (573 K), Starshot will require a sail that absorbs less than one in 260,000 photons.
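
    A minimal numerical sketch of the photon-momentum bookkeeping described above, for a perfect mirror: each increment of intercepted laser energy dE transfers momentum 2 dE / ((1 + beta) c), because the reflected photon is Doppler-redshifted by (1 - beta)/(1 + beta). The mass and total energy below are illustrative assumptions, not the Starshot design parameters:

        import numpy as np

        c = 299792458.0       # m/s
        m = 0.001             # kg, gram-class sail plus payload (assumed)
        E_total = 5e13        # J, total laser energy intercepted by the sail (assumed)

        p = 0.0               # sail momentum, kg m/s
        n_steps = 100000
        for dE in np.full(n_steps, E_total / n_steps):
            E_s = np.hypot(p * c, m * c**2)        # relativistic energy of the sail
            beta = p * c / E_s
            p += (2.0 / (1.0 + beta)) * dE / c     # momentum from one reflected energy increment

        E_s = np.hypot(p * c, m * c**2)
        print("terminal beta =", p * c / E_s)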

  8. Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.

    PubMed

    Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi

    2018-03-15

    Recently, the online social networks (OSNs) have received considerable attentions as a revolutionary platform to offer users massive social interaction among users that enables users to be more involved in their own healthcare. The OSNs have also promoted increasing interests in the generation of analytical, data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model, in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habit, physical activity, integral/incidental emotions, and self-consciousness. Based on the detailed measurements on the correlation of obesity and proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas and the experimental results demonstrate a high success estimation rate. The measurements and estimation experimental findings created by the proposed obesity analytic model show that the online social networks could be used in analyzing the local spatial obesity problems effectively. Copyright © 2018. Published by Elsevier Inc.

  9. A Comprehensive Analytical Solution of the Nonlinear Pendulum

    ERIC Educational Resources Information Center

    Ochs, Karlheinz

    2011-01-01

    In this paper, an analytical solution for the differential equation of the simple but nonlinear pendulum is derived. This solution is valid for any time and is not limited to any particular initial instant or initial values. Moreover, the solution holds whether or not the pendulum swings over. The method of approach is based on Jacobi elliptic functions…
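
    Consistent with the Jacobi-elliptic-function solution, the exact period follows from the complete elliptic integral of the first kind, T = (4/omega0) K(m) with m = sin^2(theta0/2); a minimal sketch, valid below the swing-over limit:

        import numpy as np
        from scipy.special import ellipk

        def pendulum_period(theta0, g=9.81, length=1.0):
            """Exact period T = 4/omega0 * K(m); SciPy's ellipk takes the parameter m = k^2."""
            omega0 = np.sqrt(g / length)
            m = np.sin(theta0 / 2.0) ** 2
            return 4.0 / omega0 * ellipk(m)

        # Small amplitudes recover 2*pi*sqrt(L/g); the period diverges toward 180 degrees.
        for theta0 in np.radians([5.0, 45.0, 90.0, 170.0]):
            print(f"{np.degrees(theta0):5.0f} deg -> T = {pendulum_period(theta0):.4f} s")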

  10. Effect of train carbody's parameters on vertical bending stiffness performance

    NASA Astrophysics Data System (ADS)

    Yang, Guangwu; Wang, Changke; Xiang, Futeng; Xiao, Shoune

    2016-10-01

    Finite element analysis (FEA) and modal testing are at present the main methods for obtaining the first-order vertical bending vibration frequency of a train carbody, but they are inefficient and consume plenty of time. Based on Timoshenko beam theory, the bending deformation, moment of inertia and shear deformation are considered. The carbody is divided into segments of equal length, its stiffness is calculated with the series principle, its cross-sectional area, moment of inertia and shear shape coefficient are equivalenced by segment length, and the final corrected first-order vertical bending vibration frequency formula is deduced. Six simple carbodies and one real carbody are used as examples to test the formula; all analytical frequencies are very close to their FEA frequencies, and for the real carbody the error between the analytical and experimental frequency is 0.75%. Based on the analytical formula, a sensitivity analysis of the real carbody's design parameters is performed and the main parameters are identified. The series principle of carbody stiffness is thus introduced into Timoshenko beam theory to deduce a formula that can quickly estimate the first-order vertical bending vibration frequency of a carbody without the traditional FEA method, providing a reference for design engineers.
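
    For orientation, a minimal sketch of the simpler free-free Euler-Bernoulli estimate of the first bending frequency; it omits the shear and rotary-inertia corrections that the paper's Timoshenko-based formula adds, and all numbers are illustrative, not the paper's carbody data:

        import numpy as np

        def first_bending_freq(E, I, rho, A, L):
            """Free-free Euler-Bernoulli beam: f1 = (beta1*L)^2 / (2*pi*L^2) * sqrt(E*I/(rho*A))."""
            beta_L = 4.730  # first free-free root of cos(bL)*cosh(bL) = 1
            return beta_L**2 / (2.0 * np.pi * L**2) * np.sqrt(E * I / (rho * A))

        # Illustrative carbody-like numbers (steel-equivalent section, 24 m length): ~9 Hz.
        print(first_bending_freq(E=2.1e11, I=0.02, rho=7850.0, A=0.25, L=24.0))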

  11. Virial Coefficients and Equations of State for Hard Polyhedron Fluids.

    PubMed

    Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C

    2017-10-24

    Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them to B8. Using virial coefficients as inputs, we show the convergence properties of four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
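
    A minimal sketch of how virial coefficients enter a truncated equation of state, Z = 1 + B2*rho + B3*rho^2 + ...; the hard-sphere second virial coefficient B2 = 2*pi/3 (unit diameter) is used as a check value, since the polyhedral coefficients are tabulated in the paper rather than reproduced here:

        import numpy as np

        def compressibility_factor(rho, B):
            """Truncated virial series Z = 1 + sum_n B_{n+1} rho^n, with B = [B2, B3, ...]."""
            return 1.0 + sum(b * rho ** (i + 1) for i, b in enumerate(B))

        B = [2.0 * np.pi / 3.0]                  # hard spheres of unit diameter: exact B2
        print(compressibility_factor(0.3, B))    # low-order estimate at number density 0.3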

  12. Rapid and sensitive determination of tellurium in soil and plant samples by sector-field inductively coupled plasma mass spectrometry.

    PubMed

    Yang, Guosheng; Zheng, Jian; Tagami, Keiko; Uchida, Shigeo

    2013-11-15

    In this work, we report a rapid and highly sensitive analytical method for the determination of tellurium in soil and plant samples using sector-field inductively coupled plasma mass spectrometry (SF-ICP-MS). Soil and plant samples were digested using aqua regia. After appropriate dilution, Te in soil and plant samples was analyzed directly without any separation or preconcentration. This simple sample preparation approach avoided, to the maximum extent, any contamination and loss of Te prior to the analysis. The developed analytical method was validated by the analysis of soil/sediment and plant reference materials. Satisfactory detection limits of 0.17 ng g(-1) for soil and 0.02 ng g(-1) for plant samples were achieved, which means that the developed method is applicable to studying the soil-to-plant transfer factor of Te. Our work provides, for the first time, data on the soil-to-plant transfer factor of Te for Japanese samples, which can be used for the estimation of the internal radiation dose from radioactive tellurium due to the Fukushima Daiichi Nuclear Power Plant accident. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study, the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plums, and watermelons from the Valencian Community (Spain) was investigated. Nine additional pesticides were chosen because they have been recommended for orchard treatment together with imidacloprid. Mulliken population analysis was applied to present the charge distribution in imidacloprid. Partitioned energy terms and virial ratios were calculated for certain interacting molecules. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work; the interaction ability could be established correctly in the studied case. An attempt is also made in this work to address accuracy and precision, quantities well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and for the interacting complex structure, some properties of the latter system can be predicted to quite good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.

  14. Analysis of pultrusion processing for long fiber reinforced thermoplastic composite system

    NASA Technical Reports Server (NTRS)

    Tso, W.; Hou, T. H.; Tiwari, S. N.

    1993-01-01

    Pultrusion is a composite processing technology commonly recognized as a simple and cost-effective means of manufacturing fiber-reinforced, resin-matrix composite parts with various regular geometries. Previously, because the majority of pultruded composite parts were made with thermosetting resin matrices, emphasis in analyses of the process was on the conservation of energy from various sources, such as heat conduction and the curing kinetics of the resin system. Analysis of the flow aspect of the process was almost absent from the literature for thermosetting processing. With the increasing use of thermoplastic materials, it is desirable to obtain the detailed velocity and pressure profiles inside the pultrusion die. Using a modified Darcy's law for flow through porous media, closed-form analytical solutions for the velocity and pressure distributions inside the pultrusion die are obtained for the first time. This enables us to estimate the magnitude of viscous dissipation and its effects on the pultruded parts. Pulling forces required in pultrusion processing are also analyzed. The analytical model derived in this study can be used to advance our knowledge and control of the pultrusion process for fiber-reinforced thermoplastic composite parts.

  15. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates

    EPA Science Inventory

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concent...

  16. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 1. Analytical Model

    DOT National Transportation Integrated Search

    1975-05-01

    The report describes an analytical approach to estimation of fuel consumption in rail transportation, and provides sample computer calculations suggesting the sensitivity of fuel usage to various parameters. The model used is based upon careful delin...

  17. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variation associated with sampling and segmentation may be a significant factor in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with focus on the assessment of both the analytical and the pre-analytical sampling variation. The validated method was specific, accurate (80-120%), and precise (CV ≤ 20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was three to seven times larger than the analytical variation (7-13%) and hence the dominant component of the total variation (29-70%). The present study demonstrates the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95% uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
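
    A minimal sketch of the variance bookkeeping used above: CVs add in quadrature, so the pre-analytical component follows by subtracting the analytical variance from the total variance of genuine duplicates. The numbers are illustrative, of the same order as those reported:

        import numpy as np

        cv_total = np.array([0.29, 0.45, 0.70])        # total CVs from duplicate bundles (illustrative)
        cv_analytical = np.array([0.07, 0.10, 0.13])   # analytical CVs from validation (illustrative)

        # Variances add in quadrature: CV_total^2 = CV_analytical^2 + CV_preanalytical^2
        cv_preanalytical = np.sqrt(cv_total**2 - cv_analytical**2)
        print(cv_preanalytical)                        # dominant component, as in the study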

  18. A Simple Sonication Improves Protein Signal in Matrix-Assisted Laser Desorption Ionization Imaging

    NASA Astrophysics Data System (ADS)

    Lin, Li-En; Su, Pin-Rui; Wu, Hsin-Yi; Hsu, Cheng-Chih

    2018-02-01

    Proper matrix application is crucial in obtaining high quality matrix-assisted laser desorption ionization (MALDI) mass spectrometry imaging (MSI). Solvent-free sublimation was originally introduced as an approach to homogeneous coating that gives a small crystal size of the organic matrix; however, sublimation has lower extraction efficiency for analytes. Here, we show that a simple sonication step after the hydration step of the standard sublimation protocol significantly enhances the sensitivity of MALDI MSI. This modified procedure uses a common laboratory ultrasonicator to immobilize the analytes from tissue sections without noticeable delocalization. Improved imaging quality, with additional peaks above 10 kDa in the spectra, was thus obtained upon sonication treatment.

  19. Electrochemistry and analytical determination of lysergic acid diethylamide (LSD) via adsorptive stripping voltammetry.

    PubMed

    Merli, Daniele; Zamboni, Daniele; Protti, Stefano; Pesavento, Maria; Profumo, Antonella

    2014-12-01

    Lysergic acid diethylamide (LSD) is difficult to detect and quantify in biological samples because of its low active dose. Although several analytical tests are available, routine analysis of this drug is rarely performed. In this article, we report a simple and accurate method for the determination of LSD, based on adsorptive stripping voltammetry in DMF/tetrabutylammonium perchlorate, with a linear range of 1-90 ng L(-1) for deposition times of 50 s. An LOD of 1.4 ng L(-1) and an LOQ of 4.3 ng L(-1) were found. The method can also be applied to biological samples after a simple extraction with 1-chlorobutane. Copyright © 2014 Elsevier B.V. All rights reserved.
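
    As background, a minimal sketch of a standard calibration-curve estimate of LOD and LOQ, using the common 3.3*s/slope and 10*s/slope convention; the paper's own figures were derived from its voltammetric data, and all numbers below are illustrative:

        import numpy as np

        # Illustrative calibration points (concentration in ng/L vs. stripping peak current, a.u.):
        conc = np.array([1.0, 10.0, 25.0, 50.0, 90.0])
        signal = np.array([0.9, 8.8, 22.5, 45.1, 80.7])

        slope, intercept = np.polyfit(conc, signal, 1)
        resid = signal - (slope * conc + intercept)
        s_y = np.sqrt(np.sum(resid**2) / (len(conc) - 2))   # residual standard deviation

        print("LOD =", 3.3 * s_y / slope, "ng/L")
        print("LOQ =", 10.0 * s_y / slope, "ng/L")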

  20. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called the PRT-5. This procedure allows the computation of the atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters, individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent the combined correction for the simultaneous variation of several parameters in terms of their individual corrections.
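
    The regression step lends itself to a compact illustration. The sketch below uses synthetic parameter deviations and corrections (stand-ins for the radiative transfer results, which are not reproduced here) to fit and evaluate such a polynomial relation:

        import numpy as np

        # Synthetic example: parameter deviation (e.g., water vapor, g/cm^2)
        # versus brightness-temperature correction (K) from model runs
        dev = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
        corr = np.array([-0.9, -0.45, 0.0, 0.5, 1.1, 1.8])

        coeffs = np.polyfit(dev, corr, deg=2)       # simple polynomial regression
        print(f"correction at deviation 0.8: {np.polyval(coeffs, 0.8):.2f} K")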

  1. Collector modulation in high-voltage bipolar transistor in the saturation mode: Analytical approach

    NASA Astrophysics Data System (ADS)

    Dmitriev, A. P.; Gert, A. V.; Levinshtein, M. E.; Yuferev, V. S.

    2018-04-01

    A simple analytical model is developed, capable of replacing the numerical solution of a system of nonlinear partial differential equations by solving a simple algebraic equation when analyzing the collector resistance modulation of a bipolar transistor in the saturation mode. In this approach, the leakage of the base current into the emitter and the recombination of non-equilibrium carriers in the base are taken into account. The data obtained are in good agreement with the results of numerical calculations and make it possible to describe both the motion of the front of the minority carriers and the steady state distribution of minority carriers across the collector in the saturation mode.

  2. Simple and Sensitive Paper-Based Device Coupling Electrochemical Sample Pretreatment and Colorimetric Detection.

    PubMed

    Silva, Thalita G; de Araujo, William R; Muñoz, Rodrigo A A; Richter, Eduardo M; Santana, Mário H P; Coltro, Wendell K T; Paixão, Thiago R L C

    2016-05-17

    We report the development of a simple, portable, low-cost, high-throughput visual colorimetric paper-based analytical device for the detection of procaine in seized cocaine samples. The interference of the most common cutting agents found in cocaine samples was verified, and a novel electrochemical approach was used for sample pretreatment in order to increase the selectivity. Under the optimized experimental conditions, a linear analytical curve was obtained for procaine concentrations ranging from 5 to 60 μmol L⁻¹, with a detection limit of 0.9 μmol L⁻¹. The accuracy of the proposed method was evaluated using seized cocaine samples and an addition and recovery protocol.

  3. SU-FF-T-668: A Simple Algorithm for Range Modulation Wheel Design in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, X; Nazaryan, Vahagn; Gueye, Paul

    2009-06-01

    Purpose: To develop a simple algorithm for designing the range modulation wheel needed to generate a very smooth spread-out Bragg peak (SOBP) for proton therapy. Method and Materials: A simple algorithm was developed to generate the weight factors of the pristine Bragg peaks that compose a smooth SOBP. We used a modified analytical Bragg peak function, based on the Geant4 Monte Carlo simulation toolkit, as the pristine Bragg peak input to our algorithm, and a MATLAB quadratic programming routine to optimize the cost function. Results: We found that the existing analytical Bragg peak function cannot be used directly as the pristine-peak depth-dose input for the weight-factor optimization, since that model does not account for the scattering introduced by the range shifts used to modify the proton beam energies. We performed Geant4 simulations for a proton energy of 63.4 MeV with a 1.08 cm SOBP for the set of pristine Bragg peaks composing this SOBP, and modified the existing analytical Bragg peak functions for their peak heights, ranges R_0, and energy spreads σ_E. We found that 19 pristine Bragg peaks are enough to achieve an SOBP flatness of 1.5%, the best flatness reported in the literature. Conclusion: This work develops a simple algorithm to generate the weight factors used to design a range modulation wheel producing a smooth SOBP in proton radiation therapy. A moderate number of pristine Bragg peaks is enough to generate an SOBP with flatness below 2%. The algorithm can potentially be used to generate a database, stored in the treatment planning system, for producing clinically acceptable SOBPs.
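
    The weight-factor optimization can be prototyped along these lines. The sketch below substitutes idealized Gaussian depth-dose curves for the modified analytical Bragg peak functions and a non-negative least-squares solve for the MATLAB quadratic program, so it illustrates the approach rather than reproducing it:

        import numpy as np
        from scipy.optimize import nnls

        depth = np.linspace(0.0, 3.5, 200)     # depth in water, cm
        ranges = np.linspace(1.9, 3.0, 19)     # 19 pristine peaks, as in the abstract

        # Idealized pristine peaks: narrow Gaussians stand in for Bragg curves
        peaks = np.exp(-((depth[:, None] - ranges[None, :]) ** 2) / (2 * 0.05**2))

        target = ((depth >= 1.9) & (depth <= 3.0)).astype(float)   # unit-dose SOBP
        weights, _ = nnls(peaks, target)       # non-negative weight factors

        sobp = peaks @ weights
        plateau = sobp[(depth >= 2.0) & (depth <= 2.9)]
        print(f"flatness: {(plateau.max() - plateau.min()) / plateau.mean():.1%}")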

  4. Consequences of tidal interaction between disks and orbiting protoplanets for the evolution of multi-planet systems with architecture resembling that of Kepler 444

    NASA Astrophysics Data System (ADS)

    Papaloizou, J. C. B.

    2016-11-01

    We study orbital evolution of multi-planet systems with masses in the terrestrial planet regime induced through tidal interaction with a protoplanetary disk assuming that this is the dominant mechanism for producing orbital migration and circularization. We develop a simple analytic model for a system that maintains consecutive pairs in resonance while undergoing orbital circularization and migration. This model enables migration times for each planet to be estimated once planet masses, circularization times and the migration time for the innermost planet are specified. We applied it to a system with the current architecture of Kepler 444 adopting a simple protoplanetary disk model and planet masses that yield migration times inversely proportional to the planet mass, as expected if they result from torques due to tidal interaction with the protoplanetary disk. Furthermore, the evolution time for the system as a whole is comparable to current protoplanetary disk lifetimes. In addition we have performed a number of numerical simulations with input data obtained from this model. These indicate that although the analytic model is inexact, relatively small corrections to the estimated migration rates yield systems for which period ratios vary by a minimal extent. Because of relatively large deviations from exact resonance in the observed system of up to 2%, the migration times obtained in this way indicate only weak convergent migration such that a system for which the planets did not interact would contract by only ~1% although undergoing significant inward migration as a whole. We have also performed additional simulations to investigate conditions under which the system could undergo significant convergent migration before reaching its final state. These indicate that migration times have to be significantly shorter and resonances between planet pairs significantly closer during such an evolutionary phase. Relative migration rates would then have to decrease allowing period ratios to increase to become more distant from resonances as the system approached its final state in the inner regions of the protoplanetary disk.

  5. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared error than the estimates produced using either simple random sampling or Procedure 1.

  6. ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT

    EPA Science Inventory

    The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...

  7. Estimating Aquifer Properties Using Sinusoidal Pumping Tests

    NASA Astrophysics Data System (ADS)

    Rasmussen, T. C.; Haborak, K. G.; Young, M. H.

    2001-12-01

    We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
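
    For the confined (Theis-analog) case, the periodic steady-state drawdown has a compact closed form in terms of the modified Bessel function K0 with complex argument. The sketch below evaluates that standard result with mpmath (SciPy's Bessel routines do not take complex arguments); all parameter values are assumed for illustration only:

        import mpmath as mp

        T = 1e-3                      # transmissivity, m^2/s (assumed)
        S = 1e-4                      # storativity (assumed)
        Q0 = 1e-3                     # amplitude of sinusoidal pumping, m^3/s
        omega = 2 * mp.pi / 3600.0    # one-hour forcing period
        r = 10.0                      # observation distance, m

        # Periodic steady state: s(r,t) = Re[(Q0/(2*pi*T)) K0(r sqrt(i w S/T)) e^(i w t)]
        phi = Q0 / (2 * mp.pi * T) * mp.besselk(0, r * mp.sqrt(1j * omega * S / T))
        print("amplitude (m):", mp.nstr(abs(phi), 4))
        print("phase lag (rad):", mp.nstr(-mp.arg(phi), 4))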

  8. An analytical fiber bundle model for pullout mechanics of root bundles

    NASA Astrophysics Data System (ADS)

    Cohen, D.; Schwarz, M.; Or, D.

    2011-09-01

    Roots in soil contribute to the mechanical stability of slopes. Estimation of root reinforcement is challenging because roots form complex biological networks whose geometrical and mechanical characteristics are difficult to characterize. Here we describe an analytical model that builds on simple root descriptors to estimate root reinforcement. Root bundles are modeled as bundles of heterogeneous fibers pulled along their long axes, neglecting root-soil friction. Analytical expressions for the pullout force as a function of displacement are derived. The maximum pullout force and corresponding critical displacement are either derived analytically or computed numerically. Key model inputs are a root diameter distribution (uniform, Weibull, or lognormal) and three empirical power law relations describing tensile strength, elastic modulus, and length of roots as functions of root diameter. When a root bundle with root tips anchored in the soil matrix is pulled by a rigid plate, a unique parameter, λ, that depends only on the exponents of the power law relations, dictates the order in which roots of different diameters break. If λ < 1, small roots break first; if λ > 1, large roots break first. When λ = 1, all fibers break simultaneously, and the maximum tensile force is simply the roots' mean force times the number of roots in the bundle. Based on measurements of root geometry and mechanical properties, the value of λ is less than 1, usually ranging between 0 and 0.7. Thus, small roots always fail first. The model shows how the geometrical and mechanical characteristics of roots and the root diameter distribution affect the pullout force, its maximum, and the corresponding displacement. Comparing bundles of roots that have similar mean diameters, a bundle with a narrow variance in root diameter will result in a larger maximum force and a smaller displacement at maximum force than a bundle with a wide diameter distribution. Increasing the mean root diameter of a bundle without changing the distribution's shape increases both the maximum force and the corresponding displacement. Estimates of the maximum pullout forces for bundles of 100 roots with identical diameter distribution for different species range from less than 1 kN for barley (Hordeum vulgare) to almost 16 kN for pistachio (Pistacia lentiscus). The model explains why the commonly used assumption that all roots break simultaneously overpredicts the maximum pullout force by a factor of about 1.6-2. This ratio may exceed 3 for diameter distributions that have a large number of small roots, like the exponential distribution.
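
    The pullout mechanics are straightforward to reproduce numerically. The sketch below uses hypothetical power-law coefficients and a lognormal diameter distribution (not values from the paper), breaks each linear-elastic root at its strength limit, and locates the maximum bundle force:

        import numpy as np

        rng = np.random.default_rng(1)
        d = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=100)   # diameters, mm

        # Hypothetical power laws for strength (MPa), modulus (MPa), length (mm)
        sigma_t = 30.0 * d ** -0.5
        E = 300.0 * d ** -0.3
        L = 100.0 * d ** 0.7

        A = np.pi * (d / 2.0) ** 2        # cross-sections, mm^2
        dx_break = sigma_t * L / E        # displacement at which each root breaks, mm

        dx = np.linspace(0.0, dx_break.max(), 2000)
        # Each intact root carries E*A/L * dx; a broken root carries nothing
        stiff = E * A / L                                  # N/mm per root
        force = (stiff[None, :] * dx[:, None]) * (dx[:, None] < dx_break[None, :])
        total = force.sum(axis=1)

        i = total.argmax()
        print(f"max pullout force {total[i] / 1000.0:.2f} kN at {dx[i]:.1f} mm")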

  9. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for the appraisal of the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
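
    The role of the Jacobian in such a scheme can be illustrated with a modern automatic differentiation tool. The toy sketch below uses JAX (tracing-based AD rather than the source transformation used in this paper) on a hypothetical stand-in forward model, T(z) = qz/k, to obtain exact derivatives of the kind a divided-difference scheme would only approximate:

        import jax
        import jax.numpy as jnp

        def forward(params, z):
            # Toy stand-in for a coupled flow/heat forward model: linear
            # temperature profile T(z) = q*z/k for conductivity k, heat flow q
            k, q = params
            return q * z / k

        z_obs = jnp.linspace(100.0, 1000.0, 10)   # observation depths, m
        params = jnp.array([2.5, 0.06])           # k (W/m/K), q (W/m^2), assumed

        # Exact Jacobian (10 observations x 2 parameters), free of truncation error
        J = jax.jacfwd(forward)(params, z_obs)
        print(J.shape, J[0])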

  10. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that, when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball being formed. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest; in this case we used a numerical method to calculate the more complicated final-state probabilities. Additionally, we examined the formation of strange and charmed mesons, using existing data to fit the relevant model parameters.

  11. Comprehensive near infrared study of Jatropha oil esterification with ethanol for biodiesel production

    NASA Astrophysics Data System (ADS)

    Oliveira, Alianda Dantas de; Sá, Ananda Franco de; Pimentel, Maria Fernanda; Pacheco, José Geraldo A.; Pereira, Claudete Fernandes; Larrechi, Maria Soledad

    2017-01-01

    This work presents a comprehensive near infrared study for the in-line monitoring of the esterification reaction of high-acidity oils, such as Jatropha curcas oil, with ethanol. Parallel reactions involved in the process were carried out to select a spectral region that characterizes the evolution of the esterification reaction. Using absorbance intensities at 5176 cm⁻¹, the conversion and kinetic behavior of the esterification reaction were estimated. This method was applied to evaluate the influence of temperature and catalyst concentration on the estimates of initial reaction rate and ester conversion as responses in a 2² factorial experimental design. Employing an alcohol/oil ratio of 16:1, a catalyst concentration of 1.5% w/w, and temperatures of 65 °C or 75 °C made it possible to reduce the initial acidity from 18% to 1.3% w/w, which is suitable for the transesterification of high free fatty acid oils for biodiesel production. Using the proposed analytical method in the esterification reaction of raw materials with high free fatty acid content makes the monitoring process inexpensive, fast, simple, and practical.

  12. Quantifying the density of surface capping ligands on semiconductor quantum dots

    NASA Astrophysics Data System (ADS)

    Zhan, Naiqian; Palui, Goutam; Merkl, Jan-Philip; Mattoussi, Hedi

    2015-03-01

    We have designed a new set of coordinating ligands made of a lipoic acid (LA) anchor and poly(ethylene glycol) (PEG) hydrophilic moiety appended with a terminal aldehyde for the surface functionalization of QDs. This ligand design was combined with a recently developed photoligation strategy to prepare hydrophilic CdSe-ZnS QDs with good control over the fraction of intact aldehyde (-CHO) groups per nanocrystal. We further applied the efficient hydrazone ligation to react aldehyde-QDs with 2-hydrazinopyridine (2-HP). This covalent modification produces QD-conjugates with a well-defined absorption feature at 350 nm ascribed to the hydrazone chromophore. We exploited this unique optical signature to accurately measure the number of aldehyde groups per QD when the fraction of LA-PEG-CHO per nanocrystal was varied. This allowed us to extract an estimate for the number of LA-PEG ligands per QD. These results suggest that hydrazone ligation has the potential to provide a simple and general analytical method to estimate the number of surface ligands for a variety of nanocrystals such as metal, metal oxide and semiconductor nanocrystals.

  13. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form.

    PubMed

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S

    2011-07-01

    To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from the pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase, with no immiscibility issues. Densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (limit of detection), LOQ (limit of quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO, with correlation coefficients r² = 0.998 and 0.999, respectively. The percentage recovery for both analytes was in the range of 98.7-101.2%. The proposed method was optimized and validated as per the ICH guidelines.

  14. Vertical and pitching resonance of train cars moving over a series of simple beams

    NASA Astrophysics Data System (ADS)

    Yang, Y. B.; Yau, J. D.

    2015-02-01

    The resonant response, including both vertical and pitching motions, of an undamped sprung mass unit moving over a series of simple beams is studied by a semi-analytical approach. For a sprung mass that is very small compared with the beam, we first simplify the sprung mass as a constant moving force and obtain the response of the beam in closed form. With this, we then solve for the response of the sprung mass passing over a series of simple beams, and validate the solution by an independent finite element analysis. To evaluate the pitching resonance, we consider the cases of a two-axle model and a coach model traveling over rough rails supported by a series of simple beams. The resonance of a train car is characterized by the fact that its response continues to build up, as it travels over more and more beams. For train cars with long axle intervals, the vertical acceleration induced by pitching resonance dominates the peak response of the train traveling over a series of simple beams. The present semi-analytical study allows us to grasp the key parameters involved in the primary/sub-resonant responses. Other phenomena of resonance are also discussed in the exemplar study.

  15. Characterization, thermal stability studies, and analytical method development of Paromomycin for formulation development.

    PubMed

    Khan, Wahid; Kumar, Neeraj

    2011-06-01

    Paromomycin (PM) is an aminoglycoside antibiotic, first isolated in the 1950s and approved in 2006 for the treatment of visceral leishmaniasis. Although isolated six decades ago, sufficient information essential for the development of a pharmaceutical formulation is not available for PM. The purpose of this paper was to determine the thermal stability of PM and to develop a new analytical method for its formulation development. PM was characterized by thermoanalytical (DSC, TGA, and HSM) and spectroscopic (FTIR) techniques, and these techniques were used to establish the thermal stability of PM after heating at 100, 110, 120, and 130 °C for 24 h. The biological activity of the heated samples was also determined by microbiological assay. Subsequently, a simple, rapid and sensitive RP-HPLC method for the quantitative determination of PM was developed using pre-column derivatization with 9-fluorenylmethyl chloroformate, and applied to estimate PM quantitatively in two parenteral dosage forms. The stated techniques indicated that PM is stable on heating up to 120 °C for 24 h but is liable to degradation at 130 °C; this degradation was also observed in the microbiological assay, where PM lost ∼30% of its biological activity when heated at 130 °C for 24 h. The new analytical method was linear in the concentration range of 25-200 ng/ml with intra-day and inter-day variability of <2% RSD, and was found to be sensitive, accurate, and precise for the quantification of PM. Copyright © 2010 John Wiley & Sons, Ltd.

  16. GLONASS orbit/clock combination in VNIIFTRI

    NASA Astrophysics Data System (ADS)

    Bezmenov, I.; Pasynok, S.

    2015-08-01

    An algorithm and a program for GLONASS satellite orbit/clock combination, based on the daily precise orbits submitted by several Analytic Centers, were developed. Theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, provided the RMS values of the satellite orbits supplied by the Analytic Centers over a long time interval are commensurable, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any single Analytic Center.

  17. OpenACC directive-based GPU acceleration of an implicit reconstructed discontinuous Galerkin method for compressible flows on 3D unstructured grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lou, Jialin; Xia, Yidong; Luo, Lixiang

    2016-09-01

    In this study, we use a combination of modeling techniques to describe the relationship between fracture radius that might be accomplished in a hypothetical enhanced geothermal system (EGS) and drilling distance required to create and access those fractures. We use a combination of commonly applied analytical solutions for heat transport in parallel fractures and 3D finite-element method models of more realistic heat extraction geometries. For a conceptual model involving multiple parallel fractures developed perpendicular to an inclined or horizontal borehole, calculations demonstrate that EGS will likely require very large fractures, of greater than 300 m radius, to keep interfracture drilling distances to ~10 km or less. As drilling distances are generally inversely proportional to the square of fracture radius, drilling costs quickly escalate as the fracture radius decreases. It is important to know, however, whether fracture spacing will be dictated by thermal or mechanical considerations, as the relationship between drilling distance and number of fractures is quite different in each case. Information about the likelihood of hydraulically creating very large fractures comes primarily from petroleum recovery industry data describing hydraulic fractures in shale. Those data suggest that fractures with radii on the order of several hundred meters may, indeed, be possible. The results of this study demonstrate that relatively simple calculations can be used to estimate primary design constraints on a system, particularly regarding the relationship between generated fracture radius and the total length of drilling needed in the fracture creation zone. Comparison of the numerical simulations of more realistic geometries than addressed in the analytical solutions suggests that simple proportionalities can readily be derived to relate a particular flow field.

  18. CMB ISW-lensing bispectrum from cosmic strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamauchi, Daisuke; Sendouda, Yuuiti; Takahashi, Keitaro, E-mail: yamauchi@resceu.s.u-tokyo.ac.jp, E-mail: sendouda@cc.hirosaki-u.ac.jp, E-mail: keitaro@sci.kumamoto-u.ac.jp

    2014-02-01

    We study the effect of weak lensing by cosmic (super-)strings on the higher-order statistics of the cosmic microwave background (CMB). A cosmic string segment is expected to cause weak lensing as well as an integrated Sachs-Wolfe (ISW) effect, the so-called Gott-Kaiser-Stebbins (GKS) effect, to the CMB temperature fluctuation, which are thus naturally cross-correlated. We point out that, in the presence of such a correlation, yet another kind of the post-recombination CMB temperature bispectra, the ISW-lensing bispectra, will arise in the form of products of the auto- and cross-power spectra. We first present an analytic method to calculate the autocorrelation of the temperature fluctuations induced by the strings, and the cross-correlation between the temperature fluctuation and the lensing potential both due to the string network. In our formulation, the evolution of the string network is assumed to be characterized by the simple analytic model, the velocity-dependent one scale model, and the intercommutation probability is properly incorporated in order to characterize the possible superstringy nature. Furthermore, the obtained power spectra are dominated by the Poisson-distributed string segments, whose correlations are assumed to satisfy the simple relations. We then estimate the signal-to-noise ratios of the string-induced ISW-lensing bispectra and discuss the detectability of such CMB signals from the cosmic string network. It is found that in the case of the smaller string tension, Gμ << 10⁻⁷, the ISW-lensing bispectrum induced by a cosmic string network can constrain the string-model parameters even more tightly than the purely GKS-induced bispectrum in the ongoing and future CMB observations on small scales.

  19. Quantification of bupivacaine hydrochloride and isoflupredone acetate residues in porcine muscle, beef, milk, egg, shrimp, flatfish, and eel using a simplified extraction method coupled with liquid chromatography-triple quadrupole tandem mass spectrometry.

    PubMed

    Cho, Sang-Hyun; Park, Jin-A; Zheng, Weijia; Abd El-Aty, A M; Kim, Seong-Kwan; Choi, Jeong-Min; Yi, Hee; Cho, Soo-Min; Afifi, Nehal A; Shim, Jae-Han; Chang, Byung-Joon; Kim, Jin-Suk; Shin, Ho-Chul

    2017-10-15

    In this study, a simple analytical approach has been developed and validated for the determination of bupivacaine hydrochloride and isoflupredone acetate residues in porcine muscle, beef, milk, egg, shrimp, flatfish, and eel using liquid chromatography-tandem mass spectrometry (LC-MS/MS). A 0.1% solution of acetic acid in acetonitrile combined with n-hexane was used for deproteinization and defatting of all tested matrices, and the target drugs were well separated on a Waters Xbridge™ C18 analytical column using a mobile phase consisting of 0.1% acetic acid (A) and 0.1% solution of acetic acid in methanol (B). The linearity estimated from six-point matrix-matched calibrations was good, with coefficients of determination ≥0.9873. The limits of quantification (LOQs) for bupivacaine hydrochloride and isoflupredone acetate were 1 and 2 ng g⁻¹, respectively. Recovery percentages in the ranges of 72.51-112.39% (bupivacaine hydrochloride) and 72.58-114.56% (isoflupredone acetate) were obtained from three different fortification concentrations with relative standard deviations (RSDs) of <15.14%. All samples for the experimental work and method application were collected from the local markets in Seoul, Republic of Korea, and none of them tested positive for the target drugs. In conclusion, a simple method using a 0.1% solution of acetic acid in acetonitrile and n-hexane followed by LC-MS/MS could effectively extract bupivacaine hydrochloride and isoflupredone acetate from porcine muscle, beef, milk, egg, shrimp, flatfish, and eel samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. CMB ISW-lensing bispectrum from cosmic strings

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Sendouda, Yuuiti; Takahashi, Keitaro

    2014-02-01

    We study the effect of weak lensing by cosmic (super-)strings on the higher-order statistics of the cosmic microwave background (CMB). A cosmic string segment is expected to cause weak lensing as well as an integrated Sachs-Wolfe (ISW) effect, the so-called Gott-Kaiser-Stebbins (GKS) effect, to the CMB temperature fluctuation, which are thus naturally cross-correlated. We point out that, in the presence of such a correlation, yet another kind of the post-recombination CMB temperature bispectra, the ISW-lensing bispectra, will arise in the form of products of the auto- and cross-power spectra. We first present an analytic method to calculate the autocorrelation of the temperature fluctuations induced by the strings, and the cross-correlation between the temperature fluctuation and the lensing potential both due to the string network. In our formulation, the evolution of the string network is assumed to be characterized by the simple analytic model, the velocity-dependent one scale model, and the intercommutation probability is properly incorporated in order to characterize the possible superstringy nature. Furthermore, the obtained power spectra are dominated by the Poisson-distributed string segments, whose correlations are assumed to satisfy the simple relations. We then estimate the signal-to-noise ratios of the string-induced ISW-lensing bispectra and discuss the detectability of such CMB signals from the cosmic string network. It is found that in the case of the smaller string tension, Gμ << 10⁻⁷, the ISW-lensing bispectrum induced by a cosmic string network can constrain the string-model parameters even more tightly than the purely GKS-induced bispectrum in the ongoing and future CMB observations on small scales.

  1. Ultramicroelectrode Array Based Sensors: A Promising Analytical Tool for Environmental Monitoring

    PubMed Central

    Orozco, Jahir; Fernández-Sánchez, César; Jiménez-Jorquera, Cecilia

    2010-01-01

    The particular analytical performance of ultramicroelectrode arrays (UMEAs) has attracted great interest from the research community and has led to the development of a variety of electroanalytical applications. UMEA-based approaches have proven to be powerful, simple, rapid and cost-effective analytical tools for environmental analysis compared with available conventional electrodes and standardised analytical techniques. An overview of the fabrication processes of UMEAs, their characterization, and the applications carried out by the Spanish scientific community is presented. A brief explanation of the theoretical aspects that explain their electrochemical behavior is also given. Finally, the applications of this transducer platform in the environmental field are discussed. PMID:22315551

  2. Uncertainty Estimation for the Determination of Ni, Pb and Al in Natural Water Samples by SPE-ICP-OES

    NASA Astrophysics Data System (ADS)

    Ghorbani, A.; Farahani, M. Mahmoodi; Rabbani, M.; Aflaki, F.; Waqifhosain, Syed

    2008-01-01

    In this paper we propose an uncertainty estimation for the analytical results obtained from the determination of Ni, Pb and Al by solid-phase extraction and inductively coupled plasma optical emission spectrometry (SPE-ICP-OES). The procedure is based on the retention of the analytes in the form of 8-hydroxyquinoline (8-HQ) complexes on a mini column of XAD-4 resin and subsequent elution with nitric acid. The influence of various analytical parameters, including the amount of solid phase, pH, elution factors (concentration and volume of the eluting solution), volume of sample solution, and amount of ligand, on the extraction efficiency of the analytes was investigated. To estimate the uncertainty of the analytical results obtained, we propose assessing trueness using spiked samples. Two types of bias are calculated in the assessment of trueness: a proportional bias and a constant bias. We applied a nested design to calculate the proportional bias and the Youden method to calculate the constant bias. The proportional bias is calculated from the spiked samples: the concentration found is plotted against the concentration added, and the slope of the standard addition curve is an estimate of the method recovery. The estimated average method recovery in Karaj river water is (1.004±0.0085) for Ni, (0.999±0.010) for Pb and (0.987±0.008) for Al.
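
    The proportional-bias estimate is simply the slope of a found-versus-added regression, and a constant (Youden-type) bias appears as a nonzero intercept. A minimal sketch with made-up spike data:

        import numpy as np

        added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # spike added, ug/L (made up)
        found = np.array([1.1, 6.0, 11.2, 21.0, 41.3])   # concentration found, ug/L

        slope, intercept = np.polyfit(added, found, 1)
        print(f"method recovery (proportional bias): {slope:.3f}")
        print(f"apparent constant bias (intercept):  {intercept:.2f} ug/L")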

  3. moocRP: Enabling Open Learning Analytics with an Open Source Platform for Data Distribution, Analysis, and Visualization

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin

    2016-01-01

    In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…

  4. Sample injection and electrophoretic separation on a simple laminated paper based analytical device.

    PubMed

    Xu, Chunxiu; Zhong, Minghua; Cai, Longfei; Zheng, Qingyu; Zhang, Xiaojun

    2016-02-01

    We described a strategy to perform multistep operations on a simple laminated paper-based separation device by using electrokinetic flow to manipulate the fluids. A laminated crossed-channel paper-based separation device was fabricated by cutting a filter paper sheet followed by lamination. Multiple function units, including sample loading, sample injection, and electrophoretic separation, were integrated on a single paper-based analytical device for the first time, by applying potentials at the reservoirs for sample, sample waste, buffer, and buffer waste. As a proof-of-concept demonstration, a mixed sample solution containing carmine and sunset yellow was loaded in the sampling channel and then injected into the separation channel, followed by electrophoretic separation, by adjusting the potentials applied at the four terminals of the sampling and separation channels. The effects of buffer pH, buffer concentration, channel width, and separation time on the resolution of the electrophoretic separation were studied. This strategy may be used to perform multistep operations such as reagent dilution, sample injection, mixing, reaction, and separation on a single microfluidic paper-based analytical device, which is very attractive for building micro total analysis systems on microfluidic paper-based analytical devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Simple method for the determination of personal care product ingredients in lettuce by ultrasound-assisted extraction combined with solid-phase microextraction followed by GC-MS.

    PubMed

    Cabrera-Peralta, Jerónimo; Peña-Alvarez, Araceli

    2018-05-01

    A simple method for the simultaneous determination of personal care product ingredients: galaxolide, tonalide, oxybenzone, 4-methylbenzyliden camphor, padimate-o, 2-ethylhexyl methoxycinnamate, octocrylene, triclosan, and methyl triclosan in lettuce by ultrasound-assisted extraction combined with solid-phase microextraction followed by gas chromatography with mass spectrometry was developed. Lettuce was directly extracted by ultrasound-assisted extraction with methanol; this extract was combined with water, extracted by solid-phase microextraction in immersion mode, and analyzed by gas chromatography with mass spectrometry. Good linear relationships (25-250 ng/g, R² > 0.9702) and low detection limits (1.0-25 ng/g) were obtained for the analytes, along with acceptable precision for almost all analytes (RSDs < 20%). The validated method was applied for the determination of personal care product ingredients in commercial lettuce and lettuces grown in soil and irrigated with the analytes, identifying the target analytes in leaves and roots of the latter. This procedure is a miniaturized and environmentally friendly proposal which can be a useful tool for quality analysis in lettuce. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Modern analytical chemistry in the contemporary world

    NASA Astrophysics Data System (ADS)

    Šíma, Jan

    2016-12-01

    Students not familiar with chemistry tend to misinterpret analytical chemistry as a kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. This view is evidently improper and misleading. The position of modern analytical chemistry among the sciences and in the contemporary world is therefore discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation for obtaining accurate and precise results is highlighted: invalid results are not only useless, they can often even be fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it should be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. In short, the schooling of analytical chemistry should closely connect theory and practice.

  7. Validated HPLC-UV method for determination of naproxen in human plasma with proven selectivity against ibuprofen and paracetamol.

    PubMed

    Filist, Monika; Szlaska, Iwona; Kaza, Michał; Pawiński, Tomasz

    2016-06-01

    Estimating the influence of interfering compounds present in the biological matrix on the determination of an analyte is one of the most important tasks during bioanalytical method development and validation. Interferences from endogenous components and, if necessary, from major metabolites as well as possible co-administered medications should be evaluated during a selectivity test. This paper describes a simple, rapid and cost-effective HPLC-UV method for the determination of naproxen in human plasma in the presence of two other analgesics, ibuprofen and paracetamol. Sample preparation is based on a simple liquid-liquid extraction procedure with a short, 5 s mixing time. Fenoprofen, which is characterized by a similar structure and properties to naproxen, was first used as the internal standard. The calibration curve is linear in the concentration range of 0.5-80.0 µg/mL, which is suitable for pharmacokinetic studies following a single 220 mg oral dose of naproxen sodium. The method was fully validated according to international guidelines and was successfully applied in a bioequivalence study in humans. Copyright © 2015 John Wiley & Sons, Ltd.

  8. A simple energy filter for low energy electron microscopy/photoelectron emission microscopy instruments.

    PubMed

    Tromp, R M; Fujikawa, Y; Hannon, J B; Ellis, A W; Berghaus, A; Schaff, O

    2009-08-05

    Addition of an electron energy filter to low energy electron microscopy (LEEM) and photoelectron emission microscopy (PEEM) instruments greatly improves their analytical capabilities. However, such filters tend to be quite complex, both electron optically and mechanically. Here we describe a simple energy filter for the existing IBM LEEM/PEEM instrument, which is realized by adding a single scanning aperture slit to the objective transfer optics, without any further modifications to the microscope. This energy filter displays a very high energy resolution ΔE/E = 2 × 10⁻⁵, and a non-isochromaticity of ∼0.5 eV/10 µm. The setup is capable of recording selected area electron energy spectra and angular distributions at 0.15 eV energy resolution, as well as energy filtered images with a 1.5 eV energy pass band at an estimated spatial resolution of ∼10 nm. We demonstrate the use of this energy filter in imaging and spectroscopy of surfaces using a laboratory-based He I (21.2 eV) light source, as well as imaging of Ag nanowires on Si(001) using the 4 eV energy loss Ag plasmon.

  9. On a Possible Unified Scaling Law for Volcanic Eruption Durations

    PubMed Central

    Cannavò, Flavio; Nunnari, Giuseppe

    2016-01-01

    Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes involving interacting chemical-physical systems. At present, due to the complexity of the phenomena involved and to the lack of precise measurements, both analytical and numerical models are unable to include simultaneously the main processes involved in eruptions, making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most known volcanic eruptions, we find that eruption durations appear to follow a universal distribution that characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of the durations of volcanic eruptions that holds worldwide across different volcanic environments. We also introduce a new, simple and realistic pipe model that reproduces the observed empirical distribution. Since the proposed model belongs to the family of self-organized systems, it may support the hypothesis that simple mechanisms can lead naturally to emergent complexity in volcanic behaviour. PMID:26926425

  10. On a Possible Unified Scaling Law for Volcanic Eruption Durations.

    PubMed

    Cannavò, Flavio; Nunnari, Giuseppe

    2016-03-01

    Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes involving interacting chemical-physical systems. At present, due to the complexity of the phenomena involved and to the lack of precise measurements, both analytical and numerical models are unable to include simultaneously the main processes involved in eruptions, making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most known volcanic eruptions, we find that eruption durations appear to follow a universal distribution that characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of the durations of volcanic eruptions that holds worldwide across different volcanic environments. We also introduce a new, simple and realistic pipe model that reproduces the observed empirical distribution. Since the proposed model belongs to the family of self-organized systems, it may support the hypothesis that simple mechanisms can lead naturally to emergent complexity in volcanic behaviour.
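
    A continuous power-law tail of the kind proposed here is usually fit by maximum likelihood rather than by regression on a log-log histogram. The sketch below applies the standard continuous-power-law (Hill-type) estimator to synthetic durations; the cutoff x_min and all values are assumed, not taken from the paper's database:

        import numpy as np

        rng = np.random.default_rng(0)
        alpha_true, x_min = 2.0, 1.0
        # Synthetic durations (days) drawn from p(x) ~ x^(-alpha) for x >= x_min
        durations = x_min * (1.0 - rng.random(5000)) ** (-1.0 / (alpha_true - 1.0))

        tail = durations[durations >= x_min]
        # Maximum-likelihood estimator for the exponent of a continuous power law
        alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / x_min))
        stderr = (alpha_hat - 1.0) / np.sqrt(tail.size)
        print(f"alpha = {alpha_hat:.2f} +/- {stderr:.2f}")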

  11. Leakage and spillover effects of forest management on carbon storage: theoretical insights from a simple model

    NASA Astrophysics Data System (ADS)

    Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco

    2009-04-01

    Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.

  12. Building pit dewatering: application of transient analytic elements.

    PubMed

    Zaadnoordijk, Willem J

    2006-01-01

    Analytic elements are well suited for the design of building pit dewatering. Wells and drains can be modeled accurately by analytic elements, both nearby to determine the pumping level and at some distance to verify the targeted drawdown at the building site and to estimate the consequences in the vicinity. The ability to shift locations of wells or drains easily makes the design process very flexible. The temporary pumping has transient effects, for which transient analytic elements may be used. This is illustrated using the free, open-source, object-oriented analytic element simulator Tim(SL) for the design of a building pit dewatering near a canal. Steady calculations are complemented with transient calculations. Finally, the bandwidths of the results are estimated using linear variance analysis.
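
    The transient effect of temporary pumping is commonly captured by superposition in time: a Theis well that switches on, plus an injection image that switches on at shutoff. A minimal sketch with assumed aquifer parameters, not tied to the Tim(SL) implementation:

        import numpy as np
        from scipy.special import exp1

        T, S = 5e-3, 2e-4      # transmissivity (m^2/s) and storativity, assumed values
        Q = 2e-3               # pumping rate, m^3/s
        t_off = 30 * 86400.0   # pump switched off after 30 days

        def theis(r, t):
            # Theis drawdown; exp1 is the exponential integral E1(u)
            u = r**2 * S / (4 * T * np.maximum(t, 1e-12))
            return Q / (4 * np.pi * T) * exp1(u)

        def drawdown(r, t):
            # Superposition in time: subtract an injection image after shutoff
            return theis(r, t) - np.where(t > t_off, theis(r, t - t_off), 0.0)

        print(f"drawdown at 50 m after 10 d:       {drawdown(50.0, 10 * 86400.0):.3f} m")
        print(f"residual drawdown 10 d after stop: {drawdown(50.0, 40 * 86400.0):.3f} m")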

  13. Development and validation of a simple high-performance liquid chromatography analytical method for simultaneous determination of phytosterols, cholesterol and squalene in parenteral lipid emulsions.

    PubMed

    Novak, Ana; Gutiérrez-Zamora, Mercè; Domenech, Lluís; Suñé-Negre, Josep M; Miñarro, Montserrat; García-Montoya, Encarna; Llop, Josep M; Ticó, Josep R; Pérez-Lozano, Pilar

    2018-02-01

    A simple analytical method for the simultaneous determination of phytosterols, cholesterol and squalene in lipid emulsions was developed owing to increased interest in their clinical effects. Method development was based on commonly used stationary phases (C18, C8 and phenyl) and mobile phases (mixtures of acetonitrile, methanol and water) under isocratic conditions. Differences in stationary phases resulted in peak overlapping or coelution of different peaks. The best separation of all analyzed compounds was achieved on a Zorbax Eclipse XDB-C8 column (150 × 4.6 mm, 5 μm; Agilent) with ACN-H₂O-MeOH (80:19.5:0.5, v/v/v). In order to achieve a shorter time of analysis, the method was further optimized and a gradient separation was established. The optimized analytical method was validated and tested for routine use in lipid emulsion analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Experimental Observation of a Current-Driven Instability in a Neutral Electron-Positron Beam.

    PubMed

    Warwick, J; Dzelzainis, T; Dieckmann, M E; Schumaker, W; Doria, D; Romagnani, L; Poder, K; Cole, J M; Alejo, A; Yeung, M; Krushelnick, K; Mangles, S P D; Najmudin, Z; Reville, B; Samarin, G M; Symes, D D; Thomas, A G R; Borghesi, M; Sarri, G

    2017-11-03

    We report on the first experimental observation of a current-driven instability developing in a quasineutral matter-antimatter beam. Strong magnetic fields (≥1 T) are measured, by means of a proton radiography technique, after the propagation of a neutral electron-positron beam through a background electron-ion plasma. The experimentally determined equipartition parameter of ε_B ≈ 10⁻³ is typical of values inferred from models of astrophysical gamma-ray bursts, in which the relativistic flows are also expected to be pair dominated. The data, supported by particle-in-cell simulations and simple analytical estimates, indicate that these magnetic fields persist in the background plasma for thousands of inverse plasma frequencies. The existence of such long-lived magnetic fields can be related to analog astrophysical systems, such as those prevalent in lepton-dominated jets.

  15. Experimental Observation of a Current-Driven Instability in a Neutral Electron-Positron Beam

    NASA Astrophysics Data System (ADS)

    Warwick, J.; Dzelzainis, T.; Dieckmann, M. E.; Schumaker, W.; Doria, D.; Romagnani, L.; Poder, K.; Cole, J. M.; Alejo, A.; Yeung, M.; Krushelnick, K.; Mangles, S. P. D.; Najmudin, Z.; Reville, B.; Samarin, G. M.; Symes, D. D.; Thomas, A. G. R.; Borghesi, M.; Sarri, G.

    2017-11-01

    We report on the first experimental observation of a current-driven instability developing in a quasineutral matter-antimatter beam. Strong magnetic fields (≥1 T) are measured, by means of a proton radiography technique, after the propagation of a neutral electron-positron beam through a background electron-ion plasma. The experimentally determined equipartition parameter of ε_B ≈ 10⁻³ is typical of values inferred from models of astrophysical gamma-ray bursts, in which the relativistic flows are also expected to be pair dominated. The data, supported by particle-in-cell simulations and simple analytical estimates, indicate that these magnetic fields persist in the background plasma for thousands of inverse plasma frequencies. The existence of such long-lived magnetic fields can be related to analog astrophysical systems, such as those prevalent in lepton-dominated jets.

  16. Chaotropic salts: novel modifiers for the capillary electrophoretic analysis of benzodiazepines.

    PubMed

    Su, Hsiu-Li; Lan, Min-Tsu; Lin, Kuan-Wen; Hsieh, You-Zung

    2008-08-01

    This paper describes a CE method for analyzing benzodiazepines using the chaotropic salts lithium trifluoromethanesulfonate (LiOTf), lithium hexafluorophosphate (LiPF₆), and lithium bis(trifluoromethanesulfonyl)imide (LiNTf₂) as modifiers in the running buffer. Although adequate resolution of the seven benzodiazepine analytes occurred under the influence of each of the chaotropic anions, the separation efficiency was highest when bis(trifluoromethanesulfonyl)imide (Tf₂N⁻) was the modifier. We applied affinity CE in conjunction with linear analysis to determine the association constants for the formation of complexes between the Tf₂N⁻ anion and the benzodiazepines. According to the estimated Gibbs free energies, the interactions between this chaotropic anion and the benzodiazepines were either ion-dipole or ion-induced dipole interactions. Adding chaotropic salts as modifiers to CE buffers is a simple and reproducible technique for separating benzodiazepines.
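
    The free-energy estimate follows directly from an association constant via ΔG° = -RT ln K. A one-line sketch with an assumed constant (not a value from the study):

        import math

        R, T = 8.314, 298.15   # gas constant, J/(mol*K), and temperature, K
        K_assoc = 12.0         # assumed association constant from affinity CE, M^-1

        dG = -R * T * math.log(K_assoc)
        print(f"dG = {dG / 1000:.1f} kJ/mol")   # negative: complex formation favored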

  17. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods comprise the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
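
    Of the four approaches, the multivariate calibrations are the easiest to prototype. The sketch below builds a PLS model with scikit-learn on randomly generated stand-in spectra (real work would use measured absorbances of AML, VAL and HCT mixtures):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_samples, n_wavelengths = 25, 120

        # Stand-in calibration set: concentrations within the stated linearity
        # ranges, and additive mixture "spectra" with a little noise
        C = rng.uniform([2, 4, 2], [32, 44, 20], size=(n_samples, 3))
        pure = rng.random((3, n_wavelengths))
        X = C @ pure + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

        pls = PLSRegression(n_components=3).fit(X, C)
        print("predicted (AML, VAL, HCT):", pls.predict(X[:1]).round(1))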

  18. Relaxation in a two-body Fermi-Pasta-Ulam system in the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Sen, Surajit; Barrett, Tyler

    The study of the dynamics of the Fermi-Pasta-Ulam (FPU) chain remains a challenging problem. Inspired by the recent work of Onorato et al. on thermalization in the FPU system, we report a study of relaxation processes in a two-body FPU system in the canonical ensemble. The studies have been carried out using the Recurrence Relations Method introduced by Zwanzig, Mori, Lee and others. We have obtained exact analytical expressions for the first thirteen levels of the continued fraction representation of the Laplace transformed velocity autocorrelation function of the system. Using simple and reasonable extrapolation schemes and known limits we are able to estimate the relaxation behavior of the oscillators in the two-body FPU system and recover the expected behavior in the harmonic limit. Generalizations of the calculations to larger systems will be discussed.

  19. Evanescent field-based optical fiber sensing device for measuring the refractive index of liquids in microfluidic channels.

    PubMed

    Polynkin, Pavel; Polynkin, Alexander; Peyghambarian, N; Mansuripur, Masud

    2005-06-01

    We report a simple optical sensing device capable of measuring the refractive index of liquids propagating in microfluidic channels. The sensor is based on a single-mode optical fiber that is tapered to submicrometer dimensions and immersed in a transparent curable soft polymer. A channel for liquid analyte is created in the immediate vicinity of the taper waist. Light propagating through the tapered section of the fiber extends into the channel, making the optical loss in the system sensitive to the refractive-index difference between the polymer and the liquid. The fabrication process and testing of the prototype sensing devices are described. The sensor can operate both as a highly responsive on-off device and in the continuous measurement mode, with an estimated accuracy of refractive-index measurement of approximately 5 × 10⁻⁴.

  20. An efficient absorbing system for spectrophotometric determination of nitrogen dioxide

    NASA Astrophysics Data System (ADS)

    Kaveeshwar, Rachana; Amlathe, Sulbha; Gupta, V. K.

    A simple and sensitive spectrophotometric method for the determination of atmospheric nitrogen dioxide using o-nitroaniline as an efficient absorbing, as well as diazotizing, reagent is described. o-Nitroaniline present in the absorbing medium is diazotized by the absorbed nitrite ion to form a diazonium compound. This is then coupled with 1-amino-2-naphthalene sulphonic acid (ANSA) in acidic medium to give a red-violet coloured dye having λmax = 545 nm; the isoamyl alcohol extract of the red azo dye has λmax = 530 nm. The proposed reagent has ≈100% collection efficiency, and the stoichiometric ratio of NO₂:NO₂⁻ is 0.74. The other important analytical parameters have also been investigated. By employing solvent extraction the sensitivity of the reaction was increased, and nitrogen dioxide concentrations as low as 0.03 mg m⁻³ could be estimated.

  1. 15 CFR 921.13 - Management plan and environmental impact statement development.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... simple property interest (e.g., conservation easement), fee simple property acquisition, or a combination... simple options) to establish adequate long-term state control; an estimate of the fair market value of any property interest—which is proposed for acquisition; a schedule estimating the time required to...

  2. Simple waves in a two-component Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Ivanov, S. K.; Kamchatnov, A. M.

    2018-04-01

    We study the dynamics of so-called simple waves in a two-component Bose-Einstein condensate. The evolution of the condensate is described by Gross-Pitaevskii equations, which for these simple-wave solutions can be reduced to a system of ordinary differential equations coinciding with those derived by Ovsyannikov for two-layer fluid dynamics. We solve the Ovsyannikov system for two typical situations of large and small difference between the interspecies and intraspecies nonlinear interaction constants. Our analytic results are confirmed by numerical simulations.

  3. Spin Seebeck effect in a simple ferromagnet near Tc: a Ginzburg-Landau approach

    NASA Astrophysics Data System (ADS)

    Adachi, Hiroto; Yamamoto, Yutaka; Ichioka, Masanori

    2018-04-01

    A time-dependent Ginzburg-Landau theory is used to examine the longitudinal spin Seebeck effect in a simple ferromagnet in the vicinity of the Curie temperature Tc. It is shown analytically that the spin Seebeck effect is proportional to the magnetization near Tc, a result in line with previous numerical findings. It is argued that the present result can be tested experimentally using a simple magnetic system such as EuO/Pt or EuS/Pt.

  4. Sampling considerations for modal analysis with damping

    NASA Astrophysics Data System (ADS)

    Park, Jae Young; Wakin, Michael B.; Gilbert, Anna C.

    2015-03-01

    Structural health monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Wireless sensor networks that sample vibration data over time are particularly appealing for SHM applications due to their flexibility and low cost. However, in order to extend the battery life of wireless sensor nodes, it is essential to minimize the amount of vibration data these sensors must collect and transmit. In recent work, we have studied the performance of the Singular Value Decomposition (SVD) applied to the collection of data and provided new finite sample analysis characterizing conditions under which this simple technique, also known as the Proper Orthogonal Decomposition (POD), can correctly estimate the mode shapes of the structure. Specifically, we provided theoretical guarantees on the number and duration of samples required in order to estimate a structure's mode shapes to a desired level of accuracy. In that previous work, however, we considered simplified Multiple-Degree-Of-Freedom (MDOF) systems with no damping. In this paper we consider MDOF systems with proportional damping and show that, with sufficiently light damping, the POD can continue to provide accurate estimates of a structure's mode shapes. We support our discussion with new analytical insight and experimental demonstrations. In particular, we study the tradeoffs between the level of damping, the sampling rate and duration, and the accuracy to which the structure's mode shapes can be estimated.
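
    As a rough illustration of the POD idea (not the paper's analysis), the sketch below recovers the mode shapes of a lightly damped 3-DOF system from its simulated free response via the SVD; the modes, frequencies, and damping ratio are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 2000)
      modes, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # orthonormal mode shapes
      freqs = np.array([2.0, 5.0, 9.0])                     # modal frequencies, Hz
      zeta = 0.01                                           # light damping ratio
      # free response: each mode contributes (shape) x (decaying sinusoid)
      resp = sum(np.outer(modes[:, k],
                          np.exp(-zeta * 2 * np.pi * freqs[k] * t)
                          * np.sin(2 * np.pi * freqs[k] * t))
                 for k in range(3))
      U, _, _ = np.linalg.svd(resp, full_matrices=False)
      print(np.abs(U.T @ modes))   # close to a permutation matrix: modes recovered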

  5. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    PubMed

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

    In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the intended purpose and found to be 0.02% (corresponding to 48 μg g⁻¹ in sample) for both methyl and isopropyl p-toluenesulfonate. As a proof of concept, the validated method was successfully applied to the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
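
    The desirability step can be sketched as follows; the response values, acceptance windows, and equal weighting are illustrative assumptions, not the validated method's settings.

      import numpy as np

      def d_larger_is_better(y, lo, hi):    # desirability rising from 0 at lo to 1 at hi
          return float(np.clip((y - lo) / (hi - lo), 0, 1))

      def d_smaller_is_better(y, lo, hi):   # desirability falling from 1 at lo to 0 at hi
          return float(np.clip((hi - y) / (hi - lo), 0, 1))

      resolution, run_time = 2.4, 11.0      # mock responses for one candidate condition
      D = (d_larger_is_better(resolution, 1.5, 3.0)
           * d_smaller_is_better(run_time, 8.0, 15.0)) ** 0.5  # geometric mean
      print(D)   # overall desirability; the optimum maximizes D over the CMPs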

  6. An Applet to Estimate the IOP-Induced Stress and Strain within the Optic Nerve Head

    PubMed Central

    2011-01-01

    Purpose. The ability to predict the biomechanical response of the optic nerve head (ONH) to intraocular pressure (IOP) elevation holds great promise, yet remains elusive. The objective of this work was to introduce an approach to model ONH biomechanics that combines the ease of use and speed of analytical models with the flexibility and power of numerical models. Methods. Models representing a variety of ONHs were produced, and finite element (FE) techniques were used to predict the stresses (forces) and strains (relative deformations) induced on each of the models by IOP elevations (up to 10 mm Hg). Multivariate regression was used to parameterize each biomechanical response as an analytical function. These functions were encoded into a Flash-based applet. Applet utility was demonstrated by investigating hypotheses concerning ONH biomechanics posited in the literature. Results. All responses were parameterized well by polynomials (R2 values between 0.985 and 0.999), demonstrating the effectiveness of our fitting approach. Previously published univariate results were reproduced with the applet in seconds. A few minutes allowed for multivariate analysis, with which it was predicted that often, but not always, larger eyes experience higher levels of stress and strain than smaller ones, even at the same IOP. Conclusions. An applet has been presented with which it is simple to make rapid estimates of IOP-related ONH biomechanics. The applet represents a step toward bringing the power of FE modeling beyond the specialized laboratory and can thus help develop more refined biomechanics-based hypotheses. The applet is available for use at www.ocularbiomechanics.com. PMID:21527378
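
    The applet's core trick, replacing FE runs with a fitted polynomial response surface, can be imitated in a few lines; the inputs, mock response, and degree-2 surface below are placeholders for the study's FE outputs.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)
      X = rng.uniform([1.0, 0.1], [10.0, 1.0], size=(200, 2))   # e.g. IOP, a stiffness
      y = 0.02 * X[:, 0]**2 + 0.5 * X[:, 0] * X[:, 1] + 0.01 * rng.standard_normal(200)
      surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surrogate.fit(X, y)                      # one-time fit against "FE results"
      print(surrogate.score(X, y))             # R^2 of the polynomial parameterization
      print(surrogate.predict([[5.0, 0.5]]))   # near-instant estimate thereafter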

  7. Prescriptive Oriented Drug Analysis of Multiple Sclerosis Disease by LC-UV in Whole Human Blood.

    PubMed

    Suneetha, A; Rajeswari, Raja K

    2016-02-01

    As a polytherapy treatment, multiple sclerosis demands prescriptions with more than one drug. Polytherapy is sometimes rational, with drug combinations chosen to minimize adverse effects. Estimating drugs that are concomitantly administered in polytherapy is attractive because it shortens the analytical time and reduces the usage of biological matrices. In clinical phase trials, the withdrawal of biofluids is a critical issue for each analysis, so estimating all the coadministered drugs in a single run is more effective and economical for pharmaceutical analysis. A single, simple, rapid and sensitive high-performance liquid chromatography assay method with UV detection has been developed and fully validated for the quantification of 14 drugs (in random combinations) used in the treatment of multiple sclerosis. The set of combinations was based on prescriptions to patients. Separations were achieved on an X-Terra MS C18 (100 × 3.9 mm, 5 µm) column. The analytes were extracted from 50 µL aliquots of whole human blood by protein precipitation using acetonitrile. All the drugs were sufficiently stable during storage for 24 h at room temperature and for 23 days at 2-8°C. The percentage recoveries of all drugs were between 90 and 115%, with RSD values <10.6%. The method has been shown to be reproducible and sensitive, can be applied to clinical samples from pharmacokinetic studies, and is also a useful tool for studying drug interactions. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Human-machine analytics for closed-loop sense-making in time-dominant cyber defense problems

    NASA Astrophysics Data System (ADS)

    Henry, Matthew H.

    2017-05-01

    Many defense problems are time-dominant: attacks progress at speeds that outpace human-centric systems designed for monitoring and response. Despite this shortcoming, these well-honed and ostensibly reliable systems pervade most domains, including cyberspace. The argument that often prevails when considering the automation of defense is that while technological systems are suitable for simple, well-defined tasks, only humans possess sufficiently nuanced understanding of problems to act appropriately under complicated circumstances. While this perspective is founded in verifiable truths, it does not account for a middle ground in which human-managed technological capabilities extend well into the territory of complex reasoning, thereby automating more nuanced sense-making and dramatically increasing the speed at which it can be applied. Snort and platforms like it enable humans to build, refine, and deploy sense-making tools for network defense. Shortcomings of these platforms include a reliance on rule-based logic, which confounds analyst knowledge of how bad actors behave with the means by which bad behaviors can be detected, and a lack of feedback-informed automation of sensor deployment. We propose an approach in which human-specified computational models hypothesize bad behaviors independent of indicators and then allocate sensors to estimate and forecast the state of an intrusion. State estimates and forecasts inform the proactive deployment of additional sensors and detection logic, thereby closing the sense-making loop. All the while, humans are on the loop, rather than in it, permitting nuanced management of fast-acting automated measurement, detection, and inference engines. This paper motivates and conceptualizes analytics to facilitate this human-machine partnership.

  9. Spectrophotometric Analysis of Phenolic Compounds in Grapes and Wines.

    PubMed

    Aleixandre-Tudo, Jose Luis; Buica, Astrid; Nieuwoudt, Helene; Aleixandre, Jose Luis; du Toit, Wessel

    2017-05-24

    Phenolic compounds are of crucial importance for red wine color and mouthfeel attributes. A large number of enzymatic and chemical reactions involving phenolic compounds take place during winemaking and aging. Despite the large number of published analytical methods for phenolic analyses, the values obtained may vary considerably. In addition, the existing scientific knowledge needs to be updated, but also critically evaluated and simplified for newcomers and wine industry partners. The most used and widely cited spectrophotometric methods for grape and wine phenolic analysis were identified through a bibliometric search using the Science Citation Index-Expanded (SCIE) database accessed through the Web of Science (WOS) platform from Thomson Reuters. The selection of spectrophotometry was based on its ease of use as a routine analytical technique. On the basis of the number of citations, as well as the advantages and disadvantages reported, the modified Somers assay appears as a multistep, simple, and robust procedure that provides a good estimation of the state of the anthocyanin equilibria. Precipitation methods for total tannin levels have also been identified as preferred protocols for these types of compounds. Good reported correlations between methods (methylcellulose precipitable vs bovine serum albumin) and between these and perceived red wine astringency, in combination with the adaptation to high-throughput format, make them suitable for routine analysis. The bovine serum albumin tannin assay also allows for the estimation of the anthocyanin content with the measurement of small and large polymeric pigments. Finally, the measurement of wine color using the CIELab space approach is also suggested as the protocol of choice as it provides good insight into the wine's color properties.

  10. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes, including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.

  11. A Structural Framework for a Near-Minimal Form of Life: Mass and Compositional Analysis of the Helical Mollicute Spiroplasma melliferum BC3

    PubMed Central

    Trachtenberg, Shlomo; Schuck, Peter; Phillips, Terry M.; Andrews, S. Brian; Leapman, Richard D.

    2014-01-01

    Spiroplasma melliferum is a wall-less bacterium with dynamic helical geometry. This organism is geometrically well defined and internally well ordered, and has an exceedingly small genome. Individual cells are chemotactic, polar, and swim actively. Their dynamic helicity can be traced at the molecular level to a highly ordered linear motor (composed essentially of the proteins fib and MreB) that is positioned on a defined helical line along the internal face of the cell’s membrane. Using an array of complementary, informationally overlapping approaches, we have taken advantage of this uniquely simple, near-minimal life-form and its helical geometry to analyze the copy numbers of Spiroplasma’s essential parts, as well as to elucidate how these components are spatially organized to subserve the whole living cell. Scanning transmission electron microscopy (STEM) was used to measure the mass-per-length and mass-per-area of whole cells, membrane fractions, intact cytoskeletons and cytoskeletal components. These local data were fit into whole-cell geometric parameters determined by a variety of light microscopy modalities. Hydrodynamic data obtained by analytical ultracentrifugation allowed computation of the hydration state of whole living cells, for which the relative amounts of protein, lipid, carbohydrate, DNA, and RNA were also estimated analytically. Finally, ribosome and RNA content, genome size and gene expression were also estimated (using stereology, spectroscopy and 2D-gel analysis, respectively). Taken together, the results provide a general framework for a minimal inventory and arrangement of the major cellular components needed to support life. PMID:24586297

  12. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
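
    For a fixed geometry, the closed-form Bayesian step reduces to linear-Gaussian algebra. The toy version below uses stand-in Green's functions, prior, and noise levels; it illustrates the analytical posterior, not the paper's implementation.

      import numpy as np

      rng = np.random.default_rng(3)
      G = rng.standard_normal((30, 10))     # mock Green's functions: slip -> offsets
      true_slip = rng.standard_normal(10)
      d = G @ true_slip + 0.1 * rng.standard_normal(30)   # "observed" static offsets

      sd, sm = 0.1, 1.0                     # assumed noise and prior std. deviations
      # Gaussian posterior over slip m given data d: mean and covariance in closed form
      post_cov = np.linalg.inv(G.T @ G / sd**2 + np.eye(10) / sm**2)
      post_mean = post_cov @ G.T @ d / sd**2
      print(np.round(post_mean - true_slip, 2))   # estimation residuals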

  13. Stretchy binary classification.

    PubMed

    Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo

    2018-01-01

    In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks to solve for the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
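
    The two constructions can be illustrated on synthetic data. The ℓ2 special case below (ordinary least squares and the minimum-norm interpolator) only sketches the flavor of the left/right pseudoinverse estimators, not the ℓp formulation itself.

      import numpy as np

      rng = np.random.default_rng(4)
      X = rng.standard_normal((100, 5))            # overdetermined: left pseudoinverse
      y = np.sign(X @ rng.standard_normal(5))      # labels in {-1, +1}
      w_left = np.linalg.pinv(X) @ y               # least-squares solution
      print(np.mean(np.sign(X @ w_left) == y))     # training accuracy

      Xw = rng.standard_normal((5, 100))           # underdetermined: right pseudoinverse
      yw = np.sign(rng.standard_normal(5))
      w_right = Xw.T @ np.linalg.inv(Xw @ Xw.T) @ yw   # minimum-norm interpolator
      print(np.allclose(Xw @ w_right, yw))         # fits the constraints exactly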

  14. Development of a carbon-nanoparticle-coated stirrer for stir bar sorptive extraction by a simple carbon deposition in flame.

    PubMed

    Feng, Juanjuan; Sun, Min; Bu, Yanan; Luo, Chuannan

    2016-03-01

    Stir bar sorptive extraction is an environmentally friendly microextraction technique based on a stir bar with various sorbents. A commercial stirrer is a good support, but it has not been used in stir bar sorptive extraction because it is difficult to modify. Here, a stirrer was modified with carbon nanoparticles by a simple carbon deposition process in flame and characterized by scanning electron microscopy and energy-dispersive X-ray spectrometry. A three-dimensional porous coating was formed from the carbon nanoparticles. In combination with high-performance liquid chromatography, the stir bar was evaluated using five polycyclic aromatic hydrocarbons as model analytes. Conditions including extraction time and temperature, ionic strength, and desorption solvent were investigated by a factor-by-factor optimization method. The established method exhibited good linearity (0.01-10 μg/L) and low limits of quantification (0.01 μg/L). It was applied to detect the model analytes in environmental water samples. No analyte was detected in river water, and five analytes were quantified in rain water. The recoveries of the five analytes in the two samples spiked at 2 μg/L were in the ranges of 92.2-106% and 93.4-108%, respectively. The results indicated that the carbon-nanoparticle-coated stirrer was an efficient stir bar for extraction analysis of some polycyclic aromatic hydrocarbons. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    NASA Astrophysics Data System (ADS)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve on it with a proposed semi-analytical model (SAA). Unlike the QAA retrieval procedure, in which ap(531) and ag(531) are derived from empirical retrievals of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model outperforms the QAA model in absorption retrieval: using the SAA model to retrieve absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  16. Estimating corresponding locations in ipsilateral breast tomosynthesis views

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Karssemeijer, Nico

    2011-03-01

    To improve cancer detection in mammography, breast exams usually consist of two views per breast. To combine information from both views, radiologists and multiview computer-aided detection (CAD) systems need to match corresponding regions in the two views. In digital breast tomosynthesis (DBT), finding corresponding regions in ipsilateral volumes may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. In this study we developed a method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a mathematical transformation. First a compressed breast model is matched to the tomosynthesis view containing a point of interest. Then we decompress, rotate and compress again to estimate the location of the corresponding point in the ipsilateral view. In this study we use a simple elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. The model is matched to the volume by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation we annotated 181 landmarks in both views and applied our method to each location. Results show a median 3D distance between the actual location and estimated location of 1.5 cm; a good starting point for a feature based local search method to link lesions for a multiview CAD system. Half of the estimated locations were at most 1 slice away from the actual location, making our method useful as a tool in mammographic workstations to interactively find corresponding locations in ipsilateral tomosynthesis views.

  17. On-orbit point spread function estimation for THEOS imaging system

    NASA Astrophysics Data System (ADS)

    Khetkeeree, Suphongsa; Liangrocapart, Sompong

    2018-03-01

    In this paper, we present two approaches for estimating the net Point Spread Function (net-PSF) of the Thailand Earth Observation System (THEOS) imaging system. In the first approach, we estimate the net-PSF from the specification information of the satellite. The analytic model of the net-PSF is based on a simple model of a push-broom imaging system consisting of a scanner, an optical system, a detector and an electronics system; the mathematical PSF model of each component is given in the spatial domain. In the second approach, images of specific targets acquired by the THEOS imaging system are analyzed to determine the net-PSF. For the panchromatic imaging system, images of the checkerboard target at Salon de Provence airport are analyzed with the slant-edge method. For the multispectral imaging system, a new man-made target is proposed: a pier bridge in Lamchabang, Chonburi, Thailand, a site with many bridges of various widths and orientations. The pulse method is applied to images of this bridge to estimate the net-PSF. Finally, the Full Widths at Half Maximum (FWHMs) of the net-PSFs from the two approaches are compared. The results show that the two approaches coincide and that all Modulation Transfer Functions (MTFs) at the Nyquist frequency are better than the requirement. However, the FWHM of the multispectral system deviates more than that of the panchromatic system, because its targets were not specially constructed for estimating the characteristics of the satellite imaging system.
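
    One detail worth making concrete is the FWHM comparison: for a Gaussian PSF model the FWHM follows from the width parameter by a fixed factor. The sigma below is an arbitrary example value, not a THEOS measurement.

      import numpy as np

      sigma = 0.85                                      # Gaussian PSF width, pixels (assumed)
      fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma   # FWHM = 2.3548 * sigma
      print(fwhm)                                       # ~2.0 pixels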

  18. Permeability estimations and frictional flow features passing through porous media comprised of structured microbeads

    NASA Astrophysics Data System (ADS)

    Shin, C.

    2017-12-01

    Permeability estimation has been extensively researched in diverse fields; however, methods that suitably consider varying geometries and changes within the flow region, for example, hydraulic fractures closing over several years, are yet to be developed. Therefore, in the present study a new permeability estimation method is presented based on the generalized Darcy friction-flow relation, in particular by examining frictional flow parameters and the characteristics of their variations. For this examination, computational fluid dynamics (CFD) simulations of simple hydraulic fractures filled with five layers of structured microbeads, and accompanied by geometry changes and flow transitions, were performed. It was found that the main structures and shapes of each flow path are preserved even for geometry variations within the porous media, although the scarcity and discontinuity of streamlines increase dramatically in the transient- and turbulent-flow regions. Quantitative and analytic examinations of the frictional flow features were also performed, and the modified frictional flow parameters were successfully presented as similarity parameters of porous flows. In conclusion, the generalized Darcy friction-flow relation and the friction equivalent permeability (FEP) equation were both modified using the similarity parameters. For verification, the FEP values of the other aperture models were estimated and confirmed to agree well with the original permeability values. Ultimately, the proposed and verified method is expected to efficiently estimate permeability variations in porous media with changing geometric factors and flow regions, including such instances as hydraulic fracture closings.

  19. Tigers and their prey: Predicting carnivore densities from prey abundance

    USGS Publications Warehouse

    Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Link, W.A.; Hines, J.E.

    2004-01-01

    The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km². Estimated tiger densities (3.2-16.8 tigers per 100 km²) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.

  20. Oscillations and Multiple Equilibria in Microvascular Blood Flow.

    PubMed

    Karst, Nathaniel J; Storey, Brian D; Geddes, John B

    2015-07-01

    We investigate the existence of oscillatory dynamics and multiple steady-state flow rates in a network with a simple topology and in vivo microvascular blood flow constitutive laws. Unlike many previous analytic studies, we employ the most biologically relevant models of the physical properties of whole blood. Through a combination of analytic and numeric techniques, we predict in a series of two-parameter bifurcation diagrams a range of dynamical behaviors, including multiple equilibria flow configurations, simple oscillations in volumetric flow rate, and multiple coexistent limit cycles at physically realizable parameters. We show that complexity in network topology is not necessary for complex behaviors to arise and that nonlinear rheology, in particular the plasma skimming effect, is sufficient to support oscillatory dynamics similar to those observed in vivo.

  1. A simple way to synthesize large-scale Cu2O/Ag nanoflowers for ultrasensitive surface-enhanced Raman scattering detection

    NASA Astrophysics Data System (ADS)

    Zou, Junyan; Song, Weijia; Xie, Weiguang; Huang, Bo; Yang, Huidong; Luo, Zhi

    2018-03-01

    Here, we report a simple strategy to prepare highly sensitive surface-enhanced Raman spectroscopy (SERS) substrates based on Ag-decorated Cu2O nanoparticles by combining two common techniques, viz., thermal oxidation growth of Cu2O nanoparticles and magnetron sputtering fabrication of a Ag nanoparticle film. Methylene blue is used as the Raman analyte for the SERS study, and the substrates fabricated under optimized conditions have very good sensitivity (analytical enhancement factor ~10⁸), stability, and reproducibility. A linear dependence of the SERS intensities with the concentration was obtained with an R² value >0.9. These excellent properties indicate that the substrate has great potential in the detection of biological and chemical substances.

  2. Bioanalytical qualification of clinical biomarker assays in plasma using a novel multi-analyte Simple Plex™ platform.

    PubMed

    Gupta, Vinita; Davancaze, Teresa; Good, Jeremy; Kalia, Navdeep; Anderson, Michael; Wallin, Jeffrey J; Brady, Ann; Song, An; Xu, Wenfeng

    2016-12-01

    Immune-checkpoint inhibitors are presumed to break down the tolerogenic state of immune cells by activating T-lymphocytes that release cytokines and enhance effector cell function for elimination of tumors. Measurement of cytokines is being pursued for a better understanding of the mechanism of action of immune-checkpoint inhibitors, as well as to identify potential predictive biomarkers. In this study, we show the bioanalytical qualification of plasma cytokine assays on a novel multi-analyte immunoassay platform, Simple Plex™. The qualified assays exhibited excellent sensitivity, as evidenced by measurement of all samples within the quantifiable range. The accuracy and precision were 80-120% and 10%, respectively. The qualified assays will be useful in assessing the mechanism of action of cancer immunotherapies.

  3. Analytical model for minority games with evolutionary learning

    NASA Astrophysics Data System (ADS)

    Campos, Daniel; Méndez, Vicenç; Llebot, Josep E.; Hernández, Germán A.

    2010-06-01

    In a recent work [D. Campos, J.E. Llebot, V. Méndez, Theor. Popul. Biol. 74 (2009) 16] we have introduced a biological version of the Evolutionary Minority Game that tries to reproduce the intraspecific competition for limited resources in an ecosystem. In comparison with the complex decision-making mechanisms used in standard Minority Games, only two extremely simple strategies (juveniles and adults) are accessible to the agents. Complexity is introduced instead through an evolutionary learning rule that allows younger agents to learn to make better decisions. We find that this game shows many of the typical properties found for Evolutionary Minority Games, like self-segregation behavior or the existence of an oscillation phase for a certain range of the parameter values. However, an analytical treatment becomes much easier in our case, taking advantage of the simple strategies considered. Using a model consisting of a simple dynamical system, the phase diagram of the game (which differentiates three phases: adults crowd, juveniles crowd and oscillations) is reproduced.

  4. The analysis of non-linear dynamic behavior (including snap-through) of postbuckled plates by simple analytical solution

    NASA Technical Reports Server (NTRS)

    Ng, C. F.

    1988-01-01

    Static postbuckling and nonlinear dynamic analysis of plates are usually accomplished by multimode analyses, although these methods are complicated and do not give a straightforward understanding of the nonlinear behavior. Assuming a single-mode transverse displacement, a simple formula is derived for the transverse load-displacement relationship of a plate under in-plane compression. The formula is used to derive simple analytical expressions for the static postbuckling displacement and the nonlinear dynamic responses of postbuckled plates under sinusoidal or random excitation. Regions with softening and hardening spring behavior are identified. Also, the highly nonlinear motion of snap-through and its effects on the overall dynamic response can be easily interpreted using the single-mode formula. Theoretical results are compared with experimental results obtained using a buckled aluminum panel under discrete-frequency and broadband point excitation. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are found.

  5. Analytical estimates show low, depth-independent water loss due to vapor flux from deep aquifers

    NASA Astrophysics Data System (ADS)

    Selker, John S.

    2017-06-01

    Recent articles have provided estimates of evaporative flux from water tables in deserts that span 5 orders of magnitude. In this paper, we present an analytical calculation that indicates aquifer vapor flux to be limited to 0.01 mm/yr for sites where there is negligible recharge and the water table is well over 20 m below the surface. This value arises from the geothermal gradient and is therefore nearly independent of the actual depth of the aquifer. It is in agreement with several numerical studies, but is 500 times lower than recently reported experimental values and 100 times larger than an earlier analytical estimate.

  6. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries, so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to the choice of the new set of transition probabilities. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
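
    A toy version of the likelihood-ratio reweighting described above: estimating the tiny tail probability P(Z > 6) for a standard normal by sampling from a shifted proposal. This illustrates the mechanism only; the dissertation's transport problems are far higher-dimensional.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      n, shift = 100_000, 6.0
      z = rng.normal(loc=shift, size=n)       # sample from the proposal N(shift, 1)
      w = stats.norm.pdf(z) / stats.norm.pdf(z, loc=shift)   # likelihood ratios
      estimate = np.mean((z > 6.0) * w)       # unbiased rare-event estimate
      print(estimate, stats.norm.sf(6.0))     # compare with the exact tail, ~9.9e-10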

  7. How to Recognize Success and Failure: Practical Assessment of an Evolving, First-Semester Laboratory Program Using Simple, Outcome-Based Tools

    ERIC Educational Resources Information Center

    Gron, Liz U.; Bradley, Shelly B.; McKenzie, Jennifer R.; Shinn, Sara E.; Teague, M. Warfield

    2013-01-01

    This paper presents the use of simple, outcome-based assessment tools to design and evaluate the first semester of a new introductory laboratory program created to teach green analytical chemistry using environmental samples. This general chemistry laboratory program, like many introductory courses, has a wide array of stakeholders within and…

  8. Avoiding the Complex History, Simple Answer Syndrome: A Lesson Plan for Providing Depth and Analysis in the High School History Classroom

    ERIC Educational Resources Information Center

    Lindquist, David H.

    2012-01-01

    Examining history from the perspective of investigators who wrestle with involved scenarios for which no simple answers exist, or from which no obvious conclusions can be drawn, allows students to understand the historiographic process and the complex nature of historical events, while gaining valuable practice in applying analytical and critical…

  9. Use of Cdse/ZnS quantum dots for sensitive detection and quantification of paraquat in water samples.

    PubMed

    Durán, Gema M; Contento, Ana M; Ríos, Ángel

    2013-11-01

    Based on the highly sensitive fluorescence change of water-soluble CdSe/ZnS core-shell quantum dots (QDs) induced by the paraquat herbicide, a simple, rapid and reproducible methodology was developed to selectively determine paraquat (PQ) in water samples. The methodology relies on a simple pretreatment procedure based on water solubilization of CdSe/ZnS QDs with hydrophilic heterobifunctional thiol ligands, such as 3-mercaptopropionic acid (3-MPA), using microwave irradiation. The resulting water-soluble QDs exhibit a strong fluorescence emission at 596 nm with high and reproducible photostability. The proposed analytical method thus satisfies the need for a simple, sensitive and rapid methodology to determine residues of paraquat in water samples, as required by the increasingly strict regulations for health protection introduced in recent years. The sensitivity of the method, expressed as the detection limit, was as low as 3.0 ng L⁻¹. The linear range was 10 to 5×10³ ng L⁻¹. RSD values in the range of 71-102% were obtained. The analytical applicability of the proposed method was demonstrated by analyzing water samples of different provenance. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Artificial neural networks for AC losses prediction in superconducting round filaments

    NASA Astrophysics Data System (ADS)

    Leclerc, J.; Makong Hell, L.; Lorin, C.; Masson, P. J.

    2016-06-01

    An extensive and fast method to estimate the AC losses within a superconducting round filament carrying an AC current and subjected to an elliptical magnetic field (both rotating and oscillating) is presented. Elliptical fields are present in rotating machine stators, and being able to accurately predict AC losses in fully superconducting machines is paramount to generating realistic machine designs. The proposed method relies on an analytical scaling law (ASL) combined with two artificial neural network (ANN) estimators taking 9 input parameters representing the superconductor, external field and transport current characteristics. The ANNs are trained with data generated by finite element (FE) computations with commercial software (FlexPDE) based on the widely accepted H-formulation. After training, the model is validated through comparison with additional randomly chosen data points and, for simple field configurations, compared to other predictive models. The loss estimation discrepancy is about 3% on average relative to the FE analysis. The main advantage of the model compared to FE simulations is its fast computation time (a few milliseconds), which allows it to be used in iterated design processes of fully superconducting machines. In addition, the proposed model provides a higher level of fidelity than the scaling laws existing in the literature, which usually consider only pure AC fields.
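
    The surrogate idea is easy to mimic: train a small network on precomputed responses, then query it in milliseconds. The 9 normalized inputs and the mock loss function below are placeholders for the paper's FE training data.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)
      X = rng.uniform(size=(2000, 9))                       # normalized input parameters
      y = np.log(1e-6 + (X[:, 0] * X[:, 1])**2 + X[:, 2])   # stand-in AC-loss response
      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)
      print(net.predict(X[:3]) - y[:3])                     # surrogate error on samples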

  11. Approximate Bayesian evaluations of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
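
    In the spirit of "numerical optimization and simple algebra", the sketch below builds a Gaussian (Laplace) approximation to the posterior of a measurand with a normal prior; the data and prior values are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(7)
      data = rng.normal(10.0, 0.5, size=8)          # repeated indications
      prior_mean, prior_sd, noise_sd = 9.0, 2.0, 0.5

      def neg_log_posterior(mu):
          return (0.5 * np.sum((data - mu) ** 2) / noise_sd**2
                  + 0.5 * (mu - prior_mean) ** 2 / prior_sd**2)

      mode = minimize_scalar(neg_log_posterior).x   # posterior mode by optimization
      curvature = len(data) / noise_sd**2 + 1.0 / prior_sd**2   # simple algebra
      print(mode, curvature ** -0.5)                # estimate and standard uncertainty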

  12. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
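
    The quoted variance partition is simple arithmetic, reproduced below with the paper's numbers for a 10 ng/g lot.

      sampling, prep, analysis = 174.40, 0.74, 0.27   # reported variance components
      total = sampling + prep + analysis
      for name, v in [("sampling", sampling), ("sample prep", prep), ("analysis", analysis)]:
          print(f"{name}: {100 * v / total:.1f}% of total variance")
      # prints ~99.4%, 0.4%, 0.2%, matching the shares reported above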

  13. Integrated Analytical Evaluation and Optimization of Model Parameters against Preprocessed Measurement Data

    DTIC Science & Technology

    1989-06-23

    The most recent changes are: (a) development of the VSTS (velocity space topology search) algorithm for calculating particle densities; (b) extension...with simple analytic models. The largest modification of the MACH code was the implementation of the VSTS procedure, which constituted a complete...

  14. Analytical solution and simplified analysis of coupled parent-daughter steady-state transport with multirate mass transfer

    Treesearch

    R. Haggerty

    2013-01-01

    In this technical note, a steady-state analytical solution of concentrations of a parent solute reacting to a daughter solute, both of which are undergoing transport and multirate mass transfer, is presented. Although the governing equations are complicated, the resulting solution can be expressed in simple terms. A function of the ratio of concentrations, ln(daughter...

  15. Analytical study of shimmy of airplane wheels

    NASA Technical Reports Server (NTRS)

    Bourcier De Carbon, Christian

    1952-01-01

    The problem of shimmy of a castering wheel, such as the nose wheel of a tricycle-gear airplane, is treated analytically. The flexibility of the tire is considered to be the primary cause of shimmy. The rather simple theory developed agrees well with previous experimental results. The author suggests that shimmy may be eliminated through a suitable choice of landing gear dimensions in lieu of a damper.

  16. EFFECTS OF LASER RADIATION ON MATTER. LASER PLASMA: Spatial-temporal distribution of a mechanical load resulting from interaction of laser radiation with a barrier (analytic model)

    NASA Astrophysics Data System (ADS)

    Fedyushin, B. T.

    1992-01-01

    The concepts developed earlier are used to propose a simple analytic model describing the spatial-temporal distribution of a mechanical load (pressure, impulse) resulting from interaction of laser radiation with a planar barrier surrounded by air. The correctness of the model is supported by a comparison with experimental results.

  17. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    ERIC Educational Resources Information Center

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consist of two stages. The method that has been found to perform best in terms of statistical…

  18. A Statistical Physicist's Approach to Biological Motion: From the Kinesin Walk to Muscle Contraction

    NASA Astrophysics Data System (ADS)

    Vicsek, Tamas

    1997-03-01

    It is demonstrated that a wide range of experimental results on biological motion can be successfully interpreted in terms of statistical physics motivated models taking into account the relevant microscopic details of motor proteins and allowing analytic solutions. Two important examples are considered, i) the motion of a single kinesin molecule along microtubules inside individual cells and ii) muscle contraction which is a macroscopic phenomenon due to the collective action of a large number of myosin heads along actin filaments. i) Recently individual two-headed kinesin molecules have been studied in in vitro motility assays revealing a number of their peculiar transport properties. Here we propose a simple and robust model for the kinesin stepping process with elastically coupled Brownian heads showing all of these properties. The analytic treatment of our model results in a very good fit to the experimental data and practically has no free parameters. ii) Myosin is an ATPase enzyme that converts the chemical energy stored in ATP molecules into mechanical work. During muscle contraction, the myosin cross-bridges attach to the actin filaments and exert force on them yielding a relative sliding of the actin and myosin filaments. In this paper we present a simple mechanochemical model for the cross-bridge interaction involving the relevant kinetic data and providing simple analytic solutions for the mechanical properties of muscle contraction, such as the force-velocity relationship or the relative number of the attached cross-bridges. So far the only analytic formula which could be fitted to the measured force-velocity curves has been the well known Hill equation containing parameters lacking clear microscopic origin. The main advantages of our new approach are that it explicitly connects the mechanical data with the kinetic data and the concentration of the ATP and ATPase products and as such it leads to new analytic solutions which agree extremely well with a wide range of experimental curves, while the parameters of the corresponding expressions have well defined microscopic meaning.

  19. Simple analytical relations for ship bow waves

    NASA Astrophysics Data System (ADS)

    Noblesse, Francis; Delhommeau, Gérard; Guilbaud, Michel; Hendrix, Dane; Yang, Chi

    Simple analytical relations for the bow wave generated by a ship in steady motion are given. Specifically, simple expressions are given that define the height of a ship bow wave, the distance between the ship stem and the crest of the bow wave, the rise of water at the stem, and the bow wave profile, explicitly and without calculations, in terms of the ship speed, draught, and waterline entrance angle. Another result is a simple criterion that predicts, also directly and without calculations, when a ship in steady motion cannot generate a steady bow wave. This unsteady-flow criterion predicts that a ship with a sufficiently fine waterline, specifically with waterline entrance angle 2αE ≤ 25°, may generate a steady bow wave at any speed. However, a ship with a fuller waterline (2αE > 25°) can only generate a steady bow wave if the ship speed is higher than a critical speed, defined in terms of αE by a simple relation. No alternative criterion for predicting when a ship in steady motion does not generate a steady bow wave appears to exist. A simple expression for the height of an unsteady ship bow wave is also given. In spite of their remarkable simplicity, the relations for ship bow waves obtained in the study (using only rudimentary physical and mathematical considerations) are consistent with experimental measurements for a number of hull forms having non-bulbous wedge-shaped bows with small flare angle, and with the authors' measurements and observations for a rectangular flat plate towed at a yaw angle.

  20. Extension of the Peters–Belson method to estimate health disparities among multiple groups using logistic regression with survey data

    PubMed Central

    Li, Y.; Graubard, B. I.; Huang, P.; Gastwirth, J. L.

    2015-01-01

    Determining the extent of a disparity, if any, between groups of people, for example, race or gender, is of interest in many fields, including public health for medical treatment and prevention of disease. An observed difference in the mean outcome between an advantaged group (AG) and disadvantaged group (DG) can be due to differences in the distribution of relevant covariates. The Peters–Belson (PB) method fits a regression model with covariates to the AG to predict, for each DG member, their outcome measure as if they had been from the AG. The difference between the mean predicted and the mean observed outcomes of DG members is the (unexplained) disparity of interest. We focus on applying the PB method to estimate the disparity based on binary/multinomial/proportional odds logistic regression models using data collected from complex surveys with more than one DG. Estimators of the unexplained disparity, an analytic variance–covariance estimator that is based on the Taylor linearization variance–covariance estimation method, as well as a Wald test for testing a joint null hypothesis of zero for unexplained disparities between two or more minority groups and a majority group, are provided. Simulation studies with data selected from simple random sampling and cluster sampling, as well as the analyses of disparity in body mass index in the National Health and Nutrition Examination Survey 1999–2004, are conducted. Empirical results indicate that the Taylor linearization variance–covariance estimation is accurate and that the proposed Wald test maintains the nominal level. PMID:25382235
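
    Schematically, the PB decomposition is: fit a model on the advantaged group, predict for the disadvantaged group, and take the unexplained gap. The toy below uses a linear model and a simple random sample, ignoring the survey weights and logistic link of the actual method; all data are simulated.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(8)
      X_ag = rng.normal(0.0, 1.0, (500, 3))            # advantaged-group covariates
      X_dg = rng.normal(0.3, 1.0, (300, 3))            # disadvantaged group differs
      beta = np.array([1.0, 2.0, -1.0])
      y_ag = X_ag @ beta + rng.normal(size=500)
      y_dg = X_dg @ beta - 0.5 + rng.normal(size=300)  # built-in unexplained gap: -0.5

      fit = LinearRegression().fit(X_ag, y_ag)         # model the AG only
      unexplained = np.mean(y_dg) - np.mean(fit.predict(X_dg))
      print(unexplained)                               # should be near -0.5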

  1. On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models.

    PubMed

    Tang, Yongqiang

    2017-12-01

    Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
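
    For reference, Rubin's combining rules referred to above take one line each; the per-imputation estimates below are placeholders.

      import numpy as np

      est = np.array([1.10, 1.25, 0.98, 1.17, 1.05])       # effect, one per imputation
      var = np.array([0.040, 0.042, 0.039, 0.041, 0.040])  # its variance, per imputation
      m = len(est)
      pooled = est.mean()                                   # MI point estimate
      rubin_var = var.mean() + (1 + 1/m) * est.var(ddof=1)  # within + inflated between
      print(pooled, rubin_var)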

  2. Spatiotemporal interpolation of discharge across a river network by using synthetic SWOT satellite data

    NASA Astrophysics Data System (ADS)

    Paiva, Rodrigo C. D.; Durand, Michael T.; Hossain, Faisal

    2015-01-01

    Recent efforts have sought to estimate river discharge and other surface water-related quantities using spaceborne sensors, with better spatial coverage but worse temporal sampling as compared with in situ measurements. The Surface Water and Ocean Topography (SWOT) mission will provide river discharge estimates globally from space. However, questions remain on how to optimally use the spatially distributed but asynchronous satellite observations to generate continuous fields. This paper presents a statistical model (River Kriging, RK) for estimating discharge time series in a river network in the context of the SWOT mission. RK uses discharge estimates at different locations and times to produce a continuous field using spatiotemporal kriging. A key component of RK is the space-time river discharge covariance, which was derived analytically from the diffusive wave approximation of Saint Venant's equations. The RK covariance also accounts for the loss of correlation at confluences. The model performed well in a case study on the Ganges-Brahmaputra-Meghna (GBM) River system in Bangladesh using synthetic SWOT observations. The correlation model reproduced empirically derived values. RK (R2=0.83) outperformed other kriging-based methods (R2=0.80), as well as a simple time series linear interpolation (R2=0.72). RK was used to combine discharge from SWOT and in situ observations, improving estimates when the latter are included (R2=0.91). The proposed statistical concepts may eventually provide a feasible framework to estimate continuous discharge time series across a river network based on SWOT data, other altimetry missions, and/or in situ data.
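
    A toy illustration of the kriging step with a generic separable exponential space-time covariance; the paper instead derives the covariance from the diffusive-wave approximation, so the functional form and parameters below are placeholder assumptions:

```python
# Simple kriging of a discharge anomaly at a target (site, time) from
# scattered observations, with an assumed separable exponential covariance.
import numpy as np

def cov(d_km, dt_days, sigma2=1.0, l_s=50.0, l_t=5.0):
    return sigma2 * np.exp(-d_km / l_s - dt_days / l_t)

# Observations: (distance along river in km, time in days, anomaly value)
obs = np.array([[10.0, 0.0, 0.8],
                [40.0, 2.0, 0.5],
                [80.0, 1.0, 0.3]])

target = (30.0, 1.5)  # (location, time) to estimate

# Build and solve the kriging system C w = c0
n = len(obs)
C = np.empty((n, n))
for i in range(n):
    for j in range(n):
        C[i, j] = cov(abs(obs[i, 0] - obs[j, 0]), abs(obs[i, 1] - obs[j, 1]))
c0 = np.array([cov(abs(o[0] - target[0]), abs(o[1] - target[1])) for o in obs])

w = np.linalg.solve(C, c0)      # simple-kriging weights
estimate = w @ obs[:, 2]
print(f"kriged anomaly: {estimate:.3f}")
```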

  3. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the models considered because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. Overall, a simple linear regression performed better than the more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
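
    A minimal sketch of the setting: an aggregated series mixes process and sampling variation, and a simple linear regression recovers the trend; all parameters below are arbitrary illustrative choices:

```python
# Simulate aggregated monitoring data (trend + process + sampling noise)
# and estimate the linear trend with simple linear regression.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(20)
true_trend = -0.05

process = np.cumsum(rng.normal(0, 0.1, size=years.size))   # process variation
sampling = rng.normal(0, 0.2, size=years.size)             # sampling variation
y = 2.0 + true_trend * years + process + sampling          # aggregated index

slope, intercept = np.polyfit(years, y, 1)
print(f"true trend {true_trend:.3f}, OLS estimate {slope:.3f}")
```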

  4. Coma dust scattering concepts applied to the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Fink, Uwe; Rinaldi, Giovanna

    2015-09-01

    This paper describes basic concepts, as well as providing a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities, several particle size distributions are considered: a simple power law, power laws with a small-particle cutoff, and log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and a specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing particle production rates of various size distributions to agree with observed ground-based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate Direct Simulation Monte Carlo (DSMC) models to define the limits of applicability of the simpler analytical approach. Thus our analytical approach can be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computationally intensive DSMC calculations for specific cases. In addition to a spherically symmetric 1-dimensional approach, we investigate column densities for the 2-dimensional DSMC model on the day and night side of the comet. Our calculations are also applied to estimates of the dust particle densities and flux, which are useful for the in-situ experiments on Rosetta.

  5. Practical estimates of field-saturated hydraulic conductivity of bedrock outcrops using a modified bottomless bucket method

    USGS Publications Warehouse

    Mirus, Benjamin B.; Perkins, Kim S.

    2012-01-01

    The bottomless bucket (BB) approach (Nimmo et al., 2009a) is a cost-effective method for rapidly characterizing field-saturated hydraulic conductivity Kfs of soils and alluvial deposits. This practical approach is of particular value for quantifying infiltration rates in remote areas with limited accessibility. A similar approach for bedrock outcrops is also of great value for improving quantitative understanding of infiltration and recharge in rugged terrain. We develop a simple modification to the BB method for application to bedrock outcrops, which uses a non-toxic, quick-drying silicone gel to seal the BB to the bedrock. These modifications to the field method require only minor changes to the analytical solution for calculating Kfs on soils. We investigate the reproducibility of the method with laboratory experiments on a previously studied calcarenite rock and conduct a sensitivity analysis to quantify uncertainty in our predictions. We apply the BB method on both bedrock and soil for sites on Pahute Mesa, which is located in a remote area of the Nevada National Security Site. The bedrock BB tests may require monitoring over several hours to days, depending on infiltration rates, which necessitates a cover to prevent evaporative losses. Our field and laboratory results compare well to Kfs values inferred from independent reports, which suggests the modified BB method can provide useful estimates and facilitate simple hypothesis testing. The ease with which the bedrock BB method can be deployed should facilitate more rapid in-situ data collection than is possible with alternative methods for quantitative characterization of infiltration into bedrock.

  6. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for obtaining the equivalent field, based on an analysis of scatter reduction due to the inverse square law. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), but there also exist mathematical formulas, used extensively in computational techniques for dose determination, that yield the equivalent square of an irregular rectangular field. These approaches lead to complicated and time-consuming formulas, which motivated the current study. In this work, by considering the contribution of scattered radiation to the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculating the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square of a rectangular field, and it may be used for a shielded field or an off-axis point. One can also calculate, to a good approximation, the equivalent field of a rectangular field using the concept of scatter reduction with the inverse square law. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
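
    For comparison, the widely used 4A/P (Sterling) rule of thumb for the equivalent square of a rectangular field; this is the textbook approximation, not the formula derived in the paper:

```python
# Sterling (4A/P) approximation: a rectangle a x b is roughly equivalent
# to a square whose side equals four times the area over the perimeter.
def equivalent_square_side(a_cm, b_cm):
    return 4 * (a_cm * b_cm) / (2 * (a_cm + b_cm))  # = 2ab/(a+b)

print(equivalent_square_side(10, 20))  # -> 13.33 cm
```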

  7. Light-emitting diodes for analytical chemistry.

    PubMed

    Macka, Mirek; Piasecki, Tomasz; Dasgupta, Purnendu K

    2014-01-01

    Light-emitting diodes (LEDs) are playing increasingly important roles in analytical chemistry, from the final analysis stage to photoreactors for analyte conversion to actual fabrication of and incorporation in microdevices for analytical use. The extremely fast turn-on/off rates of LEDs have made possible simple approaches to fluorescence lifetime measurement. Although they are increasingly being used as detectors, their wavelength selectivity as detectors has rarely been exploited. From their first proposed use for absorbance measurement in 1970, LEDs have been used in analytical chemistry in too many ways to make a comprehensive review possible. Hence, we critically review here the more recent literature on their use in optical detection and measurement systems. Cloudy as our crystal ball may be, we express our views on the future applications of LEDs in analytical chemistry: The horizon will certainly become wider as LEDs in the deep UV with sufficient intensity become available.

  8. Numerical modeling and analytical evaluation of light absorption by gold nanostars

    NASA Astrophysics Data System (ADS)

    Zarkov, Sergey; Akchurin, Georgy; Yakunin, Alexander; Avetisyan, Yuri; Akchurin, Garif; Tuchin, Valery

    2018-04-01

    In this paper, local light absorption by a model of gold nanostars (AuNSts) is studied by numerical simulation. The mutual diffraction influence of individual geometric fragments of AuNSts is analyzed. A comparison is made with an approximate analytical approach for estimating the average bulk density of absorbed power and the total power absorbed by individual geometric fragments of AuNSts. It is shown that the results of the approximate analytical estimate are in qualitative agreement with the numerical calculations of light absorption by AuNSts.

  9. A Simple Laboratory Class Using a "Pseudomonas aeruginosa" Auxotroph to Illustrate UV-Mutagenic Killing, DNA Photorepair and Mutagenic DNA Repair

    ERIC Educational Resources Information Center

    Sobrero, Patricio; Valverde, Claudio

    2013-01-01

    A simple and cheap laboratory class is proposed to illustrate the lethal effect of UV radiation on bacteria and the operation of different DNA repair mechanisms. The class is divided into two sessions, an initial 3-hour experimental session and a second 2-hour analytical session. The experimental session involves two separate experiments: one…

  10. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
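
    A toy illustration of the Cramer-Rao bound as an accuracy predictor, for a scalar decay rate estimated from noisy data; this is a generic example rather than the flight-data computation discussed in the paper:

```python
# Compare Monte Carlo scatter of an ML estimate with its Cramer-Rao bound
# for y(t) = exp(-theta * t) + white Gaussian noise.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 100)
theta_true, sigma = 0.8, 0.05

# Fisher information: F = (1/sigma^2) * sum (dy/dtheta)^2,
# with dy/dtheta = -t * exp(-theta * t)
dyd = -t * np.exp(-theta_true * t)
crb = sigma**2 / np.sum(dyd**2)          # Cramer-Rao variance bound

estimates = []
for _ in range(500):
    y = np.exp(-theta_true * t) + rng.normal(0, sigma, t.size)
    popt, _ = curve_fit(lambda t, th: np.exp(-th * t), t, y, p0=[1.0])
    estimates.append(popt[0])

print(f"CRB std: {np.sqrt(crb):.4f}, Monte Carlo std: {np.std(estimates):.4f}")
```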

  11. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories (subjective and analytical), depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.

  12. Estimation of the simple correlation coefficient.

    PubMed

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
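
    A small simulation in the spirit of the study, comparing the mean squared error of Pearson's r with an approximately unbiased version; the correction factor below is the common Olkin-Pratt-type approximation and should be treated as an assumption rather than the paper's exact formulas:

```python
# MSE comparison of Pearson's r versus an approximately unbiased version.
import numpy as np

rng = np.random.default_rng(3)
rho, n, reps = 0.5, 20, 20000
cov = np.array([[1.0, rho], [rho, 1.0]])

r_vals = np.empty(reps)
for k in range(reps):
    x = rng.multivariate_normal([0, 0], cov, size=n)
    r_vals[k] = np.corrcoef(x[:, 0], x[:, 1])[0, 1]

# Approximate bias correction: r * (1 + (1 - r^2) / (2(n - 3)))
r_adj = r_vals * (1 + (1 - r_vals**2) / (2 * (n - 3)))

print(f"bias  r: {r_vals.mean() - rho:+.4f}, adjusted: {r_adj.mean() - rho:+.4f}")
print(f"MSE   r: {np.mean((r_vals - rho)**2):.5f}, "
      f"adjusted: {np.mean((r_adj - rho)**2):.5f}")
```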

  13. Analytical expressions for stability regions in the Ince-Strutt diagram of Mathieu equation

    NASA Astrophysics Data System (ADS)

    Butikov, Eugene I.

    2018-04-01

    Simple analytical expressions are suggested for transition curves that separate, in the Ince-Strutt diagram, different types of solutions to the famous Mathieu equation. The derivations of these expressions in this paper rely on physically meaningful periodic solutions describing various regular motions of a familiar nonlinear mechanical system—a rigid planar pendulum with a vertically oscillating pivot. The paper is accompanied by a relevant simulation program.
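
    For orientation, the standard form of the Mathieu equation and the classical low-order expansions of the first transition curves (as tabulated, e.g., by Abramowitz and Stegun); the paper's own expressions, derived from pendulum solutions, are not reproduced here:

```latex
% Mathieu equation in standard form
\ddot{y} + \left(a - 2q\cos 2\tau\right) y = 0
% Classical low-order transition curves bounding the stability regions:
a_0 = -\tfrac{1}{2}q^2 + \tfrac{7}{128}q^4 - \cdots, \qquad
b_1, a_1 = 1 \mp q - \tfrac{1}{8}q^2 \pm \tfrac{1}{64}q^3 + \cdots, \qquad
b_2 = 4 - \tfrac{1}{12}q^2 + \cdots, \qquad
a_2 = 4 + \tfrac{5}{12}q^2 - \cdots
```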

  14. On the theory of evolution of particulate systems

    NASA Astrophysics Data System (ADS)

    Buyevich, Yuri A.; Alexandrov, Dmitri V.

    2017-04-01

    An analytical method for the description of particulate systems at sufficiently long times is developed. This method allows us to obtain very simple analytical expressions for the particle distribution function. The method under consideration can be applied to a number of practically important problems including evaporation of a polydisperse mist, dissolution of dispersed solids, combustion of dispersed propellants, physical and chemical transformation of powders and phase transitions in metastable materials.

  15. Analytical solutions of Landau (1+1)-dimensional hydrodynamics

    DOE PAGES

    Wong, Cheuk-Yin; Sen, Abhisek; Gerhard, Jochen; ...

    2014-12-17

    To help guide our intuition, summarize important features, and point out essential elements, we review the analytical solutions of Landau (1+1)-dimensional hydrodynamics and exhibit the full evolution of the dynamics from the very beginning to subsequent times. Special emphasis is placed on the matching and the interplay between the Khalatnikov solution and the Riemann simple wave solution at the earliest times and in the edge regions at later times.

  16. General Procedure for the Easy Calculation of pH in an Introductory Course of General or Analytical Chemistry

    ERIC Educational Resources Information Center

    Cepriá, Gemma; Salvatella, Luis

    2014-01-01

    All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pKa values of the acids involved in the…
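
    A worked instance of the sort of shortcut the procedure formalizes, for an amphiprotic salt; the carbonic acid pKa values are standard textbook numbers:

```latex
% pH of an NaHCO3 solution estimated from the two bracketing pKa values
\mathrm{pH} \approx \tfrac{1}{2}\left(\mathrm{p}K_{a1} + \mathrm{p}K_{a2}\right)
            = \tfrac{1}{2}\left(6.35 + 10.33\right) \approx 8.34
```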

  17. A new frequency approach for light flicker evaluation in electric power systems

    NASA Astrophysics Data System (ADS)

    Feola, Luigi; Langella, Roberto; Testa, Alfredo

    2015-12-01

    In this paper, a new analytical estimator for light flicker in the frequency domain is proposed, one that also takes into account the frequency components neglected by the classical methods proposed in the literature. The analytical solutions apply to any stationary signal affected by interharmonic distortion. The proposed light flicker estimator is applied to numerous numerical case studies with the goal of showing i) the correctness and the improvements of the proposed analytical approach with respect to other methods in the literature and ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.

  18. A simple analytical platform based on thin-layer chromatography coupled with paper-based analytical device for determination of total capsaicinoids in chilli samples.

    PubMed

    Dawan, Phanphruk; Satarpai, Thiphol; Tuchinda, Patoomratana; Shiowatana, Juwadee; Siripinyanond, Atitaya

    2017-01-01

    A new analytical platform based on the use of thin-layer chromatography (TLC) coupled with a paper-based analytical device (PAD) was developed for the determination of total capsaicinoids in chilli samples. This newly developed TLC-PAD is simple and low-cost, without requiring special instrumentation or skilled personnel. The analysis consists of extraction of capsaicinoids from chilli samples using ethanol as solvent, separation of the capsaicinoids by TLC, and elution from the TLC plate with in situ colorimetric detection on the PAD. For colorimetric detection, Folin-Ciocalteu reagent was used to detect the phenolic functional group of capsaicinoids, yielding a blue color. The blue color on the PAD was imaged by a scanner, followed by evaluation of its grayscale intensity value with the ImageJ program. The method provided a linear range from 50 to 1000 mg L-1 capsaicinoids, with a limit of detection as low as 50 mg L-1. The proposed method was applied to determine capsaicinoids in dried chilli and seasoning powder samples, and the results were in good agreement with those obtained by an HPLC method. Copyright © 2016 Elsevier B.V. All rights reserved.
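
    A sketch of the calibration arithmetic implied by the stated linear range: fit grayscale intensity against standard concentrations, then invert for an unknown; all readings below are invented for illustration:

```python
# Linear calibration of grayscale intensity vs. capsaicinoid concentration.
import numpy as np

standards_mg_per_L = np.array([50, 100, 250, 500, 750, 1000])
grayscale = np.array([12.0, 21.5, 48.0, 93.5, 141.0, 188.0])  # made-up readings

slope, intercept = np.polyfit(standards_mg_per_L, grayscale, 1)

unknown_gray = 75.0
estimated_conc = (unknown_gray - intercept) / slope
print(f"estimated capsaicinoids: {estimated_conc:.0f} mg/L")
```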

  19. Analytic Neutrino Oscillation Probabilities in Matter: Revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parke, Stephen J.; Denton, Peter B.; Minakata, Hisakazu

    2018-01-02

    We summarize our recent paper on neutrino oscillation probabilities in matter, explaining the importance, relevance and need for simple, highly accurate approximations to the neutrino oscillation probabilities in matter.

  20. Finite Element Analysis of Simple Rectangular Microstrip Sensor for Determination Moisture Content of Hevea Rubber Latex

    NASA Astrophysics Data System (ADS)

    Yahaya, NZ; Ramli, MR; Razak, NNANA; Abbas, Z.

    2018-04-01

    The finite element method (FEM) has been successfully used to model a simple rectangular microstrip sensor for determining the moisture content of Hevea rubber latex. The FEM simulation of the sensor and samples was implemented using COMSOL Multiphysics software. The simulation includes the calculation of the magnitude and phase of the reflection coefficient, which were compared to an analytical method. The results show good agreement between the simulated and analytical magnitude and phase of the reflection coefficient. Field distributions of both the unloaded sensor and the sensor loaded with different percentages of moisture content were visualized using FEM in conjunction with COMSOL software. The higher the moisture content of the sample, the more electric field loops were observed.

  1. Conditions and Linear Stability Analysis at the Transition to Synchronization of Three Coupled Phase Oscillators in a Ring

    NASA Astrophysics Data System (ADS)

    El-Nashar, Hassan F.

    2017-06-01

    We consider a system of three nonidentical coupled phase oscillators in a ring topology. We explore the conditions that must be satisfied in order to obtain the phases at the transition to a synchrony state. These conditions lead to the correct mathematical expressions for the phases, which help to find a simple analytic formula for the critical coupling at which the oscillators transit to a synchronization state with a common frequency. Finding a simple expression for the critical coupling allows us to perform a linear stability analysis at the transition to synchronization. The obtained analytic forms of the eigenvalues show that the three coupled phase oscillators with periodic boundary conditions transit to a synchrony state when a saddle-node bifurcation occurs.
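
    As a numerical companion, one can integrate three phase oscillators on a ring and sweep the coupling to locate the synchronization transition empirically; the sinusoidal nearest-neighbor coupling below is an assumed standard form, not necessarily the paper's exact model:

```python
# Three nonidentical phase oscillators on a ring; sweep coupling K and
# report when the frequencies lock to a common value.
import numpy as np

omega = np.array([1.0, 1.2, 0.7])           # natural frequencies (illustrative)

def locked(K, T=400.0, dt=0.01):
    theta = np.zeros(3)
    drift = np.zeros(3)
    steps = int(T / dt)
    for s in range(steps):
        dtheta = omega + K * (np.sin(np.roll(theta, -1) - theta)
                              + np.sin(np.roll(theta, 1) - theta))
        theta += dt * dtheta
        if s >= steps // 2:                  # average frequency at late times
            drift += dtheta
    freqs = drift / (steps - steps // 2)
    return np.ptp(freqs) < 1e-3              # all frequencies (nearly) equal?

for K in np.arange(0.05, 0.5, 0.05):
    print(f"K = {K:.2f}: {'synchronized' if locked(K) else 'drifting'}")
```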

  2. Prediction of thermal cycling induced matrix cracking

    NASA Technical Reports Server (NTRS)

    Mcmanus, Hugh L.

    1992-01-01

    Thermal fatigue has been observed to cause matrix cracking in laminated composite materials. A method is presented to predict transverse matrix cracks in composite laminates subjected to cyclic thermal load. Shear lag stress approximations and a simple energy-based fracture criterion are used to predict crack densities as a function of temperature. Prediction of crack densities as a function of thermal cycling is accomplished by assuming that fatigue degrades the material's inherent resistance to cracking. The method is implemented as a computer program. A simple experiment provides data on progressive cracking of a laminate with decreasing temperature. Existing data on thermal fatigue are also used. Correlations of the analytical predictions to the data are very good. A parametric study using the analytical method is presented which provides insight into material behavior under cyclical thermal loads.

  3. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  4. Quality-assurance results for field pH and specific-conductance measurements, and for laboratory analysis, National Atmospheric Deposition Program and National Trends Network; January 1980-September 1984

    USGS Publications Warehouse

    Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.

    1986-01-01

    Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S.G.S. for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25 C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25 C. Performance-audit samples with known analyte concentrations were prepared by the U.S.G.S. and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated in their analysis of identical natural- and simulated-precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories. (Author's abstract)

  5. User's guide to the Radiometric Age Data Bank (RADB)

    USGS Publications Warehouse

    Zartman, Robert Eugene; Cole, James C.; Marvin, Richard F.

    1976-01-01

    The Radiometric Age Data Bank (RADB) has been established by the U.S. Geological Survey, as a means for collecting and organizing the estimated 100,000 radiometric ages presently published for the United States. RADB has been constructed such that a complete sample description (location, rock type, etc.), literature citation, and extensive analytical data are linked to form an independent record for each sample reported in a published work. Analytical data pertinent to the potassium-argon, rubidium-strontium, uranium-thorium-lead, lead-alpha, and fission-track methods can be accommodated, singly or in combinations, for each record. Data processing is achieved using the GIPSY program (University of Oklahoma) which maintains the data file and builds, updates, searches, and prints the records using simple yet versatile command statements. Searching and selecting records is accomplished by specifying the presence, absence, or (numeric or alphabetic) value of any element of information in the data bank, and these specifications can be logically linked to develop sophisticated searching strategies. Output is available in the form of complete data records, abbreviated tests, or columnar tabulations. Samples of data-reporting forms, GIPSY command statements, output formats, and data records are presented to illustrate the comprehensive nature and versatility of the Radiometric Age Data Bank.

  6. Analysis of recovery efficiency in high-temperature aquifer thermal energy storage: a Rayleigh-based method

    NASA Astrophysics Data System (ADS)

    Schout, Gilian; Drijver, Benno; Gutierrez-Neri, Mariene; Schotting, Ruud

    2014-01-01

    High-temperature aquifer thermal energy storage (HT-ATES) is an important technique for energy conservation. A controlling factor for the economic feasibility of HT-ATES is the recovery efficiency. Due to the effects of density-driven flow (free convection), HT-ATES systems applied in permeable aquifers typically have lower recovery efficiencies than conventional (low-temperature) ATES systems. For a reliable estimation of the recovery efficiency it is, therefore, important to take the effect of density-driven flow into account. A numerical evaluation of the prime factors influencing the recovery efficiency of HT-ATES systems is presented. Sensitivity runs evaluating the effects of aquifer properties, as well as operational variables, were performed to deduce the most important factors that control the recovery efficiency. A correlation was found between the dimensionless Rayleigh number (a measure of the relative strength of free convection) and the calculated recovery efficiencies. Based on a modified Rayleigh number, two simple analytical solutions are proposed to calculate the recovery efficiency, each one covering a different range of aquifer thicknesses. The analytical solutions accurately reproduce all numerically modeled scenarios with an average error of less than 3%. The proposed method can be of practical use when considering or designing an HT-ATES system.
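
    For context, one common form of the porous-medium thermal Rayleigh number on which such correlations are built, with k the permeability, H the aquifer thickness, β the thermal expansivity, μ the dynamic viscosity, and D_th the thermal diffusivity; the paper's modified definition may differ:

```latex
\mathrm{Ra} = \frac{\rho_0\, g\, \beta\, \Delta T\, k\, H}{\mu\, D_{\mathrm{th}}}
```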

  7. Assessment of regional air quality by a concentration-dependent Pollution Permeation Index

    PubMed Central

    Liang, Chun-Sheng; Liu, Huan; He, Ke-Bin; Ma, Yong-Liang

    2016-01-01

    Although air quality monitoring networks have been greatly improved, interpreting their expanding data in both simple and efficient ways remains challenging, so new analytical methods are needed. We developed such a method based on the comparison of pollutant concentrations between target and circum areas (circum comparison for short), and tested its application by assessing the air pollution in Jing-Jin-Ji, Yangtze River Delta, Pearl River Delta and Cheng-Yu, China during 2015. We found the circum comparison can instantly judge whether a city is a pollution permeation donor or a pollution permeation receptor by a Pollution Permeation Index (PPI). Furthermore, a PPI-related estimated concentration (original concentration plus halved average concentration difference) can be used to identify some overestimations and underestimations. Besides, it can help explain pollution processes (e.g., Beijing's PM2.5 may be largely promoted by non-local SO2) though it does not aim at this. Moreover, it is applicable to any region, easy to handle, and able to give rise to more new analytical methods. These advantages, despite the method's limitations in capturing the whole process jointly influenced by complex physical and chemical factors, demonstrate that the PPI-based circum comparison can be efficiently used in assessing air pollution by yielding instructive results, without the absolute need for complex operations. PMID:27731344
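
    The circum-comparison arithmetic described above reduces to a few lines; the sign convention for donor versus receptor below is an assumption, since the abstract does not spell out the PPI definition:

```python
# Circum comparison: classify a target city as pollution-permeation donor
# or receptor, and form the PPI-related estimated concentration.
import numpy as np

def circum_comparison(c_target, c_circum):
    diff = c_target - np.mean(c_circum)        # PPI-like index (assumed sign)
    role = "donor" if diff > 0 else "receptor"
    # "original concentration plus halved average concentration difference"
    c_est = c_target + 0.5 * (np.mean(c_circum) - c_target)
    return role, c_est

role, c_est = circum_comparison(85.0, [60.0, 70.0, 65.0])
print(role, round(c_est, 1))
```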

  8. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith is also presented.

  9. Application of adjusted data in calculating fission-product decay energies and spectra

    NASA Astrophysics Data System (ADS)

    George, D. C.; Labauve, R. J.; England, T. R.

    1982-06-01

    The code ADENA, which approximately calculates fission-product beta and gamma decay energies and spectra in 19 or fewer energy groups from a mixture of U235 and Pu239 fuels, is described. The calculation uses aggregate, adjusted data derived from a combination of several experiments and summation results based on the ENDF/B-V fission product file. The method used to obtain these adjusted data and the method used by ADENA to calculate fission-product decay energy with an absorption correction are described, and an estimate of the uncertainty of the ADENA results is given. Comparisons of this approximate method are made to experimental measurements, to the ANSI/ANS 5.1-1979 standard, and to other calculational methods. A listing of the complete computer code (ADENA) is contained in an appendix. Included in the listing are data statements containing the adjusted data in the form of parameters to be used in simple analytic functions.

  10. Analysis of neutron propagation from the skyshine port of a fusion neutron source facility

    NASA Astrophysics Data System (ADS)

    Wakisaka, M.; Kaneko, J.; Fujita, F.; Ochiai, K.; Nishitani, T.; Yoshida, S.; Sawamura, T.

    2005-12-01

    The process of neutrons leaking from a 14 MeV neutron source facility was analyzed by calculations and experiments. The experiments were performed at the Fusion Neutron Source (FNS) facility of the Japan Atomic Energy Institute, Tokai-mura, Japan, which has a port on the roof for skyshine experiments, and a 3He counter surrounded with a polyethylene moderator of different thicknesses was used to estimate the energy spectra and dose distributions. The 3He counter with a 3-cm-thick moderator was also used for dose measurements, and the doses evaluated by the counter counts and the calculated count-to-dose conversion factor agreed with the calculations to within ~30%. The dose distribution was found to fit a simple analytical expression, D(r) = Q exp(-r/λ_D)/r, and the parameters Q and λ_D are discussed.
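
    A sketch of recovering Q and λ_D from dose-distance data by nonlinear least squares; the data points below are fabricated for illustration:

```python
# Fit the skyshine form D(r) = Q * exp(-r / lam) / r to dose-distance data.
import numpy as np
from scipy.optimize import curve_fit

def dose(r, Q, lam):
    return Q * np.exp(-r / lam) / r

r = np.array([50.0, 100.0, 200.0, 400.0, 600.0])         # m (illustrative)
D = np.array([4.1e-1, 1.6e-1, 5.0e-2, 8.0e-3, 2.1e-3])   # arbitrary dose units

popt, _ = curve_fit(dose, r, D, p0=[20.0, 200.0])
print(f"Q = {popt[0]:.2f}, lambda_D = {popt[1]:.0f} m")
```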

  11. Adaptive method for electron bunch profile prediction

    DOE PAGES

    Scheinker, Alexander; Gessner, Spencer

    2015-10-15

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. Thus, the simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. Finally, the main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET.

  12. In-pile measurement of the thermal conductivity of irradiated metallic fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, T.H.; Holland, J.W.

    Transient test data and posttest measurements from recent in-pile overpower transient experiments are used for an in situ determination of metallic fuel thermal conductivity. For test pins that undergo melting but remain intact, a technique is described that relates fuel thermal conductivity to peak pin power during the transient and a posttest measured melt radius. Conductivity estimates and their uncertainty are made for a database of four irradiated Integral Fast Reactor-type metal fuel pins of relatively low burnup (<3 at.%). In the assessment of results, averages and trends of measured fuel thermal conductivity are correlated to local burnup. Emphasis is placed on the changes of conductivity that take place with burnup-induced swelling and sodium logging. Measurements are used to validate simple empirically based analytical models that describe thermal conductivity of porous media and that are recommended for general thermal analyses of irradiated metallic fuel.

  13. Seeking maximum linearity of transfer functions

    NASA Astrophysics Data System (ADS)

    Silva, Filipi N.; Comin, Cesar H.; Costa, Luciano da F.

    2016-12-01

    Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function (theoretical or derived from some real system), identifying the respective most linear region of operation with a fixed width. This methodology, based on least squares regression and systematic consideration of all possible regions, is illustrated both for an analytical case (a sigmoid transfer function) and for a simple experimental case involving data from a low-power, one-stage class A transistor current amplifier. The approach, applied to transfer functions derived from experimentally obtained characteristic surfaces, also yielded contributions such as the estimation of local constants of the device, as opposed to the typically considered average values. The reported method and results pave the way to several further applications in other types of devices and systems, intelligent control operation, and other areas such as identifying regions of power-law behavior.
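
    The described search is straightforward to reproduce: slide a fixed-width window along the transfer curve, fit a line by least squares in each, and keep the window with the smallest residual; a sketch assuming uniformly sampled data:

```python
# Find the most linear fixed-width operating region of a transfer function
# by exhaustive least-squares fitting over all candidate windows.
import numpy as np

x = np.linspace(-6, 6, 601)
y = 1 / (1 + np.exp(-x))           # sigmoid transfer function

width = 120                        # window width in samples
best = (np.inf, 0)
for start in range(len(x) - width):
    xs, ys = x[start:start + width], y[start:start + width]
    coef = np.polyfit(xs, ys, 1)
    rms = np.sqrt(np.mean((np.polyval(coef, xs) - ys) ** 2))
    if rms < best[0]:
        best = (rms, start)

rms, start = best
print(f"most linear region: x in [{x[start]:.2f}, {x[start + width - 1]:.2f}], "
      f"RMS residual {rms:.2e}")
```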

  14. The economics of bootstrapping space industries - Development of an analytic computer model

    NASA Technical Reports Server (NTRS)

    Goldberg, A. H.; Criswell, D. R.

    1982-01-01

    A simple economic model of 'bootstrapping' industrial growth in space and on the Moon is presented. An initial space manufacturing facility (SMF) is assumed to consume lunar materials to enlarge the productive capacity in space. After reaching a predetermined throughput, the enlarged SMF is devoted to products which generate revenue continuously in proportion to the accumulated output mass (such as space solar power stations). Present discounted value and physical estimates for the general factors of production (transport, capital efficiency, labor, etc.) are combined to explore optimum growth in terms of maximized discounted revenues. It is found that 'bootstrapping' reduces the fractional cost to a space industry of transport off-Earth and permits more efficient use of a given transport fleet. It is concluded that more attention should be given to structuring 'bootstrapping' scenarios in which 'learning while doing' can be more fully incorporated in program analysis.

  15. On the injection of fine dust from the Jovian magnetosphere

    NASA Technical Reports Server (NTRS)

    Maravilla, D.; Flammer, K. R.; Mendis, D. A.

    1995-01-01

    Using a simple aligned dipole model of the Jovian magnetic field, and exploiting integrals of the gravito-electrodynamic equation of motion of charged dust, we obtain an analytic result which characterizes the nature of the orbits of grains of different (fixed) charge-to-mass ratios launched at different velocities from different radial distances from Jupiter. This enables us to consider various possible sources of the dust-streams emanating from Jupiter which have been observed by the Ulysses spacecraft. We conclude that Jupiter's volcanically active satellite Io is the likely source, in agreement with the earlier calculations and simulations of Horanyi et al. using a detailed three-dimensional model of the Jovian magnetosphere. Our estimates of the size range and the velocity range of these dust grains are also in good agreement with those of the above authors and are within the error bars of the observations.

  16. Analysis of a photon assisted field emission device

    NASA Astrophysics Data System (ADS)

    Jensen, K. L.; Lau, Y. Y.; McGregor, D. S.

    2000-07-01

    A field emitter array held at the threshold of emission by a dc gate potential from which current pulses are triggered by the application of a laser pulse on the backside of the semiconductor may produce electron bunches ("density modulation") at gigahertz frequencies. We develop an analytical model of such optically controlled emission from a silicon tip using a modified Wentzel-Kramers-Brillouin and Airy function approach to solving Schrödinger's equation. Band bending and an approximation to the exchange-correlation effects on the image charge potential are included for an array of hyperbolic emitters with a distribution in tip radii and work function. For a simple relationship between the incident photon flux and the resultant electron density at the emission site, an estimation of the tunneling current is made. An example of the operation and design of such a photon-assisted field emission device is given.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Li; Jacobsen, Stein B., E-mail: astrozeng@gmail.com, E-mail: jacobsen@neodymium.harvard.edu

    In the past few years, the number of confirmed planets has grown above 2000. It is clear that they represent a diversity of structures not seen in our own solar system. In addition to very detailed interior modeling, it is valuable to have a simple analytical framework for describing planetary structures. The variational principle is a fundamental principle in physics, entailing that a physical system follows the trajectory which minimizes its action. It is an alternative to the differential equation formulation of a physical system. Applying the variational principle to the planetary interior can beautifully summarize the set of differential equations into one, which provides some insight into the problem. From this principle, a universal mass–radius relation, an estimate of the error propagation from the equation of state to the mass–radius relation, and a form of the virial theorem applicable to planetary interiors are derived.

  19. The existence and abundance of ghost ancestors in biparental populations.

    PubMed

    Gravel, Simon; Steel, Mike

    2015-05-01

    In a randomly mating biparental population of size N there are, with high probability, individuals who are genealogical ancestors of every extant individual within approximately log2(N) generations into the past. We use this result of J. Chang to prove a curious corollary under standard models of recombination: there exist, with high probability, individuals within a constant multiple of log2(N) generations into the past who are simultaneously (i) genealogical ancestors of each of the individuals at the present, and (ii) genetic ancestors to none of the individuals at the present. Such ancestral individuals, ancestors of everyone today who left no genetic trace, represent 'ghost' ancestors in a strong sense. In this short note, we use a simple analytical argument and simulations to estimate how many such individuals exist in finite Wright-Fisher populations. Copyright © 2015 Elsevier Inc. All rights reserved.
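
    A minimal simulation of the genealogical half of the argument in Chang's biparental model (each individual draws two parents uniformly at random): a common ancestor of all N present-day individuals typically appears within roughly log2(N) generations; tracking genetic ancestry through recombination is beyond this sketch:

```python
# Biparental Wright-Fisher genealogy: count generations back until some
# past individual is a genealogical ancestor of all N present individuals.
import numpy as np

rng = np.random.default_rng(4)
N = 500

# reachable[i, j] = True if past individual j is an ancestor of present i
reachable = np.eye(N, dtype=bool)
generation = 0
while True:
    generation += 1
    mothers = rng.integers(0, N, size=N)
    fathers = rng.integers(0, N, size=N)
    prev = np.zeros((N, N), dtype=bool)
    for j in range(N):                  # parents of j inherit j's descendants
        prev[:, mothers[j]] |= reachable[:, j]
        prev[:, fathers[j]] |= reachable[:, j]
    reachable = prev
    if reachable.all(axis=0).any():     # someone is ancestor of everyone
        break

print(f"common ancestor of all after {generation} generations "
      f"(log2(N) = {np.log2(N):.1f})")
```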

  20. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple-cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
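
    For readers unfamiliar with the method, a compact sketch of the Wolff cluster update for the simple-cubic Ising model; this is for exposition only, since results at the precision reported here demand far greater care with random number quality and statistics:

```python
# Wolff single-cluster update for the 3D Ising model on an L^3 lattice.
import numpy as np
from collections import deque

rng = np.random.default_rng(5)
L, K = 8, 0.2216546          # lattice size; coupling near K_c
spins = rng.choice([-1, 1], size=(L, L, L))
p_add = 1 - np.exp(-2 * K)   # bond-activation probability

def wolff_step(s):
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = s[seed]
    s[seed] = -cluster_spin                  # flip sites as they join
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if s[nb] == cluster_spin and rng.random() < p_add:
                s[nb] = -cluster_spin
                queue.append(nb)

for sweep in range(200):
    wolff_step(spins)
print(f"magnetization per spin: {spins.mean():+.3f}")
```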
