Sample records for obtaining sufficiently accurate

  1. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
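
    The excerpt above describes post-processing a classifier's outputs to obtain better probabilities. One common calibration method of this kind is Platt scaling (the excerpt does not name the specific methods studied): a held-out calibration set is used to fit a logistic map from raw classifier scores to calibrated probabilities. A minimal sketch, assuming scikit-learn is available; all dataset and variable names are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Toy data split into a training set and a held-out calibration set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

# Base classifier whose decision scores are not calibrated probabilities.
base = LinearSVC(random_state=0).fit(X_train, y_train)

# Platt scaling: fit a one-dimensional logistic regression from raw scores
# to labels on the calibration set.
scores_cal = base.decision_function(X_cal).reshape(-1, 1)
platt = LogisticRegression().fit(scores_cal, y_cal)

def calibrated_proba(X_new):
    """Return calibrated P(y = 1) for new samples."""
    s = base.decision_function(X_new).reshape(-1, 1)
    return platt.predict_proba(s)[:, 1]

print(calibrated_proba(X_cal[:5]))
```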

  2. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information and thereby reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the resulting thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and Worldview-2 high-resolution imagery was employed. The classes of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet 'à trous' and Weighted Wavelet 'à trous' through Fractal Dimension Maps) were chosen to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied with pixel-based and object-based approaches, and an accuracy assessment of the thematic maps obtained was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier with the object-based approach to the Weighted Wavelet 'à trous' through Fractal Dimension Maps fused image. Finally, the results highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and the small size of the species. It is therefore important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.

  3. Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses

    PubMed Central

    Myers, Risa B.; Herskovic, Jorge R.

    2011-01-01

    Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing's sensitivity and specificity both by conducting a "Simulated Expert Review", where a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a "Bayesian Chain", using Bayes' Theorem to calculate the probability of a patient having a condition after each visit. The second method is a "one-shot" approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes' Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our
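
    A minimal sketch of the "Bayesian Chain" idea described above: billing is treated as a diagnostic test with known sensitivity and specificity, and Bayes' Theorem updates the probability that a patient has the condition after each visit. The expected patient count is then the sum of per-patient posteriors. All numbers and function names below are illustrative, not taken from the paper.

```python
def update_posterior(prior, billed, sensitivity, specificity):
    """One application of Bayes' Theorem for a single visit.

    billed -- whether the visit produced a bill for the condition.
    """
    if billed:
        # P(bill | condition) = sensitivity; P(bill | no condition) = 1 - specificity
        num = sensitivity * prior
        den = sensitivity * prior + (1 - specificity) * (1 - prior)
    else:
        num = (1 - sensitivity) * prior
        den = (1 - sensitivity) * prior + specificity * (1 - prior)
    return num / den

def bayesian_chain(visits, prevalence, sensitivity, specificity):
    """Chain the Bayesian update over all visits of one patient."""
    p = prevalence
    for billed in visits:
        p = update_posterior(p, billed, sensitivity, specificity)
    return p

# Expected patient count = sum of per-patient posteriors (illustrative values).
patients = [[True, False, True], [False, False], [True]]
posteriors = [bayesian_chain(v, prevalence=0.10, sensitivity=0.85, specificity=0.95)
              for v in patients]
print(sum(posteriors))
```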

  4. Glucose Meters: A Review of Technical Challenges to Obtaining Accurate Results

    PubMed Central

    Tonyushkina, Ksenia; Nichols, James H.

    2009-01-01

    , anemia, hypotension, and other disease states. This article reviews the challenges involved in obtaining accurate glucose meter results. PMID:20144348

  5. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    To develop a more precise and accurate method for locating acupoints, and to identify a procedure for measuring whether an acupoint has been correctly located. On the face, we used acupoint locations from different acupuncture experts and obtained the most precise and accurate values of acupoint location based on a consistency information fusion algorithm, through a virtual simulation of a facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, the systematic error affects the general weight calculation. First, we corrected each expert's systematic error in acupoint location, to obtain a rational quantification of the degree of consistent support for each expert's acupoint location and to obtain pointwise variable-precision fusion results, raising the fusion error of every expert's acupoint location to pointwise variable precision. Then, we made more effective use of the measured characteristics of the different experts' acupoint locations, to improve the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  6. Development of a Method to Obtain More Accurate General and Oral Health Related Information Retrospectively

    PubMed Central

    Golkari, A; Sabokseir, A; Blane, D; Sheiham, A; Watt, RG

    2017-01-01

    Statement of Problem: Early childhood is a crucial period of life as it affects one’s future health. However, precise data on adverse events during this period is usually hard to access or collect, especially in developing countries. Objectives: This paper first reviews the existing methods for retrospective data collection in health and social sciences, and then introduces a new method/tool for obtaining more accurate general and oral health related information from early childhood retrospectively. Materials and Methods: The Early Childhood Events Life-Grid (ECEL) was developed to collect information on the type and time of health-related adverse events during the early years of life, by questioning the parents. The validity of ECEL and the accuracy of information obtained by this method were assessed in a pilot study and in a main study of 30 parents of 8 to 11 year old children from Shiraz (Iran). Responses obtained from parents using the final ECEL were compared with the recorded health insurance documents. Results: There was an almost perfect agreement between the health insurance and ECEL data sets (Kappa value=0.95 and p < 0.001). Interviewees remembered the important events more accurately (100% exact timing match in case of hospitalization). Conclusions: The Early Childhood Events Life-Grid method proved to be highly accurate when compared with recorded medical documents. PMID:28959773
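
    The agreement figure quoted above is Cohen's kappa. As a brief illustration of how such a value is computed from two parallel sets of categorical answers (here, parent recall versus insurance records), the following sketch uses invented data, not the study's.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equally long lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # chance agreement expected from the marginal frequencies of each rater
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# e.g. event types recalled by parents vs. recorded in insurance documents
ecel      = ["hospitalization", "none", "antibiotics", "none", "hospitalization"]
insurance = ["hospitalization", "none", "antibiotics", "none", "none"]
print(round(cohens_kappa(ecel, insurance), 2))
```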

  7. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    DTIC Science & Technology

    2015-06-09

    Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327. Center for Learning and Memory, The University of Texas at Austin, Austin, TX, USA.

  8. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate rate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg desired in an electrolyte solution comprised of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. The method for doing this involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution comprised of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire producing the required, pre-determined quantity of Hg.

  9. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate rate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg desired in an electrolyte solution comprised of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. The method for doing this involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution comprised of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire producing the required, pre-determined quantity of Hg. 1 fig.

  10. Guidelines and techniques for obtaining water samples that accurately represent the water chemistry of an aquifer

    USGS Publications Warehouse

    Claassen, Hans C.

    1982-01-01

    Obtaining ground-water samples that accurately represent the water chemistry of an aquifer is a complex task. Before a ground-water sampling program can be started, an understanding is needed of the kind of chemical data required and of the potential changes in water chemistry that can result from various drilling, well-completion, and sampling techniques. This report provides a basis for such an evaluation and permits a choice of techniques that will result in obtaining the best possible data for the time and money allocated.

  11. On canonical cylinder sections for accurate determination of contact angle in microgravity

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  12. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  13. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain more accurate description of electrostatic properties.
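
    As a worked illustration of the idea that atomic charges supplemented by higher atomic moments give a better electrostatic potential, the sketch below evaluates the standard point-multipole expansion (monopole plus dipole terms only) at an external point. It is a generic expansion in atomic units with made-up values, not the authors' CAMM code.

```python
import numpy as np

def potential_from_atomic_moments(r, positions, charges, dipoles):
    """Electrostatic potential at point r (atomic units) from atomic
    charges q_a and atomic dipole moments mu_a placed at the nuclei."""
    v = 0.0
    for pos, q, mu in zip(positions, charges, dipoles):
        d = r - pos
        dist = np.linalg.norm(d)
        v += q / dist                 # monopole (charge) term, ~1/R
        v += np.dot(mu, d) / dist**3  # dipole correction, ~1/R^2
    return v

# Illustrative CO-like example: two atoms on the z axis with placeholder
# charges and atomic dipoles (not actual CAMM results).
positions = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.13]])
charges   = np.array([0.2, -0.2])
dipoles   = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, -0.05]])
print(potential_from_atomic_moments(np.array([0.0, 0.0, 8.0]),
                                    positions, charges, dipoles))
```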

  14. Latest Developments on Obtaining Accurate Measurements with Pitot Tubes in ZPG Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Nagib, Hassan; Vinuesa, Ricardo

    2013-11-01

    The ability of available Pitot tube corrections to provide accurate mean velocity profiles in ZPG boundary layers is re-examined following the recent work by Bailey et al. Measurements by Bailey et al., carried out with probes of diameters ranging from 0.2 to 1.89 mm, together with new data taken with larger diameters up to 12.82 mm, show deviations with respect to available high-quality datasets and hot-wire measurements in the same Reynolds number range. These deviations are significant in the buffer region around y+ = 30 - 40, and lead to disagreement in the von Kármán coefficient κ extracted from profiles. New forms for shear, near-wall and turbulence corrections are proposed, highlighting the importance of the last of these. Improved agreement in mean velocity profiles is obtained with the new forms, where shear and near-wall corrections contribute around 85%, and the remaining 15% of the total correction comes from the turbulence correction. Finally, available algorithms to correct wall position in profile measurements of wall-bounded flows are tested, using as benchmark the corrected Pitot measurements with artificially simulated probe shifts and blockage effects. We develop a new scheme, κB-Musker, which is able to accurately locate the wall position.

  15. Compensation method for obtaining accurate, sub-micrometer displacement measurements of immersed specimens using electronic speckle interferometry.

    PubMed

    Fazio, Massimo A; Bruno, Luigi; Reynaud, Juan F; Poggialini, Andrea; Downs, J Crawford

    2012-03-01

    We proposed and validated a compensation method that accounts for the optical distortion inherent in measuring displacements on specimens immersed in aqueous solution. A spherically-shaped rubber specimen was mounted and pressurized on a custom apparatus, with the resulting surface displacements recorded using electronic speckle pattern interferometry (ESPI). Point-to-point light direction computation is achieved by a ray-tracing strategy coupled with customized B-spline-based analytical representation of the specimen shape. The compensation method reduced the mean magnitude of the displacement error induced by the optical distortion from 35% to 3%, and ESPI displacement measurement repeatability showed a mean variance of 16 nm at the 95% confidence level for immersed specimens. The ESPI interferometer and numerical data analysis procedure presented herein provide reliable, accurate, and repeatable measurement of sub-micrometer deformations obtained from pressurization tests of spherically-shaped specimens immersed in aqueous salt solution. This method can be used to quantify small deformations in biological tissue samples under load, while maintaining the hydration necessary to ensure accurate material property assessment.

  16. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate, analytical, approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  17. Exploratory Movement Generates Higher-Order Information That Is Sufficient for Accurate Perception of Scaled Egocentric Distance

    PubMed Central

    Mantel, Bruno; Stoffregen, Thomas A.; Campbell, Alain; Bardy, Benoît G.

    2015-01-01

    Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate. PMID:25856410

  18. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  19. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    PubMed

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.

  20. Accurate interatomic force fields via machine learning with covariant kernels

    NASA Astrophysics Data System (ADS)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
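
    A minimal sketch of the finite-group construction mentioned at the end of the abstract: a matrix-valued kernel is built by summing rotations of a scalar base kernel over the elements of a finite rotation group, so that force predictions rotate with the configuration. The base kernel, the group, and the exact form of the sum are placeholders chosen for brevity and may differ from the authors' formulation.

```python
import numpy as np

def base_kernel(x1, x2, sigma=1.0):
    """Scalar squared-exponential kernel between two flattened configurations."""
    d = np.linalg.norm(x1 - x2)
    return np.exp(-0.5 * (d / sigma) ** 2)

def covariant_kernel(conf1, conf2, group):
    """3x3 matrix-valued kernel: sum over R of R * k_base(conf1, R applied to conf2).

    conf1, conf2 are (n_atoms, 3) arrays of relative atomic positions in a
    local environment; group is a list of 3x3 rotation matrices."""
    K = np.zeros((3, 3))
    for R in group:
        K += R * base_kernel(conf1.ravel(), (conf2 @ R.T).ravel())
    return K

# Placeholder finite group: the 4 rotations about the z axis by multiples of 90 degrees.
def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

group = [rot_z(k * np.pi / 2) for k in range(4)]

conf1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.2]])
conf2 = np.array([[0.9, 0.1, 0.0], [0.1, 1.1, 0.1]])
print(covariant_kernel(conf1, conf2, group))
```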

  1. Foresight begins with FMEA. Delivering accurate risk assessments.

    PubMed

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  2. Third-Order Incremental Dual-Basis Set Zero-Buffer Approach: An Accurate and Efficient Way To Obtain CCSD and CCSD(T) Energies.

    PubMed

    Zhang, Jun; Dolg, Michael

    2013-07-09

    An efficient way to obtain accurate CCSD and CCSD(T) energies for large systems, i.e., the third-order incremental dual-basis set zero-buffer approach (inc3-db-B0), has been developed and tested. This approach combines the powerful incremental scheme with the dual-basis set method, and along with the newly proposed K-means clustering (KM) method and zero-buffer (B0) approximation, can obtain very accurate absolute and relative energies efficiently. We tested the approach for 10 systems of different chemical nature, i.e., intermolecular interactions including hydrogen bonding, dispersion interaction, and halogen bonding; an intramolecular rearrangement reaction; aliphatic and conjugated hydrocarbon chains; three compact covalent molecules; and a water cluster. The results show that the errors are <1.94 kJ/mol (or 0.46 kcal/mol) for relative energies and <0.0026 hartree for absolute energies. By parallelization, our approach can be applied to molecules of more than 30 atoms and more than 100 correlated electrons with high-quality basis sets such as cc-pVDZ or cc-pVTZ, saving computational cost by a factor of more than 10-20 compared to a traditional implementation. The physical reasons for the success of the inc3-db-B0 approach are also analyzed.

  3. Rapid and accurate peripheral nerve detection using multipoint Raman imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro

    2017-02-01

    Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies to detect nerves for nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds is sufficient for measuring n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. To compensate for the spatial sparseness of the multipoint-Raman-derived tissue discrimination image, which is too sparse to visualize nerve arrangement, we used morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of the Raman measurement points in an arbitrary structure is determined to be nerve. Setting the nerve detection criterion as 40% or more "nerve" points in a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.

  4. Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany; Gallo, Giulia; Brinkman, Gregory

    Revenue insufficiency, or the missing money problem, occurs when the revenues that generators earn from the market are not sufficient to cover both fixed and variable costs to remain in the market and/or justify investments in new capacity, which may be needed for reliability. The near-zero marginal cost of variable renewable generators further exacerbates these revenue challenges. Estimating the extent of the missing money problem in current electricity markets is an important, nontrivial task that requires representing both how the power system operates and how market participants behave. This paper explores the missing money problem using a production cost model that represented a simplified version of the Electric Reliability Council of Texas (ERCOT) energy-only market for the years 2012-2014. We evaluate how various market structures -- including market behavior, ancillary services, and changing fleet compositions -- affect net revenues in this ERCOT-like system. In most production cost modeling exercises, resources are assumed to offer their marginal capabilities at marginal costs. Although this assumption is reasonable for feasibility studies and long-term planning, it does not adequately consider the market behaviors that impact revenue sufficiency. In this work, we simulate a limited set of market participant strategic bidding behaviors by means of different sets of markups; these markups are applied to the true production costs of all gas generators, which are the most prominent generators in ERCOT. Results show that markups can help generators increase their net revenues overall, although net revenues may increase or decrease depending on the technology and the year under study. Results also confirm that conventional, variable-cost-based production cost simulations do not capture prices accurately, and this particular feature calls for proxies for strategic behaviors (e.g., markups) and more accurate representations of how electricity markets
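
    A toy illustration of the markup idea described above, not the study's model: offers are formed by marking up true marginal costs, and net revenue is energy revenue minus variable cost minus fixed cost. All numbers are invented.

```python
def net_revenue(dispatch_mwh, prices, marginal_cost, fixed_cost):
    """Net revenue of one generator over a set of hours."""
    energy_revenue = sum(p * g for p, g in zip(prices, dispatch_mwh))
    variable_cost = marginal_cost * sum(dispatch_mwh)
    return energy_revenue - variable_cost - fixed_cost

# Offers from a gas generator with a 20% markup on its true marginal cost.
marginal_cost = 30.0            # $/MWh, invented
markup = 0.20
offer_price = marginal_cost * (1 + markup)

# Invented hourly clearing prices and dispatch; the generator runs only in
# hours where the clearing price covers its offer.
prices = [25.0, 38.0, 55.0, 42.0]
dispatch = [0.0 if p < offer_price else 100.0 for p in prices]

print(net_revenue(dispatch, prices, marginal_cost, fixed_cost=3000.0))
```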

  5. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    PubMed Central

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
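
    A minimal sketch of the kind of precision-weighted group-level statistic the abstract advocates: each subject contributes a mean and a variance of that mean, and subject means are combined with inverse-variance weights instead of equal weights. This is a generic fixed-effects illustration with simulated data, not the paper's exact estimator.

```python
import numpy as np

def inverse_variance_group_stat(subject_samples):
    """Combine per-subject effect estimates using inverse-variance weights.

    subject_samples -- list of 1-D arrays, one array of trial values per subject.
    Returns the weighted group mean and an approximate z statistic against 0.
    """
    means = np.array([s.mean() for s in subject_samples])
    # variance of each subject's mean (within-subject variance / n_trials)
    var_of_means = np.array([s.var(ddof=1) / len(s) for s in subject_samples])
    weights = 1.0 / var_of_means
    group_mean = np.sum(weights * means) / np.sum(weights)
    group_se = np.sqrt(1.0 / np.sum(weights))
    return group_mean, group_mean / group_se

rng = np.random.default_rng(0)
# Simulated nested data: subjects with different trial counts and noise levels.
subjects = [rng.normal(0.3, sd, n) for sd, n in [(1.0, 50), (2.0, 20), (0.5, 80)]]
print(inverse_variance_group_stat(subjects))
```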

  6. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  7. When Sufficiently Processed, Semantically Related Distractor Pictures Hamper Picture Naming.

    PubMed

    Matushanskaya, Asya; Mädebach, Andreas; Müller, Matthias M; Jescheniak, Jörg D

    2016-11-01

    Prominent speech production models view lexical access as a competitive process. According to these models, a semantically related distractor picture should interfere with target picture naming more strongly than an unrelated one. However, several studies failed to obtain such an effect. Here, we demonstrate that semantic interference is obtained, when the distractor picture is sufficiently processed. Participants named one of two pictures presented in close temporal succession, with color cueing the target. Experiment 1 induced the prediction that the target appears first. When this prediction was violated (distractor first), semantic interference was observed. Experiment 2 ruled out that the time available for distractor processing was the driving force. These results show that semantically related distractor pictures interfere with the naming response when they are sufficiently processed. The data thus provide further support for models viewing lexical access as a competitive process.

  8. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  9. Pharmacists' knowledge and the difficulty of obtaining emergency contraception.

    PubMed

    Bennett, Wendy; Petraitis, Carol; D'Anella, Alicia; Marcella, Stephen

    2003-10-01

    This cross-sectional study was performed to examine knowledge and attitudes among pharmacists about emergency contraception (EC) and determine the factors associated with their provision of EC. A random systematic sampling method was used to obtain a sample (N = 320) of pharmacies in Pennsylvania. A "mystery shopper" telephone survey method was utilized. Only 35% of pharmacists stated that they would be able to fill a prescription for EC that day. Also, many community pharmacists do not have sufficient or accurate information about EC. In a logistic regression model, pharmacists' lack of information relates to the low proportion of pharmacists able to dispense it. In conclusion, access to EC from community pharmacists in Pennsylvania is severely limited. Interventions to improve timely access to EC involve increased education for pharmacists, as well as increased community request for these products as an incentive for pharmacists to stock them.

  10. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
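
    A rough sketch of the two-stage idea described above: factors are first extracted from the high-dimensional predictor panel by principal components, and sufficient predictive indices are then estimated from those factors by a standard sufficient dimension reduction method (sliced inverse regression is used here as a stand-in; the paper's exact estimator may differ). All data are simulated.

```python
import numpy as np

def pca_factors(X, k):
    """Extract k principal-component factors from a T x p predictor panel."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # T x k factor estimates

def sliced_inverse_regression(F, y, n_dirs=1, n_slices=10):
    """Estimate predictive directions of y given factors F via SIR."""
    Fc = F - F.mean(axis=0)
    cov = np.cov(Fc, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))   # whitening transform
    Z = Fc @ L
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((F.shape[1], F.shape[1]))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / len(y)) * np.outer(m, m)
    eigvals, eigvecs = np.linalg.eigh(M)
    dirs = eigvecs[:, ::-1][:, :n_dirs]          # leading eigenvectors
    return L @ dirs                              # back to the factor scale

# Toy example: y depends nonlinearly on one index of the latent factors.
rng = np.random.default_rng(1)
T, p, k = 300, 50, 3
F_true = rng.normal(size=(T, k))
X = F_true @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(T, p))
y = np.sin(F_true[:, 0]) + 0.1 * rng.normal(size=T)

F_hat = pca_factors(X, k)
index = F_hat @ sliced_inverse_regression(F_hat, y)
print(index.shape)   # the sufficient predictive index used for forecasting
```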

  11. FNA, core biopsy, or both for the diagnosis of lung carcinoma: Obtaining sufficient tissue for a specific diagnosis and molecular testing.

    PubMed

    Coley, Shana M; Crapanzano, John P; Saqi, Anjali

    2015-05-01

    Increasingly, minimally invasive procedures are performed to assess lung lesions and stage lung carcinomas. In cases of advanced-stage lung cancer, the biopsy may provide the only diagnostic tissue. The aim of this study was to determine which method-fine-needle aspiration (FNA), core biopsy (CBx), or both (B)--is optimal for providing sufficient tissue for rendering a specific diagnosis and pursuing molecular studies for guiding tumor-specific treatment. A search was performed for computed tomography-guided lung FNA, CBx, or B cases with rapid onsite evaluation. Carcinomas were assessed for the adequacy to render a specific diagnosis; this was defined as enough refinement to subtype a primary carcinoma or to assess a metastatic origin morphologically and/or immunohistochemically. In cases of primary lung adenocarcinoma, the capability of each modality to yield sufficient tissue for molecular studies (epidermal growth factor receptor, KRAS, or anaplastic lymphoma kinase) was also assessed. There were 210 cases, and 134 represented neoplasms, including 115 carcinomas. For carcinomas, a specific diagnosis was reached in 89% of FNA cases (33 of 37), 98% of CBx cases (43 of 44), and 100% of B cases (34 of 34). For primary lung adenocarcinomas, adequate tissue remained to perform molecular studies in 94% of FNA cases (16 of 17), 100% of CBx cases (19 of 19), and 86% of B cases (19 of 22). No statistical difference was found among the modalities for either reaching a specific diagnosis (p = .07, Fisher exact test) or providing sufficient tissue for molecular studies (p = .30, Fisher exact test). The results suggest that FNA, CBx, and B are comparable for arriving at a specific diagnosis and having sufficient tissue for molecular studies: they specifically attained the diagnostic and prognostic goals of minimally invasive procedures for lung carcinoma. © 2015 American Cancer Society.

  12. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can
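
    A small sketch of the weighted RMS metric discussed above: differences between simulated and measured CDs are weighted, for example by the inverse variance of each gauge's measured CD, before averaging. Gauge values and weights are illustrative, not from the study.

```python
import numpy as np

def weighted_rms_error(cd_simulated, cd_measured, weights=None):
    """Weighted root-mean-square difference between simulated and measured CDs (nm)."""
    diff = np.asarray(cd_simulated, dtype=float) - np.asarray(cd_measured, dtype=float)
    if weights is None:
        weights = np.ones_like(diff)
    weights = np.asarray(weights, dtype=float)
    return np.sqrt(np.sum(weights * diff**2) / np.sum(weights))

# Illustrative gauge data: edge points with larger CD variation get lower weight.
cd_sim   = [64.1, 65.8, 90.2, 31.5]
cd_meas  = [65.0, 66.0, 88.5, 33.0]
cd_sigma = [0.8, 0.9, 2.5, 3.0]          # measured CD variation per gauge
weights  = [1.0 / s**2 for s in cd_sigma]

print(weighted_rms_error(cd_sim, cd_meas, weights))
```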

  13. 48 CFR 9.105-1 - Obtaining information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    (a) Before making a determination of responsibility, the contracting officer shall possess or obtain information sufficient to be satisfied that a prospective contractor currently meets the...

  14. Purification of pharmaceutical preparations using thin-layer chromatography to obtain mass spectra with Direct Analysis in Real Time and accurate mass spectrometry.

    PubMed

    Wood, Jessica L; Steiner, Robert R

    2011-06-01

    Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Purchasing analytical standards can be expensive or unattainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time-efficient and cost-saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.

  15. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR. PMID:24605060
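
    A minimal sketch of grid-ID-based cloaking in the spirit of the algorithms above: starting from the querying user's grid cell, the region grows ring by ring until it covers at least K reported users, without any party handling exact coordinates. Details such as the growth order are placeholders, not the paper's algorithms, and the example assumes at least K users exist.

```python
from collections import defaultdict

def cloak_by_grid(user_cells, query_user, k):
    """Return a set of grid-cell IDs containing the query user's cell and
    at least k users in total. user_cells maps user_id -> (col, row) cell ID."""
    cell_counts = defaultdict(int)
    for cell in user_cells.values():
        cell_counts[cell] += 1

    cx, cy = user_cells[query_user]
    radius = 0
    while True:
        region = {(cx + dx, cy + dy)
                  for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)}
        if sum(cell_counts[c] for c in region) >= k:
            return region
        radius += 1       # grow the anonymous spatial region

# Illustrative users reporting only their grid-cell IDs, not coordinates.
users = {"u1": (4, 4), "u2": (4, 5), "u3": (5, 4), "u4": (9, 9), "u5": (4, 4)}
print(cloak_by_grid(users, "u1", k=4))
```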

  16. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
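
    The dose computation this validation hinges on is a simple masked average: given a voxelwise dose map and an organ label map (from either the automated or the expert segmentation), the mean organ dose is the mean of the dose values inside each labeled region. A generic numpy sketch with made-up array shapes and organ labels.

```python
import numpy as np

def mean_organ_doses(dose_map, label_map, organ_labels):
    """Mean dose per organ region from a dose map and a segmentation label map."""
    return {organ: float(dose_map[label_map == label].mean())
            for organ, label in organ_labels.items()}

# Made-up 3-D dose map and segmentation with two labeled regions.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 50.0, size=(64, 64, 32))   # mGy, illustrative
labels = np.zeros((64, 64, 32), dtype=int)
labels[10:30, 10:30, 5:20] = 1                     # e.g. brain
labels[40:55, 40:60, 5:15] = 2                     # e.g. spinal canal

auto = mean_organ_doses(dose, labels, {"brain": 1, "spinal_canal": 2})
print(auto)

# The percent error of automated vs. expert segmentation would then be
# 100 * (auto[organ] - expert[organ]) / expert[organ] for each organ.
```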

  17. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  18. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.

    1977-01-01

    Necessary and sufficient conditions were obtained for a surjective bounded linear operator T from a Banach space X to a Banach space Y to be a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population had the property that the sample mean was a sufficient statistic.

  19. Sequence-based screening for self-sufficient P450 monooxygenase from a metagenome library.

    PubMed

    Kim, B S; Kim, S Y; Park, J; Park, W; Hwang, K Y; Yoon, Y J; Oh, W K; Kim, B Y; Ahn, J S

    2007-05-01

    Cytochrome P450 monooxygenases (CYPs) are useful catalysts for oxidation reactions. Self-sufficient CYPs harbour a reductive domain covalently connected to a P450 domain and are known for their robust catalytic activity with great potential as biocatalysts. In an effort to expand genetic sources of self-sufficient CYPs, we devised a sequence-based screening system to identify them in a soil metagenome. We constructed a soil metagenome library and performed sequence-based screening for self-sufficient CYP genes. A new CYP gene, syk181, was identified from the metagenome library. Phylogenetic analysis revealed that SYK181 formed a distinct phylogenic line with 46% amino-acid-sequence identity to CYP102A1, which has been extensively studied as a fatty acid hydroxylase. The heterologously expressed SYK181 showed significant hydroxylase activity towards naphthalene and phenanthrene as well as towards fatty acids. Sequence-based screening of metagenome libraries is expected to be a useful approach for searching for self-sufficient CYP genes. The translated product of syk181 shows self-sufficient hydroxylase activity towards fatty acids and aromatic compounds. SYK181 is the first self-sufficient CYP obtained directly from a metagenome library. The genetic and biochemical information on SYK181 is expected to be helpful for engineering self-sufficient CYPs with broader catalytic activities towards various substrates, which would be useful for bioconversion of natural products and biodegradation of organic chemicals.

  20. Redox self-sufficient whole cell biotransformation for amination of alcohols.

    PubMed

    Klatte, Stephanie; Wendisch, Volker F

    2014-10-15

    Whole cell biotransformation is an upcoming tool to replace common chemical routes for functionalization and modification of desired molecules. In the approach presented here the production of various non-natural (di)amines was realized using the designed whole cell biocatalyst Escherichia coli W3110/pTrc99A-ald-adh-ta with plasmid-borne overexpression of genes for an l-alanine dehydrogenase, an alcohol dehydrogenase and a transaminase. Cascading alcohol oxidation with l-alanine dependent transamination and l-alanine dehydrogenase allowed for redox self-sufficient conversion of alcohols to the corresponding amines. The supplementation of the corresponding (di)alcohol precursors as well as amino group donor l-alanine and ammonium chloride were sufficient for amination and redox cofactor recycling in a resting buffer system. The addition of the transaminase cofactor pyridoxal-phosphate and the alcohol dehydrogenase cofactor NAD(+) was not necessary to obtain complete conversion. Secondary and cyclic alcohols, for example, 2-hexanol and cyclohexanol were not aminated. However, efficient redox self-sufficient amination of aliphatic and aromatic (di)alcohols in vivo was achieved with 1-hexanol, 1,10-decanediol and benzylalcohol being aminated best. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases: precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  2. Do short international layovers allow sufficient opportunity for pilots to recover?

    PubMed

    Lamond, Nicole; Petrilli, Renée M; Dawson, Drew; Roach, Gregory D

    2006-01-01

    For Australian pilots, short layovers (<40 h) are a feature of many international patterns. However, anecdotal reports suggest that flight crew members find patterns with short slips more fatiguing than those with a longer international layover, as they restrict the opportunity to obtain sufficient sleep. The current study aimed to determine whether pilots operating international patterns with short layovers have sufficient opportunity to recover prior to the inbound flight. Nineteen international pilots (ten captains, nine first officers) operating a direct return pattern from Australia to Los Angeles (LAX) with a short (n = 9) 9+/-0.8 h (mean+/-S.D.) or long (n = 10) 62.2+/-0.9 h LAX layover wore an activity monitor and kept a sleep/duty diary during the pattern. Immediately before and after each flight, pilots completed a 5 min PalmPilot-based psychomotor vigilance task (Palm-PVT). Flights were of comparable duration and timing, outbound (3.5+/-0.6 h) and inbound (14.3+/-0.6 h). The amount of sleep obtained in-flight did not significantly vary as a function of layover length. However, pilots obtained significantly more sleep during the inbound (3.7+/-0.8 h) than the outbound flight (2.2+/-0.8 h). Pilots with the shorter layover obtained significantly less sleep in total during layover (14.0+/-2.7 h vs. 19.6+/-2.5 h), due to significantly fewer sleep periods (3.0+/-0.7 vs. 4.0+/-0.9). However, neither mean sleep duration nor the sleep obtained in the 24 h prior to the inbound flight significantly differed as a function of layover length. Response speed significantly varied across the pattern, and a significant interaction was also observed. For pilots with a short layover, response speed was significantly slower at the end of both the outbound and inbound flight, and prior to the inbound flight (i.e., at the end of layover), relative to response speed at the start of the pattern (pre-trip). Similarly, response speed for the longer layover was slower at the end of the

  3. Linear stable unity-feedback system - Necessary and sufficient conditions for stability under nonlinear plant perturbations

    NASA Technical Reports Server (NTRS)

    Desoer, C. A.; Kabuli, M. G.

    1989-01-01

    The authors consider a linear (not necessarily time-invariant) stable unity-feedback system, where the plant and the compensator have normalized right-coprime factorizations. They study two cases of nonlinear plant perturbations (additive and feedback), with four subcases resulting from: (1) allowing exogenous input to Delta P or not; (2) allowing the observation of the output of Delta P or not. The plant perturbation Delta P is not required to be stable. Using the factorization approach, the authors obtain necessary and sufficient conditions for all cases in terms of two pairs of nonlinear pseudostate maps. Simple physical considerations explain the form of these necessary and sufficient conditions. Finally, the authors obtain the characterization of all perturbations Delta P for which the perturbed system remains stable.

  4. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.

    PubMed

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter

    2015-01-01

    Obtaining large-scale sequence alignments in a fast and flexible way is an important step in the analysis of next-generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
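
    A short editorial illustration follows: the scoring recurrence that GPU implementations such as PaSWAS parallelise, written as a minimal pure-Python Smith-Waterman sketch. The match/mismatch/gap values and the example sequences are arbitrary assumptions, not PaSWAS defaults.

```python
# Minimal Smith-Waterman local-alignment scorer (illustrative sketch only;
# scoring values and sequences are arbitrary assumptions, not PaSWAS defaults).
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, clamped at zero for local alignment
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best  # maximum local alignment score over all cell pairs

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```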

  5. CaFe2O4 as a self-sufficient solar energy converter

    NASA Astrophysics Data System (ADS)

    Tablero, C.

    2017-10-01

    An ideal solar energy to electricity or fuel converter should work without the use of any external bias potential. An analysis of self-sufficiency when CaFe2O4 is used to absorb the sunlight is carried out based on the CaFe2O4 absorption coefficient. We first obtain this coefficient theoretically within the experimental bandgap range in order to bound the possible values of the photocurrent, maximum absorption efficiency and photovoltage, and thus of self-sufficiency, considering only radiative processes. Also, for single-gap CaFe2O4, we evaluate an alternative for increasing the photocurrent and maximum absorption efficiency based on inserting an intermediate band using high doping or alloying.

  6. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.

  7. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics (VSD) packages in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.
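
    Editorial note: the claim of roughly 16 times fewer grid points in the 2-D contact patch is consistent with the error orders (error ~ h for a first-order scheme versus ~ h^2 for a second-order one, with the point count scaling as 1/h^2, so a several-times-coarser spacing in each in-plane direction can give comparable accuracy). The sketch below is not the FASTSIM integration itself; it merely demonstrates first- versus second-order convergence on a toy quadrature problem.

```python
# First- vs second-order convergence demo (not the FASTSIM scheme itself):
# left-endpoint rule (1st order) vs midpoint rule (2nd order) for a smooth integrand.
import math

def left_rule(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def midpoint_rule(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = math.e - 1.0  # integral of exp(x) over [0, 1]
for n in (10, 20, 40, 80):
    e1 = abs(left_rule(math.exp, 0.0, 1.0, n) - exact)      # roughly halves as n doubles
    e2 = abs(midpoint_rule(math.exp, 0.0, 1.0, n) - exact)  # roughly quarters as n doubles
    print(f"n={n:3d}  first-order error={e1:.2e}  second-order error={e2:.2e}")
```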

  8. Sufficiency of Mesolimbic Dopamine Neuron Stimulation for the Progression to Addiction.

    PubMed

    Pascoli, Vincent; Terrier, Jean; Hiver, Agnès; Lüscher, Christian

    2015-12-02

    The factors causing the transition from recreational drug consumption to addiction remain largely unknown. It has not been tested whether dopamine (DA) is sufficient to trigger this process. Here we use optogenetic self-stimulation of DA neurons of the ventral tegmental area (VTA) to selectively mimic the defining commonality of addictive drugs. All mice readily acquired self-stimulation. After weeks of abstinence, cue-induced relapse was observed in parallel with a potentiation of excitatory afferents onto D1 receptor-expressing neurons of the nucleus accumbens (NAc). When the mice had to endure a mild electric foot shock to obtain a stimulation, some stopped while others persevered. The resistance to punishment was associated with enhanced neural activity in the orbitofrontal cortex (OFC) while chemogenetic inhibition of the OFC reduced compulsivity. Together, these results show that stimulating VTA DA neurons induces behavioral and cellular hallmarks of addiction, indicating sufficiency for the induction and progression of the disease. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Self-sufficiency, free trade and safety.

    PubMed

    Rautonen, Jukka

    2010-01-01

    The relationship between free trade, self-sufficiency and safety of blood and blood components has been a perennial discussion topic in the blood service community. Traditionally, national self-sufficiency has been perceived as the ultimate goal that would also maximize safety. However, very few countries are, or can be, truly self-sufficient when self-sufficiency is understood correctly to encompass the whole value chain from the blood donor to the finished product. This is most striking when plasma derived medicines are considered. Free trade of blood products, or competition, as such can have a negative or positive effect on blood safety. Further, free trade of equipment and reagents and several plasma medicines is actually necessary to meet the domestic demand for blood and blood derivatives in most countries. Opposing free trade due to dogmatic reasons is not in the best interest of any country and will be especially harmful for the developing world. Competition between blood services in the USA has been present for decades. The more than threefold differences in blood product prices between European blood services indicate that competition is long overdue in Europe, too. This competition should be welcomed but carefully and proactively regulated to avoid putting safe and secure blood supply at risk. Copyright 2009 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.

  10. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-12-01

    Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both radiative transfer models, libRadtran and SMARTS, offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE of 27 %, a bias of -24 % and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.
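
    For reference, the two-term Henyey-Greenstein representation mentioned above can be written compactly; the short sketch below uses placeholder asymmetry parameters and weight, not AERONET-retrieved values.

```python
# Two-term Henyey-Greenstein (TTHG) phase function sketch; g values and the weight
# are placeholders, not AERONET retrievals.
import numpy as np

def henyey_greenstein(cos_theta, g):
    # Single-term HG phase function, normalised so its integral over 4*pi sr equals 1.
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

def two_term_hg(cos_theta, weight, g_forward, g_backward):
    return (weight * henyey_greenstein(cos_theta, g_forward)
            + (1.0 - weight) * henyey_greenstein(cos_theta, g_backward))

theta = np.radians(np.linspace(0.0, 10.0, 11))   # near-forward angles, i.e. the circumsolar region
print(two_term_hg(np.cos(theta), weight=0.95, g_forward=0.80, g_backward=-0.50))
```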

  11. Working toward Self-Sufficiency.

    ERIC Educational Resources Information Center

    Caplan, Nathan

    1985-01-01

    Upon arrival in the United States, the Southeast Asian "Boat People" faced a multitude of problems that would seem to have hindered their achieving economic self-sufficiency. Nonetheless, by the time of a 1982 research study which interviewed nearly 1,400 refugee households, 25 percent of all the households in the sample had achieved…

  12. 39 CFR 491.3 - Sufficient legal form.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Sufficient legal form. 491.3 Section 491.3 Postal... AND THE POSTAL RATE COMMISSION § 491.3 Sufficient legal form. No document purporting to garnish... is legal process in the nature of garnishment; that it is issued by a court of competent jurisdiction...

  13. 39 CFR 491.3 - Sufficient legal form.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Sufficient legal form. 491.3 Section 491.3 Postal... AND THE POSTAL RATE COMMISSION § 491.3 Sufficient legal form. No document purporting to garnish... is legal process in the nature of garnishment; that it is issued by a court of competent jurisdiction...

  14. 27 CFR 25.174 - Bond not sufficient.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Bond not sufficient. 25.174 Section 25.174 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS BEER Tax on Beer Prepayment of Tax § 25.174 Bond not sufficient. When the...

  15. 27 CFR 25.174 - Bond not sufficient.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Bond not sufficient. 25.174 Section 25.174 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL BEER Tax on Beer Prepayment of Tax § 25.174 Bond not sufficient. When the...

  16. 27 CFR 25.174 - Bond not sufficient.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Bond not sufficient. 25.174 Section 25.174 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL BEER Tax on Beer Prepayment of Tax § 25.174 Bond not sufficient. When the...

  17. 27 CFR 25.174 - Bond not sufficient.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Bond not sufficient. 25.174 Section 25.174 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS BEER Tax on Beer Prepayment of Tax § 25.174 Bond not sufficient. When the...

  18. 27 CFR 25.174 - Bond not sufficient.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Bond not sufficient. 25.174 Section 25.174 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS BEER Tax on Beer Prepayment of Tax § 25.174 Bond not sufficient. When the...

  19. When is Testing Sufficient

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush

    1999-01-01

    The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free? " and "How much time and effort is required to detect and remove the remaining errors? " Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.

  20. 29 CFR 1403.3 - Obtaining data on labor-management disputes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Obtaining data on labor-management disputes. 1403.3 Section... FUNCTIONS AND DUTIES § 1403.3 Obtaining data on labor-management disputes. When the existence of a labor... information to determine if the Service should proffer its services under its policies. If sufficient data on...

  1. Improving Angles-Only Navigation Performance by Selecting Sufficiently Accurate Accelerometers

    DTIC Science & Technology

    2009-08-01

    controller for thrusters and a PID controller for momentum wheels. Translational control leverages a PD controller for station keeping, and Clohessy-Wiltshire (CW) equations targeting for transfers. Navigation is detailed in Section III.A. III.A. Kalman Filter Development: A Square-Root EKF is

  2. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations of the multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  3. Guess LOD approach: sufficient conditions for robustness.

    PubMed

    Williamson, J A; Amos, C I

    1995-01-01

    Analysis of genetic linkage between a disease and a marker locus requires specifying a genetic model describing both the inheritance pattern and the gene frequencies of the marker and trait loci. Misspecification of the genetic model is likely for etiologically complex diseases. In previous work we have shown through analytic studies that misspecifying the genetic model for disease inheritance does not lead to excess false-positive evidence for genetic linkage provided the genetic marker alleles of all pedigree members are known, or can be inferred without bias from the data. Here, under various selection or ascertainment schemes we extend these previous results to situations in which the genetic model for the marker locus may be incorrect. We provide sufficient conditions for the asymptotic unbiased estimation of the recombination fraction under the null hypothesis of no linkage, and also conditions for the limiting distribution of the likelihood ratio test for no linkage to be chi-squared. Through simulation studies we document some situations under which asymptotic bias can result when the genetic model is misspecified. Among those situations under which an excess of false-positive evidence for genetic linkage can be generated, the most common is failure to provide accurate estimates of the marker allele frequencies. We show that in most cases false-positive evidence for genetic linkage is unlikely to result solely from the misspecification of the genetic model for disease or trait inheritance.
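
    For orientation, the statistic under discussion can be illustrated with a toy LOD-score computation for fully informative, phase-known meioses; the counts below are invented for illustration only.

```python
# Toy LOD score for fully informative, phase-known meioses (counts are invented).
import math

def lod_score(recombinants, total, theta):
    # L(theta) = theta^r * (1 - theta)^(n - r); LOD = log10 L(theta) - log10 L(0.5)
    r, n = recombinants, total
    log_l = r * math.log10(theta) + (n - r) * math.log10(1.0 - theta)
    return log_l - n * math.log10(0.5)

r, n = 2, 20
theta_hat = r / n                         # maximum-likelihood recombination fraction
print(round(lod_score(r, n, theta_hat), 3))
# 2*ln(10)*LOD is the likelihood-ratio statistic whose limiting chi-squared
# behaviour (under the null of no linkage) the paper's conditions address.
```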

  4. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  5. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  6. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS-based accurate time system has been developed, which includes GPS, a 1 MC frequency source and a self-made clock system. The second signal of GPS is used synchronously in the clock system, and the information can be collected by a computer automatically. The difficulty of the cancellation of the time keeper can be overcome by using this system.

  7. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize profiles of edges from a LR image, which hence leads to a poor PSF estimate for the lens taking the LR image. For precisely estimating the PSF, this paper proposes first estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges and then the Hough transform is utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
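
    A minimal OpenCV sketch of the edge-and-line extraction front end described above follows; the Canny thresholds, the Hough accumulator threshold and the file name are placeholder assumptions, not values from the paper.

```python
# Canny edge detection followed by a standard Hough line transform, as a front end
# to 1-D PSF kernel estimation; thresholds and the file name are placeholder assumptions.
import cv2
import numpy as np

image = cv2.imread("low_res.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 50, 150)

lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)  # each line is returned as (rho, theta)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```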

  8. Sufficient condition for a finite-time singularity in a high-symmetry Euler flow: Analysis and statistics

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    1996-08-01

    A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (p_xx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (p_xxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrates that, for fixed total energy, p_xxxx is predominantly positive with the average value growing with the number of modes.
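
    Editorial note: a schematic reconstruction of the type of Lagrangian (Riccati) argument involved is sketched below, assuming that by the symmetry of the Kida flow the velocity is purely axial on the x axis; consult the paper for the precise statement and hypotheses.

```latex
% Schematic Riccati-type blow-up argument (editorial reconstruction, not the paper's derivation).
% Differentiating the x-momentum equation along the x axis and following a fluid element:
\[
  \frac{D u_x}{D t} = -\,u_x^{2} - p_{xx}, \qquad
  \frac{D}{D t} \equiv \frac{\partial}{\partial t} + u\,\frac{\partial}{\partial x}.
\]
% If $p_{xx} > 0$ along the trajectory, then $D u_x/Dt \le -u_x^{2}$, and once $u_x$ becomes
% negative the comparison bound $u_x(t) \le u_x(t_0)\,/\,\bigl(1 + u_x(t_0)(t - t_0)\bigr)$
% forces $u_x \to -\infty$ in finite time.
```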

  9. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually very simple, yet practically complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a

  10. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
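
    The leave-one-out protocol described above can be summarised in a few lines of code; the sketch below uses synthetic dose arrays (random numbers), not the study's data, purely to show the bookkeeping.

```python
# Leave-one-out summary of per-organ dose error (synthetic data, illustrative bookkeeping only).
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_organs = 20, 9
dose_expert = rng.uniform(5.0, 50.0, size=(n_cases, n_organs))        # expert-segmentation doses
# dose_auto[atlas, case, organ]: automated estimate for 'case' when 'atlas' serves as the atlas.
dose_auto = dose_expert[None, :, :] * rng.normal(1.0, 0.03, size=(n_cases, n_cases, n_organs))

rel_errors = []
for case in range(n_cases):
    for atlas in range(n_cases):
        if atlas == case:
            continue                       # leave the case itself out of the atlas pool
        err = np.abs(dose_auto[atlas, case] - dose_expert[case]) / dose_expert[case]
        rel_errors.append(err)

rel_errors = np.array(rel_errors)          # shape (20*19, 9)
print("median relative error per organ [%]:",
      np.round(100 * np.median(rel_errors, axis=0), 2))
```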

  11. Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V_j, and the jet width d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  12. Partners in Self-Sufficiency Guidebook.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This guidebook is for community leaders who are implementing the Federal Partners in Self-Sufficiency (PS-S) program, a community-based approach to service delivery that helps families get off welfare. The program offers a comprehensive package of services including housing, education, child care, transportation, counseling, and job training and…

  13. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enable the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.
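
    As a sketch of how an LF/HF ratio can be computed from a series of beat-to-beat intervals (whether the beats come from ECG R peaks or from SCG isovolumic moments): the interval series below is synthetic, and the 0.04-0.15 Hz / 0.15-0.40 Hz bands are common conventions rather than the paper's own processing code.

```python
# LF/HF ratio from beat-to-beat intervals (synthetic series; band limits are common conventions).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(300)        # synthetic beat intervals [s]
t_beats = np.cumsum(rr)

fs = 4.0                                          # resample the interval series to an even 4 Hz grid
t_even = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
rr_even = np.interp(t_even, t_beats, rr)

freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf_band = (freqs >= 0.04) & (freqs < 0.15)
hf_band = (freqs >= 0.15) & (freqs < 0.40)
lf = np.trapz(psd[lf_band], freqs[lf_band])
hf = np.trapz(psd[hf_band], freqs[hf_band])
print("LF/HF =", lf / hf)
```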

  14. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops.

    PubMed

    Bengochea-Guevara, José M; Andújar, Dionisio; Sanchez-Sardana, Francisco L; Cantuña, Karla; Ribeiro, Angela

    2017-12-24

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, "on ground crop inspection" potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. "On ground monitoring" is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows.

  15. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops

    PubMed Central

    Andújar, Dionisio; Sanchez-Sardana, Francisco L.; Cantuña, Karla

    2017-01-01

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, “on ground crop inspection” potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. “On ground monitoring” is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows. PMID:29295536

  16. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
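
    A toy illustration of an ensemble-averaged harmonic restraint of the kind described (restraining only the ensemble mean, so individual copies stay mobile) is given below; the force constant, coordinates and reference are invented, and the published protocol restrains against crystallographic data rather than a plain coordinate target.

```python
# Toy ensemble-average positional restraint (illustrative; the published protocol restrains
# the ensemble average against crystallographic data, not a plain coordinate target).
import numpy as np

def ensemble_restraint(coords, target, k=10.0):
    """coords: (n_copies, n_atoms, 3); target: (n_atoms, 3).
    Only the ensemble MEAN is restrained, so individual copies fluctuate freely."""
    mean = coords.mean(axis=0)
    diff = mean - target
    energy = 0.5 * k * np.sum(diff ** 2)
    # dE/d(coords[c]) = k * diff / n_copies, identical for every copy c.
    per_copy_force = -k * diff / coords.shape[0]
    forces = np.broadcast_to(per_copy_force, coords.shape).copy()
    return energy, forces

coords = np.random.default_rng(2).normal(size=(8, 5, 3))   # 8 copies of a 5-atom toy system
target = np.zeros((5, 3))
energy, forces = ensemble_restraint(coords, target)
print(round(energy, 3), forces.shape)
```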

  17. Effector Gene Suites in Some Soil Isolates of Fusarium oxysporum Are Not Sufficient Predictors of Vascular Wilt in Tomato.

    PubMed

    Jelinski, Nicolas A; Broz, Karen; Jonkers, Wilfried; Ma, Li-Jun; Kistler, H Corby

    2017-07-01

    Seventy-four Fusarium oxysporum soil isolates were assayed for known effector genes present in an F. oxysporum f. sp. lycopersici race 3 tomato wilt strain (FOL MN-25) obtained from the same fields in Manatee County, Florida. Based on the presence or absence of these genes, four haplotypes were defined, two of which represented 96% of the surveyed isolates. These two most common effector haplotypes contained either all or none of the assayed race 3 effector genes. We hypothesized that soil isolates with all surveyed effector genes, similar to FOL MN-25, would be pathogenic toward tomato, whereas isolates lacking all effectors would be nonpathogenic. However, inoculation experiments revealed that presence of the effector genes alone was not sufficient to ensure pathogenicity on tomato. Interestingly, a nonpathogenic isolate containing the full suite of unmutated effector genes (FOS 4-4) appears to have undergone a chromosomal rearrangement yet remains vegetatively compatible with FOL MN-25. These observations confirm the highly dynamic nature of the F. oxysporum genome and support the conclusion that pathogenesis among free-living populations of F. oxysporum is a complex process. Therefore, the presence of effector genes alone may not be an accurate predictor of pathogenicity among soil isolates of F. oxysporum.

  18. 'Necessary and sufficient' in biology is not necessarily necessary - confusions and erroneous conclusions resulting from misapplied logic in the field of biology, especially neuroscience.

    PubMed

    Yoshihara, Motojiro; Yoshihara, Motoyuki

    In this article, we describe an incorrect use of logic which involves the careless application of the 'necessary and sufficient' condition originally used in formal logic. This logical fallacy is causing frequent confusion in current biology, especially in neuroscience. In order to clarify this problem, we first dissect the structure of this incorrect logic (which we refer to as 'misapplied-N&S') to show how necessity and sufficiency in misapplied-N&S do not match each other. Potential pitfalls of utilizing misapplied-N&S are exemplified by cases such as the discrediting of command neurons and other potentially key neurons, the distorting of truth in optogenetic studies, and the wrongful justification of studies with little meaning. In particular, the use of the word 'sufficient' in optogenetics tends to generate misunderstandings by opening up multiple interpretations. To avoid the confusion caused by the misleading logic, we now recommend using 'indispensable and inducing' instead of 'necessary and sufficient.' However, we ultimately recommend fully articulating the limits of what our experiments suggest, not relying on such simple phrases. Only after this problem is fully understood and more rigorous language is demanded can we finally interpret experimental results in an accurate way.

  19. Sufficient and necessary condition of separability for generalized Werner states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng Dongling; Chen Jingling

    2009-02-15

    In a celebrated paper [Optics Communications 179, 447 (2000)], A.O. Pittenger and M.H. Rubin presented for the first time a sufficient and necessary condition of separability for the generalized Werner states. Inspired by their ideas, we generalized their method to a more general case. We obtain a sufficient and necessary condition for the separability of a specific class of states of an N-qudit (d-dimensional) system, namely the special generalized Werner state (SGWS): $W^{[d^N]}(v) = (1-v)\,I^{(N)}/d^N + v\,|\psi_d^N\rangle\langle\psi_d^N|$, where $|\psi_d^N\rangle = \sum_{i=0}^{d-1}\alpha_i\,|i\cdots i\rangle$ is an entangled pure state of the N-qudit system and the $\alpha_i$ satisfy two restrictions: (i) $\sum_{i=0}^{d-1}\alpha_i\alpha_i^{*} = 1$; (ii) the matrix $\frac{1}{d}\,(I^{(1)} + T\sum_{i\neq j}\alpha_i|i\rangle$

  20. Breast Volume Measurement by Recycling the Data Obtained From 2 Routine Modalities, Mammography and Magnetic Resonance Imaging.

    PubMed

    Itsukage, Shizu; Sowa, Yoshihiro; Goto, Mariko; Taguchi, Tetsuya; Numajiri, Toshiaki

    2017-01-01

    Objective: Preoperative prediction of breast volume is important in the planning of breast reconstructive surgery. In this study, we prospectively estimated the accuracy of measurement of breast volume using data from 2 routine modalities, mammography and magnetic resonance imaging, by comparison with volumes of mastectomy specimens. Methods: The subjects were 22 patients (24 breasts) who were scheduled to undergo total mastectomy for breast cancer. Preoperatively, magnetic resonance imaging volume measurement was performed using a medical imaging system and the mammographic volume was calculated using a previously proposed formula. Volumes of mastectomy specimens were measured intraoperatively using a method based on Archimedes' principle and Newton's third law. Results: The average breast volumes measured on magnetic resonance imaging and mammography were 318.47 ± 199.4 mL and 325.26 ± 217.36 mL, respectively. The correlation coefficients with mastectomy specimen volumes were 0.982 for magnetic resonance imaging and 0.911 for mammography. Conclusions: Breast volume measurement using magnetic resonance imaging was highly accurate but requires data analysis software. In contrast, breast volume measurement with mammography requires only a simple formula and is sufficiently accurate, although the accuracy was lower than that obtained with magnetic resonance imaging. These results indicate that mammography could be an alternative modality for breast volume measurement as a substitute for magnetic resonance imaging.

  1. Breast Volume Measurement by Recycling the Data Obtained From 2 Routine Modalities, Mammography and Magnetic Resonance Imaging

    PubMed Central

    Itsukage, Shizu; Goto, Mariko; Taguchi, Tetsuya; Numajiri, Toshiaki

    2017-01-01

    Objective: Preoperative prediction of breast volume is important in the planning of breast reconstructive surgery. In this study, we prospectively estimated the accuracy of measurement of breast volume using data from 2 routine modalities, mammography and magnetic resonance imaging, by comparison with volumes of mastectomy specimens. Methods: The subjects were 22 patients (24 breasts) who were scheduled to undergo total mastectomy for breast cancer. Preoperatively, magnetic resonance imaging volume measurement was performed using a medical imaging system and the mammographic volume was calculated using a previously proposed formula. Volumes of mastectomy specimens were measured intraoperatively using a method based on Archimedes’ principle and Newton's third law. Results: The average breast volumes measured on magnetic resonance imaging and mammography were 318.47 ± 199.4 mL and 325.26 ± 217.36 mL, respectively. The correlation coefficients with mastectomy specimen volumes were 0.982 for magnetic resonance imaging and 0.911 for mammography. Conclusions: Breast volume measurement using magnetic resonance imaging was highly accurate but requires data analysis software. In contrast, breast volume measurement with mammography requires only a simple formula and is sufficiently accurate, although the accuracy was lower than that obtained with magnetic resonance imaging. These results indicate that mammography could be an alternative modality for breast volume measurement as a substitute for magnetic resonance imaging. PMID:29308107

  2. [Vitamin-antioxidant sufficiency of winter sports athletes].

    PubMed

    Beketova, N A; Kosheleva, O V; Pereverzeva, O G; Vrzhesinskaia, O A; Kodentsova, V M; Solntseva, T N; Khanfer'ian, R A

    2013-01-01

    The sufficiency of 169 athletes (six disciplines: bullet shooting, biathlon, bobsleigh, skeleton, freestyle skiing, snowboarding) in vitamins A, E, C, B2, and beta-carotene was investigated in April-September 2013. All athletes (102 juniors, mean age--18.5 +/- 0.3 years, and 67 adult high-performance athletes, mean age--26.8 +/- 0.7 years) were sufficiently supplied with vitamin A (70.7 +/- 1.7 mcg/dl). Mean blood serum retinol level in biathletes was 15% higher than the upper limit of the norm (80 mcg/dl), while the median reached 90.9 mcg/dl. Blood serum level of tocopherols (1.22 +/- 0.03 mg/dl), ascorbic acid (1.06 +/- 0.03 mg/dl), riboflavin (7.1 +/- 0.4 ng/ml), and beta-carotene (25.1 +/- 1.7 mcg/dl) was within the normal range, but the incidence of insufficiency of vitamins E, C, B2, and carotenoid among athletes varied in the range of 0-25, 0-17, 15-67 and 42-75%, respectively. 95% of adults and 80% of junior athletes were sufficiently provided with vitamin E. Vitamin E level in blood serum of juniors involved in skeleton and biathlon was lower by 51 and 72% (p < 0.05) than in adult athletes. Vitamin A, C and B2, and beta-carotene blood serum levels did not significantly differ between junior and adult athletes. Women were better supplied with vitamins C, B2, and beta-carotene: a reduced blood serum level of these micronutrients in women was detected 2-3-fold less often (p < 0.10) than among men. Blood serum concentration of vitamin C (1.20 +/- 0.05 mg/dl) and beta-carotene (32.0 +/- 3.9 mcg/dl) in women was greater by 15 and 54% (p < 0.05) than in men. In general, the biathletes were better provided with vitamins compared with other athletes. The vast majority (80%) were optimally provided with all three antioxidants (beta-carotene and vitamins E and C). In other sports, the relative quantity of athletes sufficiently supplied with these essential nutrients did not exceed 56%. The quota of those supplied with all antioxidants among bullet shooters (31.1%) and

  3. Sufficient conditions for asymptotic stability and stabilization of autonomous fractional order systems

    NASA Astrophysics Data System (ADS)

    Lenka, Bichitra Kumar; Banerjee, Soumitro

    2018-03-01

    We discuss the asymptotic stability of autonomous linear and nonlinear fractional order systems where the state equations contain the same or different fractional orders lying between 0 and 2. First, we use the Laplace transform method to derive some sufficient conditions which ensure asymptotic stability of linear fractional order systems. Then, by using the obtained results and a linearization technique, a stability theorem is presented for autonomous nonlinear fractional order systems. Finally, we design a control strategy for stabilization of autonomous nonlinear fractional order systems, and apply the results to the chaotic fractional order Lorenz system in order to verify its effectiveness.
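
    For orientation, the best-known eigenvalue criterion of this type for the commensurate linear case (often attributed to Matignon) is quoted below; it is a standard result given here for context and is not necessarily identical to the conditions derived in the paper.

```latex
% Matignon-type criterion for the commensurate linear system D^{q} x(t) = A x(t), 0 < q < 2
% (standard result, quoted for context only):
\[
  x = 0 \text{ is asymptotically stable} \iff
  \bigl|\arg\lambda_i(A)\bigr| > \frac{q\pi}{2}
  \quad \text{for every eigenvalue } \lambda_i \text{ of } A .
\]
```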

  4. Role of sufficient phosphorus in biodiesel production from diatom Phaeodactylum tricornutum.

    PubMed

    Yu, Shi-Jin; Shen, Xiao-Fei; Ge, Huo-Qing; Zheng, Hang; Chu, Fei-Fei; Hu, Hao; Zeng, Raymond J

    2016-08-01

    In order to study the role of sufficient phosphorus (P) in biodiesel production by microalgae, Phaeodactylum tricornutum was cultivated in six different media treatments with combinations of nitrogen (N) sufficiency/deprivation and phosphorus sufficiency/limitation/deprivation. Profiles of N and P, biomass, and fatty acid (FA) content and composition were measured during a 7-day cultivation period. The results showed that the FA content in microalgae biomass was promoted by P deprivation. However, statistical analysis showed that FA productivity had no significant difference (p = 0.63, >0.05) under the treatments of N deprivation with P sufficiency (N-P) and N deprivation with P deprivation (N-P-), indicating that P sufficiency in N deprivation medium has little effect on increasing biodiesel productivity from P. tricornutum. It was also found that the P absorption in the N-P medium was 1.41 times higher than that in the N sufficiency and P sufficiency (NP) medium. N deprivation with P limitation (N-P-l) was the optimal treatment for producing biodiesel from P. tricornutum because of both the highest FA productivity and good biodiesel quality.

  5. Minimal sufficient positive-operator valued measure on a separable Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp

    We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.

  6. Speckle-interferometric measurement system of 3D deformation to obtain thickness changes of thin specimen under tensile loads

    NASA Astrophysics Data System (ADS)

    Kowarsch, Robert; Zhang, Jiajun; Sguazzo, Carmen; Hartmann, Stefan; Rembe, Christian

    2017-06-01

    The analysis of materials and geometries in tensile tests and the extraction of mechanical parameters is an important field in solid mechanics. In particular, the measurement of thickness changes is important to obtain accurate strain information for specimens under tensile loads. Current optical measurement methods comprising 3D digital image correlation enable thickness-change measurement only with nm-resolution. We present a phase-shifting electronic speckle-pattern interferometer in combination with a speckle-correlation technique to measure the 3D deformation. The phase shift for the interferometer is introduced by fast wavelength tuning of a visible diode laser by injection current. In a post-processing step, both measurements can be combined to reconstruct the 3D deformation. In this contribution, results of a 3D deformation measurement for a polymer membrane are presented. These measurements show sufficient resolution for the detection of 3D deformations of thin specimens in tensile tests. In future work we will address the thickness changes of thin specimens under tensile loads.

  7. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
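
    For readers unfamiliar with the first of the two formulations, the generic artificial compressibility system with dual (pseudo-)time stepping has the form below; this is the standard textbook form due to Chorin, shown for context rather than as this paper's exact discretisation.

```latex
% Artificial compressibility with dual time stepping (generic form, for context only):
\[
  \frac{\partial p}{\partial \tau} + \beta\,\frac{\partial u_j}{\partial x_j} = 0, \qquad
  \frac{\partial u_i}{\partial \tau} + \frac{\partial u_i}{\partial t}
  + \frac{\partial (u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i} + \nu\,\nabla^{2} u_i ,
\]
% where $\tau$ is pseudo-time, $t$ is physical time and $\beta$ is the artificial
% compressibility parameter. Subiterating in $\tau$ to convergence at each physical time
% step drives $\partial u_j/\partial x_j \to 0$, which is why a fast subiteration solver
% (e.g. GMRES-ILU(0)) is needed to retain time accuracy.
```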

  8. Accurate determinations of alpha(s) from realistic lattice QCD.

    PubMed

    Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J

    2005-07-29

    We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an $\mathcal{O}(a^2)$-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain $\alpha_{\overline{\mathrm{MS}}}^{(5)}(M_Z) = 0.1170(12)$.

  9. Accurate registration of temporal CT images for pulmonary nodules detection

    NASA Astrophysics Data System (ADS)

    Yan, Jichao; Jiang, Luan; Li, Qiang

    2017-02-01

    Interpretation of temporal CT images could help radiologists detect subtle interval changes between sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, an affine transformation was applied in the segmented lung region to obtain globally coarse-registered images. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration images. Thirdly, the Demons algorithm was performed to align the feature points extracted from the registered images in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% of cases achieved accurate registration based on subjective observation. The subtraction images of the reference images and the rigid and non-rigid registered images could effectively remove the normal structures (i.e. blood vessels) and retain the abnormalities (i.e. pulmonary nodules). This would be useful for the screening of lung cancer in our future study.
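
    A compressed SimpleITK sketch of an affine-then-Demons pipeline in the spirit of the scheme described is given below; the B-spline FFD and feature-point stages are omitted, and all file names and parameter values are placeholder assumptions.

```python
# Affine-then-Demons registration sketch with SimpleITK (illustrative only; the B-spline FFD
# and feature-point stages of the published pipeline are omitted, and all file names and
# parameter values are placeholder assumptions).
import SimpleITK as sitk

fixed = sitk.ReadImage("reference_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("follow_up_ct.nii.gz", sitk.sitkFloat32)

# Step 1: global affine registration for coarse alignment.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
affine_tx = reg.Execute(fixed, moving)
moving_affine = sitk.Resample(moving, fixed, affine_tx, sitk.sitkLinear, 0.0)

# Step 2: Demons deformable refinement of the affinely aligned image.
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)
displacement = demons.Execute(fixed, moving_affine)
registered = sitk.Resample(moving_affine, fixed,
                           sitk.DisplacementFieldTransform(displacement),
                           sitk.sitkLinear, 0.0)

# Subtraction image: aligned normal structures cancel, interval changes remain.
sitk.WriteImage(sitk.Subtract(fixed, registered), "subtraction.nii.gz")
```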

  10. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics for images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.
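
    One simple preprocessing step of the kind described (choosing the channel and contrast enhancement that best expose iris texture in visible light) can be sketched with OpenCV; the choice of the red channel and the CLAHE parameters are assumptions for illustration, not the paper's procedure.

```python
import cv2

def enhance_iris_texture(bgr_image):
    """Return a contrast-enhanced single-channel image of the iris region.

    For dark (heavily pigmented) irides imaged in visible light, the red channel
    often carries the most texture; CLAHE then boosts local contrast. Channel
    choice and CLAHE settings are illustrative, not the paper's.
    """
    red = bgr_image[:, :, 2]                      # OpenCV stores images as BGR
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(red)

# Usage (hypothetical file name):
# img = cv2.imread("iris_smartphone.jpg")
# enhanced = enhance_iris_texture(img)
```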

  11. Accurate and robust brain image alignment using boundary-based registration.

    PubMed

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.
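
    The core of a boundary-based cost can be sketched as follows: for each vertex of the surface extracted from the reference image, sample the input image a short distance inside and outside the boundary along the vertex normal and score the local contrast. The percent-contrast form and tanh penalty below follow the general description in the abstract; the exact weighting used by the published BBR implementation may differ, and the sampling function is an assumed callable.

```python
import numpy as np

def boundary_contrast_cost(sample_img, vertices, normals, d=1.5):
    """Simplified boundary-based registration cost (sketch, not FreeSurfer's bbregister).

    vertices, normals: (N, 3) arrays in the coordinate space of the input image.
    sample_img(points): assumed callable returning interpolated intensities at (N, 3) points.
    """
    inside = sample_img(vertices - d * normals)   # e.g. white-matter side
    outside = sample_img(vertices + d * normals)  # e.g. grey-matter side
    # Percent contrast across the tissue boundary at each vertex.
    q = 100.0 * (outside - inside) / (0.5 * (outside + inside))
    # Maximizing boundary contrast = minimizing a smooth, bounded penalty of -q;
    # tanh keeps individual outlier vertices from dominating the cost.
    return float(np.mean(1.0 + np.tanh(-0.5 * q)))
```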

  12. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
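
    A minimal sketch of the delta-learning idea behind this kind of ML correction: a kernel ridge regression model maps molecular descriptors to corrections of the semiempirical baseline (here applied to atomization enthalpies rather than to the OM2 parameters themselves). The descriptor choice, kernel and hyperparameters are placeholders, not the values used in the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

def fit_delta_model(X, y):
    """Fit a correction model: y = reference ab initio value - OM2 baseline.

    X: (n_molecules, n_features) molecular descriptors (e.g. sorted
    Coulomb-matrix eigenvalues); both arrays are assumed to be supplied.
    """
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1e-3)  # placeholder hyperparameters
    model.fit(X_train, y_train)
    mae = float(np.mean(np.abs(model.predict(X_test) - y_test)))
    # Corrected prediction for a new molecule = OM2 baseline + model.predict(descriptors)
    return model, mae
```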

  13. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  14. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
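
    The standard-addition calibration mentioned above reduces to a short calculation: the instrument response is regressed against the spiked (added) concentration, and the x-axis intercept magnitude gives the analyte concentration already present in the inherently polluted air. The numbers below are made-up placeholders, not data from the study.

```python
import numpy as np

# Hypothetical data: added benzene concentration (µg/m3) and GC-MS peak area.
added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
signal = np.array([120.0, 310.0, 505.0, 890.0, 1660.0])

slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope   # concentration in the unspiked sample (x-intercept magnitude)
print(f"slope = {slope:.1f} area per µg/m3, ambient concentration ≈ {c0:.1f} µg/m3")
```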

  15. Improving the Perception of Self-Sufficiency towards Creative Drama

    ERIC Educational Resources Information Center

    Pekdogan, Serpil; Korkmaz, Halil Ibrahim

    2016-01-01

    The purpose of this study is to investigate the effects of a Creative Drama Based Perception of Self-sufficiency Skills Training Program on the perception of self-sufficiency of 2nd-grade bachelor's degree students attending a preschool teacher training program. This is a quasi-experimental study. A total of 50 students were equally divided into…

  16. Sufficient conditions for uniqueness of the weak value

    NASA Astrophysics Data System (ADS)

    Dressel, J.; Jordan, A. N.

    2012-01-01

    We review and clarify the sufficient conditions for uniquely defining the generalized weak value as the weak limit of a conditioned average using the contextual values formalism introduced in Dressel, Agarwal and Jordan (2010 Phys. Rev. Lett. 104 240401). We also respond to criticism of our work by Parrott (arXiv:1105.4188v1) concerning a proposed counter-example to the uniqueness of the definition of the generalized weak value. The counter-example does not satisfy our prescription in the case of an underspecified measurement context. We show that when the contextual values formalism is properly applied to this example, a natural interpretation of the measurement emerges and the unique definition in the weak limit holds. We also prove a theorem regarding the uniqueness of the definition under our sufficient conditions for the general case. Finally, a second proposed counter-example by Parrott (arXiv:1105.4188v6) is shown not to satisfy the sufficiency conditions for the provided theorem.
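
    For context, the generalized weak value discussed here reduces, in the weak-measurement limit with preselected state |psi> and postselected state |phi>, to the standard textbook expression below; it is quoted only for orientation and is not a result specific to this paper.

```latex
A_w \;=\; \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
```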

  17. Reassessing Rogers' necessary and sufficient conditions of change.

    PubMed

    Watson, Jeanne C

    2007-09-01

    This article reviews the impact of Carl Rogers' postulate about the necessary and sufficient conditions of therapeutic change on the field of psychotherapy. It is proposed that his article (see record 2007-14630-002) made an impact in two ways; first, by acting as a spur to researchers to identify the active ingredients of therapeutic change; and, second, by providing guidelines for therapeutic practice. The role of the necessary and sufficient conditions in process-experiential therapy, an emotion-focused therapy for individuals, and their limitations in terms of research and practice are discussed. It is proposed that although the conditions are necessary and important in promoting clients' affect regulation, they do not take sufficient account of other moderating variables that affect clients' response to treatment and may need to be balanced with more structured interventions. Notwithstanding, Rogers highlighted a way of interacting with clients that is generally acknowledged as essential to effective psychotherapy practice. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  18. Assessing sufficiency of thermal riverscapes for resilient ...

    EPA Pesticide Factsheets

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by the difficulty both of mapping thermal regimes at sufficient spatial and temporal resolutions and of integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  19. Do we need 3D tube current modulation information for accurate organ dosimetry in chest CT? Protocols dose comparisons.

    PubMed

    Lopez-Rendon, Xochitl; Zhang, Guozhi; Coudyzer, Walter; Develter, Wim; Bosmans, Hilde; Zanca, Federica

    2017-11-01

    To compare the lung and breast dose associated with three chest protocols: standard, organ-based tube current modulation (OBTCM) and fast-speed scanning; and to estimate the error associated with organ dose when modelling the longitudinal (z-) TCM versus the 3D-TCM in Monte Carlo (MC) simulations for these three protocols. Five adult and three paediatric cadavers with different BMI were scanned. The CTDIvol of the OBTCM and the fast-speed protocols were matched to the patient-specific CTDIvol of the standard protocol. Lung and breast doses were estimated using MC simulations with both the z- and 3D-TCM simulated and compared between protocols. The fast-speed scanning protocol delivered the highest doses. A slight reduction in breast dose (up to 5.1%) was observed for two of the three female cadavers with the OBTCM in comparison to the standard protocol. For both adult and paediatric cadavers, using the z-TCM data only for organ dose estimation resulted in accuracy within 10.0% for the standard and fast-speed protocols, while relative dose differences were up to 15.3% for the OBTCM protocol. At identical CTDIvol values, the standard protocol delivered the lowest overall doses. Only for the OBTCM protocol is the 3D-TCM needed if accurate (<10.0%) organ dosimetry is desired. • The z-TCM information is sufficient for accurate dosimetry for standard protocols. • The z-TCM information is sufficient for accurate dosimetry for fast-speed scanning protocols. • For organ-based TCM schemes, the 3D-TCM information is necessary for accurate dosimetry. • At identical CTDIvol, the fast-speed scanning protocol delivered the highest doses. • Lung dose was higher with the organ-based TCM (XCare) protocol than with the standard protocol at identical CTDIvol.

  20. A new evaluation tool to obtain practice-based evidence of worksite health promotion programs.

    PubMed

    Dunet, Diane O; Sparling, Phillip B; Hersey, James; Williams-Piehota, Pamela; Hill, Mary D; Hanssen, Carl; Lawrenz, Frances; Reyes, Michele

    2008-10-01

    The Centers for Disease Control and Prevention developed the Swift Worksite Assessment and Translation (SWAT) evaluation method to identify promising practices in worksite health promotion programs. The new method complements research studies and evaluation studies of evidence-based practices that promote healthy weight in working adults. We used nationally recognized program evaluation standards of utility, feasibility, accuracy, and propriety as the foundation for our 5-step method: 1) site identification and selection, 2) site visit, 3) post-visit evaluation of promising practices, 4) evaluation capacity building, and 5) translation and dissemination. An independent, outside evaluation team conducted process and summative evaluations of SWAT to determine its efficacy in providing accurate, useful information and its compliance with evaluation standards. The SWAT evaluation approach is feasible in small and medium-sized workplace settings. The independent evaluation team judged SWAT favorably as an evaluation method, noting among its strengths its systematic and detailed procedures and service orientation. Experts in worksite health promotion evaluation concluded that the data obtained by using this evaluation method were sufficient to allow them to make judgments about promising practices. SWAT is a useful, business-friendly approach to systematic, yet rapid, evaluation that comports with program evaluation standards. The method provides a new tool to obtain practice-based evidence of worksite health promotion programs that help prevent obesity and, more broadly, may advance public health goals for chronic disease prevention and health promotion.

  1. Quantitative Phase Microscopy for Accurate Characterization of Microlens Arrays

    NASA Astrophysics Data System (ADS)

    Grilli, Simonetta; Miccio, Lisa; Merola, Francesco; Finizio, Andrea; Paturzo, Melania; Coppola, Sara; Vespini, Veronica; Ferraro, Pietro

    Microlens arrays are of fundamental importance in a wide variety of applications in optics and photonics. This chapter deals with an accurate digital holography-based characterization of both liquid and polymeric microlenses fabricated by an innovative pyro-electrowetting process. The actuation of liquid and polymeric films is obtained through the use of pyroelectric charges generated into polar dielectric lithium niobate crystals.

  2. Behavioral Interventions to Advance Self-Sufficiency

    ERIC Educational Resources Information Center

    MDRC, 2017

    2017-01-01

    As the first major effort to use a behavioral economics lens to examine human services programs that serve poor and vulnerable families in the United States, the Behavioral Interventions to Advance Self-Sufficiency (BIAS) project demonstrated the value of applying behavioral insights to improve the efficacy of human services programs. The BIAS…

  3. Standardization of a fluconazole bioassay and correlation of results with those obtained by high-pressure liquid chromatography.

    PubMed Central

    Rex, J H; Hanson, L H; Amantea, M A; Stevens, D A; Bennett, J E

    1991-01-01

    An improved bioassay for fluconazole was developed. This assay is sensitive in the clinically relevant range (2 to 40 micrograms/ml) and analyzes plasma, serum, and cerebrospinal fluid specimens; bioassay results correlate with results obtained by high-pressure liquid chromatography (HPLC). Bioassay and HPLC analyses of spiked plasma, serum, and cerebrospinal fluid samples (run as unknowns) gave good agreement with expected values. Analysis of specimens from patients gave equivalent results by both HPLC and bioassay. HPLC had a lower within-run coefficient of variation (less than 2.5% for HPLC versus less than 11% for bioassay) and a lower between-run coefficient of variation (less than 5% versus less than 12% for bioassay) and was more sensitive (lower limit of detection, 0.1 micrograms/ml [versus 2 micrograms/ml for bioassay]). The bioassay is, however, sufficiently accurate and sensitive for clinical specimens, and its relative simplicity, low sample volume requirement, and low equipment cost should make it the technique of choice for analysis of routine clinical specimens. PMID:1854166

  4. Food Self-Sufficiency across scales: How local can we go?

    NASA Astrophysics Data System (ADS)

    Pradhan, Prajal; Lüdeke, Matthias K. B.; Reusser, Dominik E.; Kropp, Jürgen P.

    2013-04-01

    "Think global, act local" is a phrase often used in sustainability debates. Here, we explore the potential of regions to go for local supply in context of sustainable food consumption considering both the present state and the plausible future scenarios. We analyze data on the gridded crop calories production, the gridded livestock calories production, the gridded feed calories use and the gridded food calories consumption in 5' resolution. We derived these gridded data from various sources: Global Agro-ecological Zone (GAEZ v3.0), Gridded Livestock of the World (GLW), FAOSTAT, and Global Rural-Urban Mapping Project (GRUMP). For scenarios analysis, we considered changes in population, dietary patterns and possibility of obtaining the maximum potential yield. We investigate the food self-sufficiency multiple spatial scales. We start from the 5' resolution (i.e. around 10 km x 10 km in the equator) and look at 8 levels of aggregation ranging from the plausible lowest administrative level to the continental level. Results for the different spatial scales show that about 1.9 billion people live in the area of 5' resolution where enough calories can be produced to sustain their food consumption and the feed used. On the country level, about 4.4 billion population can be sustained without international food trade. For about 1 billion population from Asia and Africa, there is a need for cross-continental food trade. However, if we were able to achieve the maximum potential crop yield, about 2.6 billion population can be sustained within their living area of 5' resolution. Furthermore, Africa and Asia could be food self-sufficient by achieving their maximum potential crop yield and only round 630 million populations would be dependent on the international food trade. However, the food self-sufficiency status might differ under consideration of the future change in population, dietary patterns and climatic conditions. We provide an initial approach for investigating the

  5. Food self-sufficiency across scales: how local can we go?

    PubMed

    Pradhan, Prajal; Lüdeke, Matthias K B; Reusser, Dominik E; Kropp, Juergen P

    2014-08-19

    This study explores the potential for regions to shift to a local food supply using food self-sufficiency (FSS) as an indicator. We considered a region food self-sufficient when its total calorie production is enough to meet its demand. For future scenarios, we considered population growth, dietary changes, improved feed conversion efficiency, climate change, and crop yield increments. Starting at the 5' resolution, we investigated FSS from the lowest administrative levels to continents. Globally, about 1.9 billion people are self-sufficient within their 5' grid, while about 1 billion people from Asia and Africa require cross-continental agricultural trade in 2000. By closing yield gaps, these regions can achieve FSS, which also reduces international trade and increases a self-sufficient population in a 5' grid to 2.9 billion. The number of people depending on international trade will vary between 1.5 and 6 billion by 2050. Climate change may increase the need for international agricultural trade by 4% to 16%.
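
    The self-sufficiency indicator used in both of these studies amounts to a per-region ratio of calorie production to calorie demand (food consumption plus feed use); a gridded version can be sketched in a few lines, with the aggregation level left to the user. Array names are placeholders for the datasets described above.

```python
import numpy as np

def food_self_sufficiency(crop_kcal, livestock_kcal, food_kcal, feed_kcal):
    """Return a boolean grid: True where local calorie production covers local demand.

    All inputs are 2-D arrays on the same 5-arcminute grid (placeholder names);
    aggregating the arrays first gives the same indicator at coarser levels.
    """
    production = crop_kcal + livestock_kcal
    demand = food_kcal + feed_kcal
    with np.errstate(divide="ignore", invalid="ignore"):
        fss_ratio = np.where(demand > 0, production / demand, np.inf)
    return fss_ratio >= 1.0
```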

  6. Accurate beacon positioning method for satellite-to-ground optical communication.

    PubMed

    Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing

    2017-12-11

    In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main influencing factors on the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon, which takes the correlation coefficient obtained by curve fitting for image data as weights. By performing a long distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for the distortion of the light spot through atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
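
    A minimal sketch of a correlation-weighted centroid of the kind described: each pixel in the spot region is weighted by the goodness of a local curve fit (e.g. a Gaussian-fit correlation coefficient mapped onto the region). How the fit weights are constructed is an assumption about the algorithm's structure, not the paper's exact procedure; with unit weights the formula reduces to the conventional grey centroid.

```python
import numpy as np

def weighted_centroid(image, weights):
    """Centroid of `image` with per-pixel weights (e.g. curve-fit correlation
    coefficients). With weights == 1 this is the conventional grey centroid."""
    w = image * weights
    total = w.sum()
    ys, xs = np.indices(image.shape)
    return (xs * w).sum() / total, (ys * w).sum() / total
```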

  7. 33 CFR 115.30 - Sufficiency of State authority for bridges.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for bridges. 115.30 Section 115.30 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES BRIDGE LOCATIONS AND CLEARANCES; ADMINISTRATIVE PROCEDURES § 115.30 Sufficiency of State authority for bridges. An opinion of the attorney general of the State as to the sufficiency of State...

  8. 33 CFR 115.30 - Sufficiency of State authority for bridges.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for bridges. 115.30 Section 115.30 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES BRIDGE LOCATIONS AND CLEARANCES; ADMINISTRATIVE PROCEDURES § 115.30 Sufficiency of State authority for bridges. An opinion of the attorney general of the State as to the sufficiency of State...

  9. Accurate analytical periodic solution of the elliptical Kepler equation using the Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Alshaery, Aisha; Ebaid, Abdelhalim

    2017-11-01

    Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we will solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrated a rapid convergence of the obtained approximate solutions, which are displayed in tables and graphs. Also, it has been shown in this paper that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine-term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
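
    Kepler's equation M = E - e·sin E has no closed-form inverse, which is why series and decomposition approaches like the one above are of interest. A standard Newton iteration (not the ADM itself, but a common numerical reference for checking such approximate solutions) is sketched below; the starting-guess rule and tolerance are conventional choices.

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E."""
    E = M if e < 0.8 else math.pi          # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Example: Earth's orbital eccentricity is about 0.0167.
E = eccentric_anomaly(M=1.0, e=0.0167)
r_au = 1.0 * (1.0 - 0.0167 * math.cos(E))   # radial distance r = a(1 - e cos E), a = 1 AU
```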

  10. 20 CFR 404.1617 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  11. 20 CFR 416.1017 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  12. 20 CFR 404.1617 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  13. 20 CFR 416.1017 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  14. 20 CFR 416.1017 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  15. 20 CFR 404.1617 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  16. 20 CFR 416.1017 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  17. 20 CFR 404.1617 - Reasonable efforts to obtain review by a qualified psychiatrist or psychologist.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...)). Where it does not have sufficient resources to make the necessary reviews, the State agency must attempt to obtain the resources needed. If the State agency is unable to obtain additional psychiatrists and psychologists because of low salary rates or fee schedules it should attempt to raise the State agency's levels...

  18. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
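
    The final recovery step relies on the standard through-thickness integration of the 3-D equilibrium equations (body forces neglected): once smoothed in-plane stresses and their gradients are available, the transverse shear stresses follow as shown below. This is the textbook form of the integration, quoted for orientation rather than the paper's exact notation.

```latex
\sigma_{xz}(z) \;=\; -\int_{-h/2}^{z}\!\left(\frac{\partial \sigma_{xx}}{\partial x}
      + \frac{\partial \sigma_{xy}}{\partial y}\right)\mathrm{d}\bar{z},
\qquad
\sigma_{yz}(z) \;=\; -\int_{-h/2}^{z}\!\left(\frac{\partial \sigma_{xy}}{\partial x}
      + \frac{\partial \sigma_{yy}}{\partial y}\right)\mathrm{d}\bar{z}.
```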

  19. Quantification is Neither Necessary Nor Sufficient for Measurement

    NASA Astrophysics Data System (ADS)

    Mari, Luca; Maul, Andrew; Torres Irribarra, David; Wilson, Mark

    2013-09-01

    Being an infrastructural, widespread activity, measurement is laden with stereotypes. Some of these concern the role of measurement in the relation between quality and quantity. In particular, it is sometimes argued or assumed that quantification is necessary for measurement; it is also sometimes argued or assumed that quantification is sufficient for or synonymous with measurement. To assess the validity of these positions the concepts of measurement and quantitative evaluation should be independently defined and their relationship analyzed. We contend that the defining characteristic of measurement should be the structure of the process, not a feature of its results. Under this perspective, quantitative evaluation is neither sufficient nor necessary for measurement.

  20. GHM method for obtaining rational solutions of nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.

  1. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring has been proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for the validity of this pinhole ring diffraction model. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method could be used for fast and accurate focusing analysis of a large photon sieve.

  2. Pelvic orientation for total hip arthroplasty in lateral decubitus: can it be accurately measured?

    PubMed

    Sykes, Alice M; Hill, Janet C; Orr, John F; Gill, Harinderjit S; Salazar, Jose J; Humphreys, Lee D; Beverland, David E

    2016-05-16

    During total hip arthroplasty (THA), accurately predicting acetabular cup orientation remains a key challenge, in great part because of uncertainty about pelvic orientation. This pilot study aimed to develop and validate a technique to measure pelvic orientation, establish its accuracy in locating anatomical landmarks and, subsequently, investigate whether limb movement during a simulated surgical procedure alters pelvic orientation. The developed technique first measured the 3-D orientation of an isolated Sawbone pelvis and was then implemented to measure pelvic orientation in lateral decubitus in post-THA patients (n = 20) using a motion capture system. Orientation of the isolated Sawbone pelvis was accurately measured, demonstrated by high correlations with angular data from a coordinate measurement machine, with R-squared values close to 1 for all pelvic axes. When applied to volunteer subjects, the largest movements occurred about the longitudinal pelvic axis, i.e. internal and external pelvic rotation. Rotations about the anteroposterior axis, which directly affect inclination angles, showed that >75% of participants had movement within ±5° of neutral (0°). The technique accurately measured orientation of the isolated bony pelvis. This was not the case in a simulated theatre environment. Soft tissue landmarks were difficult to palpate repeatedly. These findings have direct clinical relevance: landmark registration in lateral decubitus is a potential source of error, contributing here to large ranges in measured movement. Surgeons must be aware that present techniques using bony landmarks to reference pelvic orientation for cup implantation, both computer-based and mechanical, may not be sufficiently accurate.

  3. The dynamic simulation model of soybean in Central Java to support food self sufficiency: A supply chain perspective

    NASA Astrophysics Data System (ADS)

    Oktyajati, Nancy; Hisjam, Muh.; Sutopo, Wahyudi

    2018-02-01

    Since food is one of the basic human needs, food sufficiency is of great importance. Sufficiency of the soybean commodity in Central Java still depends on imported soybean, because there is a large gap between local soybean production and demand. In 2016 the shortage of soybean supply was 68.79%. Soybean is an important and strategic commodity after rice and corn. The increasing consumption of soybean is related to the increasing population, increasing incomes and changing healthy lifestyles. The aims of this study are to determine a soybean dynamic model based on a supply chain perspective, to define the proper price of local soybean to trigger an increase in local production, and to define alternative solutions to support food self-sufficiency. This study captures the real conditions in a dynamic model and then simulates a series of scenarios in a computer program to obtain the best results. Two scenarios are considered: one with a government intervention policy and one without. The best alternative can be used by the government as a consideration for policy. The results of the proposed scenarios showed that soybean self-sufficiency can be achieved within the next 20 years by increasing the planting area by 4% and land productivity by 1% per year.
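
    A toy projection of the kind a system dynamics model formalizes, using only the growth rates quoted in this abstract (planting area +4% per year, land productivity +1% per year); the initial coverage ratio and demand growth are placeholders left to the user, not the study's data, and the full model in the paper contains many more feedbacks.

```python
def years_to_self_sufficiency(coverage0, area_growth=0.04, yield_growth=0.01,
                              demand_growth=0.0, horizon=50):
    """Years until local production covers demand, under simple compound growth.

    coverage0: initial production/demand ratio (placeholder input). Growth
    rates for planting area and productivity follow the abstract's scenario;
    demand growth is an assumption, not a value from the study.
    """
    coverage, year = coverage0, 0
    while coverage < 1.0 and year < horizon:
        coverage *= (1 + area_growth) * (1 + yield_growth) / (1 + demand_growth)
        year += 1
    return year

# Usage with a hypothetical starting ratio, e.g. years_to_self_sufficiency(0.31)
```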

  4. A combined system of microbial fuel cell and intermittently aerated biological filter for energy self-sufficient wastewater treatment.

    PubMed

    Dong, Yue; Feng, Yujie; Qu, Youpeng; Du, Yue; Zhou, Xiangtong; Liu, Jia

    2015-12-15

    Energy self-sufficiency is a highly desirable goal of sustainable wastewater treatment. Herein, a combined system of a microbial fuel cell and an intermittently aerated biological filter (MFC-IABF) was designed and operated in an energy self-sufficient manner. The system was fed with synthetic wastewater (COD = 1000 mg L(-1)) in continuous mode for more than 3 months at room temperature (~25 °C). Voltage output was increased to 5 ± 0.4 V using a capacitor-based circuit. The MFC produced electricity to power the pumping and aeration systems in IABF, concomitantly removing COD. The IABF operating under an intermittent aeration mode (aeration rate 1000 ± 80 mL h(-1)) removed the residual nutrients and improved the water quality at HRT = 7.2 h. This two-stage combined system obtained 93.9% SCOD removal and 91.7% TCOD removal (effluent SCOD = 61 mg L(-1), TCOD = 82.8 mg L(-1)). Energy analysis indicated that the MFC unit produced sufficient energy (0.27 kWh m(-3)) to support the pumping system (0.014 kWh m(-3)) and aeration system (0.22 kWh m(-3)). These results demonstrated that the combined MFC-IABF system could be operated in an energy self-sufficient manner, resulting to high-quality effluent.

  5. A combined system of microbial fuel cell and intermittently aerated biological filter for energy self-sufficient wastewater treatment

    PubMed Central

    Dong, Yue; Feng, Yujie; Qu, Youpeng; Du, Yue; Zhou, Xiangtong; Liu, Jia

    2015-01-01

    Energy self-sufficiency is a highly desirable goal of sustainable wastewater treatment. Herein, a combined system of a microbial fuel cell and an intermittently aerated biological filter (MFC-IABF) was designed and operated in an energy self-sufficient manner. The system was fed with synthetic wastewater (COD = 1000 mg L−1) in continuous mode for more than 3 months at room temperature (~25 °C). Voltage output was increased to 5 ± 0.4 V using a capacitor-based circuit. The MFC produced electricity to power the pumping and aeration systems in IABF, concomitantly removing COD. The IABF operating under an intermittent aeration mode (aeration rate 1000 ± 80 mL h−1) removed the residual nutrients and improved the water quality at HRT = 7.2 h. This two-stage combined system obtained 93.9% SCOD removal and 91.7% TCOD removal (effluent SCOD = 61 mg L−1, TCOD = 82.8 mg L−1). Energy analysis indicated that the MFC unit produced sufficient energy (0.27 kWh m−3) to support the pumping system (0.014 kWh m−3) and aeration system (0.22 kWh m−3). These results demonstrated that the combined MFC-IABF system could be operated in an energy self-sufficient manner, resulting to high-quality effluent. PMID:26666392

  6. The effect of changes to question order on the prevalence of 'sufficient' physical activity in an Australian population survey.

    PubMed

    Hanley, Christine; Duncan, Mitch J; Mummery, W Kerry

    2013-03-01

    Population surveys are frequently used to assess prevalence, correlates and health benefits of physical activity. However, nonsampling errors, such as question order effects, in surveys may lead to imprecision in self-reported physical activity. This study examined the impact of modified question order in a commonly used physical activity questionnaire on the prevalence of sufficient physical activity. Data were obtained from a telephone survey of adults living in Queensland, Australia. A total of 1243 adults participated in the computer-assisted telephone interview (CATI) survey conducted in July 2008, which included the Active Australia Questionnaire (AAQ) presented in traditional or modified order. Binary logistic regression analysis was used to examine relationships between question order and physical activity outcomes. Significant relationships were found between question order and sufficient activity, recreational walking, moderate activity, vigorous activity, and total activity. Respondents who received the AAQ in modified order were more likely to be categorized as sufficiently active (OR = 1.28, 95% CI 1.01-1.60). This study highlights the importance of question order on estimates of self-reported physical activity. This study has shown that changes in question order can lead to an increase in the proportion of participants classified as sufficiently active.

  7. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
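
    To make the 'transport of calibration' idea concrete, consider a bias-only model y = x + b and a frame pair related by a pure one-pixel horizontal shift: the same scene sample is then seen by horizontally adjacent detectors, so bias differences can be read off directly and chained inward from the absolutely calibrated perimeter column. This is a deliberately simplified sketch of the algebraic scene-based step under those assumptions, not the published algorithm, which handles arbitrary global motion.

```python
import numpy as np

def propagate_bias(frame1, frame2, edge_bias):
    """Estimate per-pixel bias for a bias-only nonuniformity model.

    Assumes frame2 is frame1 shifted right by exactly one pixel, so that
    frame1[i, j] and frame2[i, j + 1] observe the same scene sample:
        b[i, j + 1] - b[i, j] = frame2[i, j + 1] - frame1[i, j].
    edge_bias: calibrated biases of the left-most (perimeter) detector column.
    """
    rows, cols = frame1.shape
    bias = np.zeros((rows, cols))
    bias[:, 0] = edge_bias
    for j in range(cols - 1):
        bias[:, j + 1] = bias[:, j] + (frame2[:, j + 1] - frame1[:, j])
    return bias
```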

  8. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  9. Accurate inclusion mass screening: a bridge from unbiased discovery to targeted assay development for biomarker verification.

    PubMed

    Jaffe, Jacob D; Keshishian, Hasmik; Chang, Betty; Addona, Theresa A; Gillette, Michael A; Carr, Steven A

    2008-10-01

    Verification of candidate biomarker proteins in blood is typically done using multiple reaction monitoring (MRM) of peptides by LC-MS/MS on triple quadrupole MS systems. MRM assay development for each protein requires significant time and cost, much of which is likely to be of little value if the candidate biomarker is below the detection limit in blood or a false positive in the original discovery data. Here we present a new technology, accurate inclusion mass screening (AIMS), designed to provide a bridge from unbiased discovery to MS-based targeted assay development. Masses on the software inclusion list are monitored in each scan on the Orbitrap MS system, and MS/MS spectra for sequence confirmation are acquired only when a peptide from the list is detected with both the correct accurate mass and charge state. The AIMS experiment confirms that a given peptide (and thus the protein from which it is derived) is present in the plasma. Throughput of the method is sufficient to qualify up to a hundred proteins/week. The sensitivity of AIMS is similar to MRM on a triple quadrupole MS system using optimized sample preparation methods (low tens of ng/ml in plasma), and MS/MS data from the AIMS experiments on the Orbitrap can be directly used to configure MRM assays. The method was shown to be at least 4-fold more efficient at detecting peptides of interest than undirected LC-MS/MS experiments using the same instrumentation, and relative quantitation information can be obtained by AIMS in case versus control experiments. Detection by AIMS ensures that a quantitative MRM-based assay can be configured for that protein. The method has the potential to qualify large number of biomarker candidates based on their detection in plasma prior to committing to the time- and resource-intensive steps of establishing a quantitative assay.
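
    The gating logic of the AIMS acquisition can be reduced to an accurate-mass (and charge-state) match against the inclusion list within a ppm tolerance; the sketch below shows that test in isolation, with the tolerance value and the example entries as illustrative assumptions.

```python
def matches_inclusion_list(mz_obs, z_obs, inclusion, tol_ppm=10.0):
    """True if an observed m/z and charge state match any (m/z, z) entry on the list."""
    for mz_target, z_target in inclusion:
        if z_obs == z_target and abs(mz_obs - mz_target) / mz_target * 1e6 <= tol_ppm:
            return True   # trigger MS/MS for sequence confirmation
    return False

# Example with hypothetical peptide entries:
# matches_inclusion_list(523.7749, 2, [(523.7745, 2), (744.8412, 2)])
```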

  10. Energy self-sufficient sewage wastewater treatment plants: is optimized anaerobic sludge digestion the key?

    PubMed

    Jenicek, P; Kutil, J; Benes, O; Todt, V; Zabranska, J; Dohanyos, M

    2013-01-01

    The anaerobic digestion of primary and waste activated sludge generates biogas that can be converted into energy to power the operation of a sewage wastewater treatment plant (WWTP). But can the biogas generated by anaerobic sludge digestion ever completely satisfy the electricity requirements of a WWTP with 'standard' energy consumption (i.e. industrial pollution not treated, no external organic substrate added)? With this question in mind, we optimized biogas production at Prague's Central Wastewater Treatment Plant in the following ways: enhanced primary sludge separation; thickened waste activated sludge; implemented a lysate centrifuge; increased operational temperature; improved digester mixing. With these optimizations, biogas production increased significantly to 12.5 m(3) per population equivalent per year. In turn, this led to an equally significant increase in specific energy production from approximately 15 to 23.5 kWh per population equivalent per year. We compared these full-scale results with those obtained from WWTPs that are already energy self-sufficient, but have exceptionally low energy consumption. Both our results and our analysis suggest that, with the correct optimization of anaerobic digestion technology, even WWTPs with 'standard' energy consumption can either attain or come close to attaining energy self-sufficiency.

  11. Nature exposure sufficiency and insufficiency: The benefits of environmental preservation.

    PubMed

    Reddon, John R; Durante, Salvatore B

    2018-01-01

    Increasing industrialization, urbanization, and a failure of many world leaders to appreciate the consequences of climate change are deleteriously impacting quality of life as well as diminishing the prospects for long term survival. Economic competitiveness and corporate profitability often pre-empt environmental concerns. The calving of an iceberg in Antarctica and the hurricane activity in the Caribbean during 2017 are unfortunate illustrations of the continuing escalation of environmental issues. We provide historical and current evidence for the importance of Nature Exposure (NE) and introduce the continuum Nature Exposure Sufficiency (NES) and Insufficiency (NEI). Insufficiency includes impoverished environments (e.g., slums and prisons) where nature exposure is very limited. Nature Exposure Sufficiency (NES) is an optimal amount of exposure to nature where many benefits such as reinvigoration can be obtained by everyone. NES also has several benefits for individuals with various health conditions such as arthritis, dementia, or depression. The benefits of NE are not just derivable from parks, forests, and other natural settings. Interiors of buildings and homes can be enhanced with plants and even pictures or objects from nature. Additionally, there is abundant evidence indicating that virtual and artificial environments depicting nature can provide substantial NE and therefore contribute to general wellbeing. Besides the difficulty in achieving cooperation amongst nations, corporations, and other collectives in developing and implementing long range plans to deal with climate change, there is also sometimes an aversion at the individual level whereby people are unwilling to experience nature due to insects and other discomforts. Such individuals are often averse to supplanting the comforts of home, even temporarily, with inadequate facilities that are seemingly less pleasant than their typical dwellings. We propose using the term Nature Exposure Aversion

  12. Accurate Encoding and Decoding by Single Cells: Amplitude Versus Frequency Modulation

    PubMed Central

    Micali, Gabriele; Aquino, Gerardo; Richards, David M.; Endres, Robert G.

    2015-01-01

    Cells sense external concentrations and, via biochemical signaling, respond by regulating the expression of target proteins. Both in signaling networks and gene regulation there are two main mechanisms by which the concentration can be encoded internally: amplitude modulation (AM), where the absolute concentration of an internal signaling molecule encodes the stimulus, and frequency modulation (FM), where the period between successive bursts represents the stimulus. Although both mechanisms have been observed in biological systems, the question of when it is beneficial for cells to use either AM or FM is largely unanswered. Here, we first consider a simple model for a single receptor (or ion channel), which can either signal continuously whenever a ligand is bound, or produce a burst in signaling molecule upon receptor binding. We find that bursty signaling is more accurate than continuous signaling only for sufficiently fast dynamics. This suggests that modulation based on bursts may be more common in signaling networks than in gene regulation. We then extend our model to multiple receptors, where continuous and bursty signaling are equivalent to AM and FM respectively, finding that AM is always more accurate. This implies that the reason some cells use FM is related to factors other than accuracy, such as the ability to coordinate expression of multiple genes or to implement threshold crossing mechanisms. PMID:26030820

  13. Predictive sufficiency and the use of stored internal state

    NASA Technical Reports Server (NTRS)

    Musliner, David J.; Durfee, Edmund H.; Shin, Kang G.

    1994-01-01

    In all embedded computing systems, some delay exists between sensing and acting. By choosing an action based on sensed data, a system is essentially predicting that there will be no significant changes in the world during this delay. However, the dynamic and uncertain nature of the real world can make these predictions incorrect, and thus, a system may execute inappropriate actions. Making systems more reactive by decreasing the gap between sensing and action leaves less time for predictions to err, but still provides no principled assurance that they will be correct. Using the concept of predictive sufficiency described in this paper, a system can prove that its predictions are valid, and that it will never execute inappropriate actions. In the context of our CIRCA system, we also show how predictive sufficiency allows a system to guarantee worst-case response times to changes in its environment. Using predictive sufficiency, CIRCA is able to build real-time reactive control plans which provide a sound basis for performance guarantees that are unavailable with other reactive systems.

  14. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
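
    The 'exponentially-correlated noise model' referred to here is, in its standard discrete form, a first-order Gauss-Markov process appended to the filter state. Written for a scalar field-model error m with correlation time tau and steady-state variance sigma^2 (generic textbook form, not the paper's specific parameterization), it reads

```latex
m_{k+1} \;=\; e^{-\Delta t/\tau}\, m_k \;+\; w_k,
\qquad
w_k \sim \mathcal{N}\!\left(0,\; \sigma^{2}\bigl(1 - e^{-2\Delta t/\tau}\bigr)\right).
```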

  15. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to suppress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm, specifically designed to address the challenge of accuracy and diversity in CF. Numerical results for two benchmark data sets, MovieLens and Netflix, show that the new algorithm outperforms state-of-the-art CF algorithms in accuracy. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
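
    The degree asymmetry described above can be made concrete with a toy similarity matrix. The sketch below is not the HDCF algorithm itself; it only illustrates, under one simple common-neighbour definition assumed here, why the similarity pointing from a small-degree user to a large-degree user exceeds the similarity in the opposite direction.

```python
import numpy as np

# Toy user-item matrix (rows = users, columns = items), purely illustrative.
A = np.array([
    [1, 1, 1, 1, 1, 1],   # large-degree user 0
    [1, 1, 0, 0, 0, 0],   # small-degree user 1
    [0, 1, 1, 0, 0, 0],   # small-degree user 2
])
degree = A.sum(axis=1)

# One possible directed similarity: common items normalised by the *source*
# user's degree, s[i, j] = |items(i) ∩ items(j)| / k(i).  This is an assumed
# definition for illustration, not necessarily the one used in the paper.
overlap = A @ A.T
s = overlap / degree[:, np.newaxis]

print("s(small user 1 -> large user 0):", s[1, 0])   # 1.00
print("s(large user 0 -> small user 1):", s[0, 1])   # 0.33
```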

  16. Dorsal Hippocampal CREB Is Both Necessary and Sufficient for Spatial Memory

    ERIC Educational Resources Information Center

    Sekeres, Melanie J.; Neve, Rachael L.; Frankland, Paul W.; Josselyn, Sheena A.

    2010-01-01

    Although the transcription factor CREB has been widely implicated in memory, whether it is sufficient to produce spatial memory under conditions that do not normally support memory formation in mammals is unknown. We found that locally and acutely increasing CREB levels in the dorsal hippocampus using viral vectors is sufficient to induce robust…

  17. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  18. A review of the liquid metal diffusion data obtained from the space shuttle endeavour mission STS-47 and the space shuttle columbia mission STS-52

    NASA Astrophysics Data System (ADS)

    Shirkhanzadeh, Morteza

    Accurate data on liquid-phase solute diffusion coefficients are required to validate condensed-matter physics theories. However, the data accuracy required to discriminate between competing theoretical models is 1 to 2 percent (1). Smith and Scott (2) have recently used the measured values of diffusion coefficients for Pb-Au in microgravity to validate the theoretical values of the diffusion coefficients derived from molecular dynamics simulations and several Enskog hard sphere models. The microgravity data used were obtained from the liquid diffusion experiments conducted on board the Space Shuttle Endeavour (mission STS-47) and the Space Shuttle Columbia (mission STS-52). Based on the analysis of the results, it was claimed that the measured values of diffusion coefficients were consistent with the theoretical results and that the data fit a linear relationship with a slope slightly greater than predicted by the molecular dynamics simulations. These conclusions, however, contradict the claims made in previous publications (3-5), where it was reported that the microgravity data obtained from the shuttle experiments fit the fluctuation theory (D proportional to T^2). A thorough analysis of the data will be presented to demonstrate that the widely reported microgravity results obtained from shuttle experiments are not sufficiently reliable or accurate to discriminate between competing theoretical models. References: 1. J.P. Garandet, G. Mathiak, V. Botton, P. Lehmann and A. Griesche, Int. J. Thermophysics, 25, 249 (2004). 2. P.J. Scott and R.W. Smith, J. Appl. Physics 104, 043706 (2008). 3. R.W. Smith, Microgravity Sci. Technol. XI (2) 78-84 (1998). 4. Smith et al., Ann. N.Y. Acad. Sci. 974:56-67 (2002) (retracted). 5. R.A. Herring et al., J. Jpn. Soc. Microgravity Appl., Vol. 16, 234-244 (1999).

  19. Discrete sensors distribution for accurate plantar pressure analyses.

    PubMed

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared, to determine which was the most accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressure (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both the W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any other activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
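
    For reference, the barycenter of pressures from a set of discrete sensors is simply the pressure-weighted mean of the sensor coordinates. The snippet below is a generic illustration with made-up sensor positions and readings, not the W-inshoe® layout evaluated in the study.

```python
import numpy as np

# Hypothetical insole sensor coordinates (metres; ML and AP axes) and pressures (kPa).
positions = np.array([[0.02, 0.05],
                      [0.05, 0.08],
                      [0.03, 0.15],
                      [0.04, 0.22]])
pressures = np.array([40.0, 120.0, 80.0, 30.0])

# Barycenter of pressures (BoP): pressure-weighted mean of the sensor positions.
bop = (pressures[:, None] * positions).sum(axis=0) / pressures.sum()
print("BoP (ML, AP):", bop)
```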

  20. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experiments and discuss the sensitivity of our model to its parameters.
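
    The slope-limiting step mentioned above can take many forms; the snippet below sketches a classic minmod limiter applied to cell averages, one standard way of keeping a second-order scheme monotone near an elastic jump. It is a generic illustration, not the specific limiter used in the paper.

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude slope when the signs agree, otherwise zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u, dx):
    """Minmod-limited cell slopes from neighbouring cell averages (zero-gradient ends)."""
    fwd = np.zeros_like(u)
    bwd = np.zeros_like(u)
    fwd[:-1] = (u[1:] - u[:-1]) / dx   # forward differences
    bwd[1:] = (u[1:] - u[:-1]) / dx    # backward differences
    return minmod(fwd, bwd)

u = np.array([1.0, 0.9, 0.7, 0.2, 0.15, 0.1])   # cell averages with a steep drop
print(limited_slopes(u, dx=0.1))   # slopes are clipped to the smaller one-sided difference
```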

  1. Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination.

    PubMed

    Kitamura, Yutaka; Watanabe, Taisuke; Nakamura, Masayuki; Isobe, Kazushige; Kawabata, Hideo; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Tanaka, Takaaki; Kawase, Tomoyuki

    2018-01-01

    Platelet-rich fibrin (PRF) clots have been used in regenerative dentistry most often, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly by platelet counting in other liquid fractions. This study shows a method for direct estimation of platelet counts in PRF. To validate this method by determination of the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue-plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rate of platelets and white blood cells was 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. Therefore, we propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix.

  2. Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination

    PubMed Central

    Kitamura, Yutaka; Watanabe, Taisuke; Nakamura, Masayuki; Isobe, Kazushige; Kawabata, Hideo; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Tanaka, Takaaki; Kawase, Tomoyuki

    2018-01-01

    Platelet-rich fibrin (PRF) clots have been used in regenerative dentistry most often, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly by platelet counting in other liquid fractions. This study shows a method for direct estimation of platelet counts in PRF. To validate this method by determination of the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue-plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rate of platelets and white blood cells was 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. Therefore, we propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix. PMID:29450197

  3. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 millipascal seconds (mPa·s). The obtainable drop sizes range from 5 picoliters to a few nanoliters with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be well suited to the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  4. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
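
    The additivity claimed above can be illustrated numerically: keep only the order-two moment sums of the paired coordinates, merge the sums of two fragments by plain addition, and recover the least-squares rotation from the pooled statistics with an SVD (a Kabsch-style solution). This is a minimal sketch added for illustration, not the authors' code.

```python
import numpy as np

def suff_stats(X, Y):
    """Additive sufficient statistics of a paired vector set (rows are 3-D points)."""
    return {"n": len(X), "sx": X.sum(0), "sy": Y.sum(0), "sxy": X.T @ Y}

def combine(a, b):
    """Statistics of the union of two paired sets = element-wise sums."""
    return {k: a[k] + b[k] for k in a}

def superpose(st):
    """Least-squares rotation (Kabsch-style) computed from the statistics alone."""
    n, sx, sy, sxy = st["n"], st["sx"], st["sy"], st["sxy"]
    H = sxy - np.outer(sx, sy) / n              # centred cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation R with R @ x ≈ y (centred)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([1.0, -2.0, 0.5])   # rotated and translated copy

# Statistics of two fragments combine additively into those of the whole set.
pooled = combine(suff_stats(X[:12], Y[:12]), suff_stats(X[12:], Y[12:]))
print(np.allclose(superpose(pooled), R_true))   # True (up to round-off)
```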

  5. Necessary and sufficient conditions for the stability of a sleeping top described by three forms of dynamic equations

    NASA Astrophysics Data System (ADS)

    Ge, Zheng-Ming

    2008-04-01

    Necessary and sufficient conditions for the stability of a sleeping top are obtained for three descriptions of its dynamics: a six-state-variable formulation (the Euler and Poisson equations), a two-degree-of-freedom formulation (the Krylov equations), and a one-degree-of-freedom formulation (the nutation angle equation). The conditions are derived using the Lyapunov direct method, the Ge-Liu second instability theorem, an instability theorem, and a Ge-Yao-Chen partial region stability theorem, without resorting to first-approximation theory.

  6. Calculating accurate aboveground dry weight biomass of herbaceous vegetation in the Great Plains: A comparison of three calculations to determine the least resource intensive and most accurate method

    Treesearch

    Ben Butler

    2007-01-01

    Obtaining accurate biomass measurements is often a resource-intensive task. Data collection crews often spend large amounts of time in the field clipping, drying, and weighing grasses to calculate the biomass of a given vegetation type. Such a problem is currently occurring in the Great Plains region of the Bureau of Indian Affairs. A study looked at six reservations...

  7. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  8. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
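
    For reference, the Green–Kubo expression invoked above relates the shear viscosity to the time autocorrelation of an off-diagonal component of the pressure (stress) tensor; one standard form, written here in generic notation rather than the paper's, is

```latex
% Green-Kubo relation for the shear viscosity (V: volume, k_B T: thermal energy,
% P_xy: off-diagonal pressure-tensor component); notation is generic.
\eta \;=\; \frac{V}{k_B T}\int_{0}^{\infty}
\bigl\langle P_{xy}(0)\,P_{xy}(t)\bigr\rangle\,\mathrm{d}t
```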

  9. Accurate 3D kinematic measurement of temporomandibular joint using X-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Matsumoto, Akiko; Sugamoto, Kazuomi; Matsumoto, Ken; Kakimoto, Naoya; Yura, Yoshiaki

    2014-04-01

    Accurate measurement and analysis of the 3D kinematics of the temporomandibular joint (TMJ) is very important for assisting clinical diagnosis and treatment in prosthodontics, orthodontics, and oral surgery. This study presents a new 3D kinematic measurement technique for the TMJ using X-ray fluoroscopic images, which can easily obtain TMJ kinematic data during natural motion. In vivo kinematics of the TMJ (maxilla and mandibular bone) is determined using a feature-based 2D/3D registration, which uses bead silhouettes on fluoroscopic images and 3D surface bone models with beads. The 3D surface models of the maxilla and mandibular bone with beads were created from CT scan data of the subject wearing a mouthpiece with seven strategically placed beads. In order to validate the accuracy of pose estimation for the maxilla and mandibular bone, a computer simulation test was performed using five patterns of synthetic tantalum bead silhouette images. In the clinical application, dynamic movement during jaw opening and closing was recorded, and the relative pose of the mandibular bone with respect to the maxilla was determined. The computer simulation test showed that the root mean square errors were well below 1.0 mm and 1.0 degree. In the clinical application, during jaw opening from 0.0 to 36.8 degrees of rotation, the mandibular condyle exhibited 19.8 mm of anterior sliding relative to the maxillary articular fossa, and these measurements were clinically similar to previous reports. Consequently, the present technique appears suitable for 3D TMJ kinematic analysis.

  10. Do sufficient vitamin D levels at the end of summer in children and adolescents provide an assurance of vitamin D sufficiency at the end of winter? A cohort study.

    PubMed

    Shakeri, Habibesadat; Pournaghi, Seyed-Javad; Hashemi, Javad; Mohammad-Zadeh, Mohammad; Akaberi, Arash

    2017-10-26

    The changes in serum 25-hydroxyvitamin D (25(OH)D) in adolescents from summer to winter and optimal serum vitamin D levels in the summer to ensure adequate vitamin D levels at the end of winter are currently unknown. This study was conducted to address this knowledge gap. The study was conducted as a cohort study. Sixty-eight participants aged 7-18 years who had sufficient vitamin D levels at the end of the summer in 2011 were selected using stratified random sampling. Subsequently, the participants' vitamin D levels were measured at the end of the winter in 2012. A receiver operating characteristic (ROC) curve was used to determine optimal cutoff points for vitamin D at the end of the summer to predict sufficient vitamin D levels at the end of the winter. The results indicated that 89.7% of all the participants had a decrease in vitamin D levels from summer to winter: 14.7% of them were vitamin D-deficient, 36.8% had insufficient vitamin D concentrations and only 48.5% were able to maintain sufficient vitamin D. The optimal cutoff point to provide assurance of sufficient serum vitamin D at the end of the winter was 40 ng/mL at the end of the summer. Sex, age and vitamin D levels at the end of the summer were significant predictors of non-sufficient vitamin D at the end of the winter. In this age group, a dramatic reduction in vitamin D was observed over the follow-up period. Sufficient vitamin D at the end of the summer did not guarantee vitamin D sufficiency at the end of the winter. We found 40 ng/mL to be the optimal cutoff point.
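
    A common way to extract such a cutoff from a ROC analysis is to maximise the Youden index (sensitivity + specificity - 1) over candidate thresholds. The sketch below illustrates that generic procedure on synthetic data; it is not the study's dataset and not necessarily the exact criterion the authors applied.

```python
import numpy as np

def youden_optimal_cutoff(marker, outcome):
    """Return the marker threshold maximising sensitivity + specificity - 1.

    marker  : continuous predictor (e.g. end-of-summer 25(OH)D, ng/mL)
    outcome : 1 if the target state holds (e.g. still sufficient at end of winter), else 0
    """
    best_t, best_j = None, -np.inf
    for t in np.unique(marker):
        pred = marker >= t                       # predict "will stay sufficient"
        sens = np.mean(pred[outcome == 1])
        spec = np.mean(~pred[outcome == 0])
        if sens + spec - 1.0 > best_j:
            best_t, best_j = t, sens + spec - 1.0
    return best_t, best_j

# Synthetic example: higher summer levels make winter sufficiency more likely.
rng = np.random.default_rng(7)
summer = rng.uniform(20, 60, size=200)
p_keep = 1.0 / (1.0 + np.exp(-(summer - 40.0) / 4.0))
winter_ok = (rng.uniform(size=200) < p_keep).astype(int)
print(youden_optimal_cutoff(summer, winter_ok))
```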

  11. Kinetic determinations of accurate relative oxidation potentials of amines with reactive radical cations.

    PubMed

    Gould, Ian R; Wosinska, Zofia M; Farid, Samir

    2006-01-01

    Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.

  12. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    PubMed Central

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

    Abstract Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  13. Sufficient Dimension Reduction for Longitudinally Measured Predictors

    PubMed Central

    Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia

    2013-01-01

    We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure, that accommodates the longitudinal nature of the predictors, we develop first moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than the score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver-operator characteristics (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635

  14. Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takumi; Sagawa, Takahiro

    2018-04-01

    A sufficient statistic is a significant concept in statistics, which means a probability variable that has sufficient information required for an inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity takes the maximum. Since the maximal sensory capacity imposes a constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems there exists the optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are optimized at the same time. We apply our general result to a model of sensory adaptation of E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.

  15. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6
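
    The hybrid strategy described above, a parametric Gaussian mixture over the conditionally Gaussian directions combined with a non-parametric kernel density estimate over the remaining directions, can be caricatured in one dimension per block. The sketch below uses fabricated conditional means and variances purely to show how the two pieces are assembled; it is not the authors' algorithm or test problems.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: K ensemble members in the low-dimensional "resolved" variable u1; for
# each member, the conditional law of u2 is assumed Gaussian with a known mean and
# variance (fabricated here for illustration).
K = 100
u1 = rng.normal(size=K)
cond_mean = np.tanh(u1)            # hypothetical conditional means
cond_var = 0.1 + 0.05 * u1**2      # hypothetical conditional variances

def mixture_pdf(x, means, variances):
    """Equal-weight Gaussian mixture: the parametric half of the hybrid estimate."""
    z = (x[:, None] - means[None, :]) / np.sqrt(variances)[None, :]
    return np.mean(np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi * variances)[None, :], axis=1)

def kde_pdf(x, samples, bandwidth=0.3):
    """Plain Gaussian kernel density estimate: the non-parametric half (for u1)."""
    z = (x[:, None] - samples[None, :]) / bandwidth
    return np.mean(np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2.0 * np.pi)), axis=1)

grid = np.linspace(-3, 3, 7)
print("p(u2) mixture estimate:", np.round(mixture_pdf(grid, cond_mean, cond_var), 3))
print("p(u1) kernel estimate: ", np.round(kde_pdf(grid, u1), 3))
```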

  16. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to

  17. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it

    2014-12-28

    A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
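
    The iterative Boltzmann inversion scheme referred to above refines a tabulated pair potential by comparing the radial distribution function it produces with a target one. A single update step has the standard form sketched below; the inner coarse-grained simulation is left as a placeholder, and the numbers are made up.

```python
import numpy as np

kBT = 1.0   # thermal energy in reduced units; purely illustrative

def ibi_update(V, g_current, g_target, alpha=1.0, eps=1e-12):
    """One iterative Boltzmann inversion step on a common r-grid:
    V_{n+1}(r) = V_n(r) + alpha * kBT * ln( g_n(r) / g_target(r) ).
    eps guards against log(0) where either distribution vanishes."""
    return V + alpha * kBT * np.log((g_current + eps) / (g_target + eps))

# Usage sketch with fabricated radial distribution functions:
r = np.linspace(0.5, 3.0, 6)
V0 = np.zeros_like(r)                 # initial guess for the pair potential
g_target = np.exp(-1.0 / r)           # made-up target g(r) (e.g. from full-monomer data)
g_sim = np.exp(-1.2 / r)              # made-up g(r) produced by V0 in a CG simulation
print(ibi_update(V0, g_sim, g_target))
# In practice the loop alternates: simulate with V_n, measure g_n, update to V_{n+1}.
```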

  18. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    Army Research Laboratory technical report ARL-TR-6761, January 2014, by Stephen C. Tratz (Computational and Information Sciences); approved for public release. [Only standard report-documentation (SF 298) front matter is available for this record; no abstract text survives.]

  19. Accurate collision-induced line-coupling parameters for the fundamental band of CO in He - Close coupling and coupled states scattering calculations

    NASA Technical Reports Server (NTRS)

    Green, Sheldon; Boissoles, J.; Boulet, C.

    1988-01-01

    The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.

  20. Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.

    PubMed

    Klotz, Dino; Grave, Daniel A; Rothschild, Avner

    2017-08-09

    The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to the surface recombination loss. The charge transfer efficiency, η_t, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of η_t by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine η_t, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of η_t because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the η_t values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of η_t values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity. Statistical analysis of the results obtained for our model hematite photoanode show good correlation between different methods for
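
    In the notation above, the transfer efficiency is the fraction of the surface hole flux that goes into water oxidation rather than surface recombination. A common phenomenological way of writing it, together with the ratio-of-intercepts estimate often used in IMPS analysis, is sketched below; the rate-constant form is a standard textbook model and is not stated explicitly in this abstract.

```latex
% Charge transfer efficiency at the photoanode surface:
%   j_s : flux of holes reaching the surface, j_transfer : flux consumed by water oxidation,
%   k_t : charge-transfer rate constant,      k_r : surface-recombination rate constant.
\eta_t \;=\; \frac{j_{\mathrm{transfer}}}{j_s} \;=\; \frac{k_t}{k_t + k_r}
% Common IMPS estimate (assumed form, Peter-type analysis): ratio of the low- and
% high-frequency limits of the modulated photocurrent response.
\eta_t \;\approx\; \frac{I(\omega \to 0)}{I(\omega \to \infty)}
```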

  1. Energy self-sufficiency in Northampton, Massachusetts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The study is not an engineering analysis but begins the process of exploring the potential for conservation and local renewable-resource development in a specific community, Northampton, Massachusetts, with the social, institutional, and environmental factors in that community taken into account. Section I is an extensive executive summary of the full study, and Section II is a detailed examination of the potential for increased local energy self-sufficiency in Northampton, including current and future demand estimates, the possible role of conservation and renewable resources, and a discussion of the economic and social implications of alternative energy systems. (MOW)

  2. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this delay difference and then show that a delay-measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.
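
    The wavelength-dependent delay that the method must resolve can be bounded with the one-line dispersion estimate Δτ ≈ D·L·Δλ. The snippet below is only a back-of-envelope illustration using a typical dispersion coefficient for standard single-mode fibre near 1.55 micrometers (an assumed value, not one quoted in the abstract).

```python
# Back-of-envelope chromatic-dispersion delay: delta_tau = D * L * delta_lambda.
D = 17.0      # ps/(nm*km), typical of standard single-mode fibre near 1550 nm (assumed)
L = 100.0     # km, fibre length from the dual-wavelength example
dlam = 1.0    # nm, wavelength spacing
delta_tau_ps = D * L * dlam
print(f"differential delay ~ {delta_tau_ps / 1000:.1f} ns")   # about 1.7 ns
```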

  3. Intellectual Freedom and Economic Sufficiency as Educational Entitlements.

    ERIC Educational Resources Information Center

    Morse, Jane Fowler

    2001-01-01

    Using the theories of John Stuart Mill and Karl Marx, this article supports the educational entitlements of intellectual freedom and economic sufficiency. Explores these issues in reference to their implications for teaching, the teaching profession and its training. Concludes that ideas cannot be controlled by the interests of the dominant class.…

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  9. A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2010-05-19

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  10. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  11. Vaccine procurement and self-sufficiency in developing countries.

    PubMed

    Woodle, D

    2000-06-01

    This paper discusses the movement toward self-sufficiency in vaccine supply in developing countries (and countries in transition to new economic and political systems) and explains special supply concerns about vaccine as a product class. It traces some history of donor support and programmes aimed at self-financing, then continues with a discussion about self-sufficiency in terms of institutional capacity building. A number of deficiencies commonly found in vaccine procurement and supply in low- and middle-income countries are characterized, and institutional strengthening with procurement technical assistance is described. The paper also provides information about a vaccine procurement manual being developed by the United States Agency for International Development (USAID) and the World Health Organization (WHO) for use in this environment. Two brief case studies are included to illustrate the spectrum of existing capabilities and different approaches to technical assistance aimed at developing or improving vaccine procurement capability. In conclusion, the paper discusses the special nature of vaccine and issues surrounding potential integration and decentralization of vaccine supply systems as part of health sector reform.

  12. Information trimming: Sufficient statistics, mutual information, and predictability from effective channel states

    NASA Astrophysics Data System (ADS)

    James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2017-06-01

    One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
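
    The central claim can be stated compactly: if f(X) is a minimal sufficient statistic of X about Y and g(Y) is a minimal sufficient statistic of Y about X, then trimming either variable, or both simultaneously, leaves the mutual information unchanged. A notational sketch added here, following the abstract's description:

```latex
% Information trimming: both variables may be replaced by their minimal
% sufficient statistics without losing mutual information.
I[X ; Y] \;=\; I[f(X) ; Y] \;=\; I[X ; g(Y)] \;=\; I[f(X) ; g(Y)]
```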

  13. A carbon CT system: how to obtain accurate stopping power ratio using a Bragg peak reduction technique

    NASA Astrophysics Data System (ADS)

    Lee, Sung Hyun; Sunaguchi, Naoki; Hirano, Yoshiyuki; Kano, Yosuke; Liu, Chang; Torikoshi, Masami; Ohno, Tatsuya; Nakano, Takashi; Kanai, Tatsuaki

    2018-02-01

    In this study, we investigate the performance of the Gunma University Heavy Ion Medical Center’s ion computed tomography (CT) system, which measures the residual range of a carbon-ion beam using a fluoroscopy screen, a charge-coupled-device camera, and a moving wedge absorber and collects CT reconstruction images from each projection angle. Each 2D image was obtained by changing the polymethyl methacrylate (PMMA) thickness, such that all images for one projection could be expressed as the depth distribution in PMMA. The residual range as a function of PMMA depth was related to the range in water through a calibration factor, which was determined by comparing the PMMA-equivalent thickness measured by the ion CT system to the water-equivalent thickness measured by a water column. Aluminium, graphite, PMMA, and five biological phantoms were placed in a sample holder, and the residual range for each was quantified simultaneously. A novel method of CT reconstruction to correct for the angular deflection of incident carbon ions in the heterogeneous region utilising the Bragg peak reduction (BPR) is also introduced in this paper, and its performance is compared with other methods present in the literature such as the decomposition and differential methods. Stopping power ratio values derived with the BPR method from carbon-ion CT images matched closely with the true water-equivalent length values obtained from the validation slab experiment.

  14. Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile

    DTIC Science & Technology

    2005-09-01

    develop (with fewer dollars) more lethal and effective munitions. The munitions must stay abreast of the latest technology available to our...consuming. Computer simulations can and have provided an effective means of determining the unsteady aerodynamics and flight mechanics of guided projectile...Recently, the time-accurate technique was used to obtain improved results for Magnus moment and roll damping moment of a spinning projectile at transonic

  15. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC2,1) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC2,1 > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.
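
    The reliability criterion quoted above can be reproduced with a standard two-way random-effects ICC(2,1) computed on a subjects-by-days matrix of step counts. The sketch below follows the usual Shrout-Fleiss mean-squares formula on synthetic data; it illustrates the statistic, not the study's actual analysis code.

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    X is an (n subjects x k days) matrix of daily activity values."""
    n, k = X.shape
    grand = X.mean()
    ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)    # between subjects
    ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)    # between days
    resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0, keepdims=True) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))               # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Synthetic daily step counts: a stable subject effect plus day-to-day noise.
rng = np.random.default_rng(42)
subject_level = rng.normal(6000, 2000, size=(92, 1))
steps = subject_level + rng.normal(0, 600, size=(92, 2))   # two monitoring days
print(round(icc_2_1(steps), 3))                            # typically > 0.9 here
```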

  16. A simple threshold rule is sufficient to explain sophisticated collective decision-making.

    PubMed

    Robinson, Elva J H; Franks, Nigel R; Ellis, Samuel; Okuda, Saki; Marshall, James A R

    2011-01-01

    Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.

  17. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  18. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  19. Countercurrent chromatography separation of saponins by skeleton type from Ampelozizyphus amazonicus for off-line ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry analysis and characterisation.

    PubMed

    de Souza Figueiredo, Fabiana; Celano, Rita; de Sousa Silva, Danila; das Neves Costa, Fernanda; Hewitson, Peter; Ignatova, Svetlana; Piccinelli, Anna Lisa; Rastrelli, Luca; Guimarães Leitão, Suzana; Guimarães Leitão, Gilda

    2017-01-20

    Ampelozizyphus amazonicus Ducke (Rhamnaceae), a medicinal plant used to prevent malaria, is a climbing shrub, native to the Amazonian region, with jujubogenin glycoside saponins as main compounds. The crude extract of this plant is too complex for any kind of structural identification, and HPLC separation was not sufficient to resolve this issue. Therefore, the aim of this work was to obtain saponin-enriched fractions from the bark ethanol extract by countercurrent chromatography (CCC) for further isolation and identification/characterisation of the major saponins by HPLC and MS. The butanol extract was fractionated by CCC with a hexane - ethyl acetate - butanol - ethanol - water (1:6:1:1:6; v/v) solvent system, yielding 4 group fractions. The collected fractions were analysed by UHPLC-HRMS (ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry) and MSn. Group 1 presented mainly oleanane-type saponins, and group 3 showed mainly jujubogenin glycosides, keto-dammarane type triterpene saponins and saponins with a C31 skeleton. Thus, CCC separated saponins from the butanol-rich extract by skeleton type. A further purification of group 3 by CCC (ethyl acetate - ethanol - water (1:0.2:1; v/v)) and HPLC-RI was performed in order to obtain these unusual aglycones in pure form. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    NASA Astrophysics Data System (ADS)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and of control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will be also investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is of order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use the inputs from the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimation than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimation. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed and memory required will be evaluated in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.
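
    As a toy illustration of the Kalman-filter stage of such a pipeline, and not the actual Gaia OGA software (whose attitude is a quaternion time series refined with star-field residuals), the sketch below smooths a noisy scalar attitude angle with a random-walk Kalman filter; the noise variances q and r are assumed values chosen for the example.

      import numpy as np

      def kalman_smooth_attitude(measurements, x0, p0=1.0, q=1e-10, r=1e-6):
          # minimal scalar Kalman filter with a random-walk state model;
          # q and r are assumed process- and measurement-noise variances
          x, p = x0, p0
          estimates = []
          for z in measurements:
              p += q                    # predict
              k = p / (p + r)           # Kalman gain
              x += k * (z - x)          # update with the new attitude measurement
              p *= 1.0 - k
              estimates.append(x)
          return np.array(estimates)

      rng = np.random.default_rng(0)
      true_angle = 5e-4                               # rad, held constant for the toy case
      zs = true_angle + rng.normal(0.0, 1e-3, 500)    # noisy per-frame attitude measurements
      print(kalman_smooth_attitude(zs, x0=zs[0])[-1]) # settles close to 5e-4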

  1. The contribution of an asthma diagnostic consultation service in obtaining an accurate asthma diagnosis for primary care patients: results of a real-life study.

    PubMed

    Gillis, R M E; van Litsenburg, W; van Balkom, R H; Muris, J W; Smeenk, F W

    2017-05-19

    Previous studies showed that general practitioners have problems in diagnosing asthma accurately, resulting in both under- and overdiagnosis. To support general practitioners in their diagnostic process, an asthma diagnostic consultation service was set up. We evaluated the performance of this asthma diagnostic consultation service by analysing the (dis)concordance between the general practitioners' working hypotheses and the asthma diagnostic consultation service diagnoses, and the possible consequences this had for the patients' pharmacotherapy. In total 659 patients were included in this study. At this service the patients' medical history was taken and a physical examination and a histamine challenge test were carried out. We compared the general practitioners' working hypotheses with the asthma diagnostic consultation service diagnoses and the change in medication that was incurred. In 52% (n = 340) an asthma diagnosis was excluded. The diagnosis was confirmed in 42% (n = 275). Furthermore, chronic rhinitis was diagnosed in 40% (n = 261) of the patients whereas this was noted in 25% (n = 163) by their general practitioner. The adjusted diagnosis resulted in a change of medication for more than half of all patients. In 10% (n = 63) medication was started because of a new asthma diagnosis. The 'one-stop-shop' principle was met for 53% of patients and 91% (n = 599) were referred back to their general practitioner, mostly within 6 months. Only 6% (n = 41) remained under control of the asthma diagnostic consultation service because of severe unstable asthma. In conclusion, the asthma diagnostic consultation service helped general practitioners significantly in setting accurate diagnoses for their patients with an asthma hypothesis. This may help diminish the problem of over- and underdiagnosis and may result in more appropriate treatment regimens. SERVICE HELPS GENERAL PRACTITIONERS MAKE ACCURATE DIAGNOSES: A consultation service can

  2. Can New Zealand achieve self-sufficiency in its nursing workforce?

    PubMed

    North, Nicola

    2011-01-01

    This paper reviews impacts on the nursing workforce of health policy and reforms of the past two decades and suggests reasons for both current difficulties in retaining nurses in the workforce and measures to achieve short-term improvements. Difficulties in retaining nurses in the New Zealand workforce have contributed to nursing shortages, leading to a dependence on overseas recruitment. In a context of global shortages and having to compete in a global nursing labour market, an alternative to dependence on overseas nurses is self-sufficiency. Discursive paper. Analysis of nursing workforce data highlighted threats to self-sufficiency, including age structure, high rates of emigration of New Zealand nurses with reliance on overseas nurses and an annual output of nurses that is insufficient to replace both expected retiring nurses and emigrating nurses. A review of recent policy and other documents indicates that two decades of health reform and lack of a strategic focus on nursing has contributed to shortages. Recent strategic approaches to the nursing workforce have included workforce stocktakes, integrated health workforce development and nursing workforce projections, with a single authority now responsible for planning, education, training and development for all health professions and sectors. Current health and nursing workforce development strategies offer wide-ranging and ambitious approaches. An alternative approach is advocated: based on workforce data analysis, pressing threats to self-sufficiency and measures available are identified to achieve, in the short term, the maximum impact on retaining nurses. A human resources in health approach is recommended that focuses on employment conditions and professional nursing as well as recruitment and retention strategies. Nursing is identified as 'crucial' to meeting demands for health care. A shortage of nurses threatens delivery of health services and supports the case for self-sufficiency in the nursing

  3. Portfolio of Research in Welfare and Family Self-Sufficiency: FY 2014. OPRE Report 2015-15

    ERIC Educational Resources Information Center

    Administration for Children & Families, 2015

    2015-01-01

    The Division of Economic Independence within the Office of Planning, Research and Evaluation (OPRE) has primary responsibility for welfare and family self-sufficiency research. OPRE's research in the area of welfare and family self-sufficiency is designed to expand knowledge about effective programs to promote employment, self-sufficiency, and…

  4. The Data Evaluation for Obtaining Accuracy and Reliability

    NASA Astrophysics Data System (ADS)

    Kim, Chang Geun; Chae, Kyun Shik; Lee, Sang Tae; Bhang, Gun Woong

    2012-11-01

    Numerous scientific measurement results pour in from papers, data books, and other sources as the internet grows rapidly. We often encounter many different measurement results for the same measurand and must then choose the most reliable one, which is not as easy as choosing a flavour at an ice cream parlour. Even expert users find it difficult to distinguish accurate and reliable scientific data within this huge volume of measurement results. For this reason, data evaluation is becoming more important with the rapid growth of the internet and globalization. Furthermore, the expression of measurement results is not standardized. In response to these needs, international efforts have been strengthened: as a first step, the global harmonization of terminology used in metrology and of the expression of uncertainty in measurement was published through ISO. These methods have spread widely across many areas of science to obtain accuracy and reliability in measurement. In this paper, the GUM, SRD and data evaluation for atomic collisions are introduced.

  5. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  6. How, When, and Where? Assessing Renewable Energy Self-Sufficiency at the Neighborhood Level.

    PubMed

    Grosspietsch, David; Thömmes, Philippe; Girod, Bastien; Hoffmann, Volker H

    2018-02-20

    Self-sufficient decentralized systems challenge the centralized energy paradigm. Although scholars have assessed specific locations and technological aspects, it remains unclear how, when, and where energy self-sufficiency could become competitive. To address this gap, we develop a techno-economic model for energy self-sufficient neighborhoods that integrates solar photovoltaics (PV), conversion, and storage technologies. We assess the cost of 100% self-sufficiency for both electricity and heat, comparing different technical configurations for a stylized neighborhood in Switzerland and juxtaposing these findings with projections on market and technology development. We then broaden the scope and vary the neighborhood's composition (residential share) and geographic position (along different latitudes). Regarding how to design self-sufficient neighborhoods, we find two promising technical configurations. The "PV-battery-hydrogen" configuration is projected to outperform a fossil-fueled and grid-connected reference configuration when energy prices increase by 2.5% annually and cost reductions in hydrogen-related technologies by a factor of 2 are achieved. The "PV-battery" configuration would allow achieving parity with the reference configuration sooner, at 21% cost reduction. Additionally, more cost-efficient deployment is found in neighborhoods where the end-use is small commercial or mixed and in regions where seasonal fluctuations are low and thus allow for reducing storage requirements.

  7. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    NASA Astrophysics Data System (ADS)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by favouring processor quantity over quality, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card in computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  8. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as setting the camera relative exposure value (REV) and the histogram threshold, have hindered the computation of accurate gap fraction. Here we propose a novel method that enables gap fraction to be measured accurately by DCP during daytime under various sky conditions. The method computes gap fraction from a single unsaturated DCP raw image, corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The method showed little variation in gap fraction across different REVs in both dense and sparse canopies and across a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
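
    A minimal sketch of the basic quantity, assuming a normalised blue channel and an arbitrary sky threshold, and omitting the paper's scattering correction and sky-image reconstruction, reads the gap fraction off an unsaturated image as the share of sky pixels:

      import numpy as np

      def gap_fraction(blue_channel, sky_threshold=0.5):
          # share of pixels brighter than the threshold in a normalised, unsaturated
          # blue channel; sky pixels are bright, canopy pixels are dark
          return float((blue_channel > sky_threshold).mean())

      # synthetic 100 x 100 scene: roughly 30% sky, 70% canopy
      rng = np.random.default_rng(0)
      image = np.where(rng.random((100, 100)) < 0.3, 0.9, 0.1)
      print(gap_fraction(image))  # close to 0.30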

  9. Entrepreneurship by any other name: self-sufficiency versus innovation.

    PubMed

    Parker Harris, Sarah; Caldwell, Kate; Renko, Maija

    2014-01-01

    Entrepreneurship has been promoted as an innovative strategy to address the employment of people with disabilities. Research has predominantly focused on the self-sufficiency aspect without fully integrating entrepreneurship literature in the areas of theory, systems change, and demonstration projects. Subsequently there are gaps in services, policies, and research in this field that, in turn, have limited our understanding of the support needs and barriers or facilitators of entrepreneurs with disabilities. A thorough analysis of the literature in these areas led to the development of two core concepts that need to be addressed in integrating entrepreneurship into disability employment research and policy: clarity in operational definitions and better disability statistics and outcome measures. This article interrogates existing research and policy efforts in this regard to argue for a necessary shift in the field from focusing on entrepreneurship as self-sufficiency to understanding entrepreneurship as innovation.

  10. Greater learnability is not sufficient to produce cultural universals.

    PubMed

    Rafferty, Anna N; Griffiths, Thomas L; Ettlinger, Marc

    2013-10-01

    Looking across human societies reveals regularities in the languages that people speak and the concepts that they use. One explanation that has been proposed for these "cultural universals" is differences in the ease with which people learn particular languages and concepts. A difference in learnability means that languages and concepts possessing a particular property are more likely to be accurately transmitted from one generation of learners to the next. Intuitively, this difference could allow languages and concepts that are more learnable to become more prevalent after multiple generations of cultural transmission. If this is the case, the prevalence of languages and concepts with particular properties can be explained simply by demonstrating empirically that they are more learnable. We evaluate this argument using mathematical analysis and behavioral experiments. Specifically, we provide two counter-examples that show how greater learnability need not result in a property becoming prevalent. First, more learnable languages and concepts can nonetheless be less likely to be produced spontaneously as a result of transmission failures. We simulated cultural transmission in the laboratory to show that this can occur for memory of distinctive items: these items are more likely to be remembered, but not generated spontaneously once they have been forgotten. Second, when there are many languages or concepts that lack the more learnable property, sheer numbers can swamp the benefit produced by greater learnability. We demonstrate this using a second series of experiments involving artificial language learning. Both of these counter-examples show that simply finding a learnability bias experimentally is not sufficient to explain why a particular property is prevalent in the languages or concepts used in human societies: explanations for cultural universals based on cultural transmission need to consider the full set of hypotheses a learner could entertain and all of
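
    The first counter-example has a compact reading as a two-state Markov chain over generations; the retention and regeneration probabilities below are made-up values, not the paper's experimental estimates, chosen to show a more learnable variant that nevertheless stays rare:

      def stationary_prevalence(retain_a, retain_b, regen_a):
          # two-variant transmission chain as a 2-state Markov chain over generations:
          # a learner keeps the teacher's variant with probability retain_*, and on a
          # transmission failure regenerates variant A with probability regen_a
          p_a_given_a = retain_a + (1 - retain_a) * regen_a
          p_a_given_b = (1 - retain_b) * regen_a
          # stationary solution of pi = pi * p(A|A) + (1 - pi) * p(A|B)
          return p_a_given_b / (1 - p_a_given_a + p_a_given_b)

      # variant A is more learnable (retained more often) yet rarely produced
      # spontaneously after a failure, so it stays rare in the long run (~0.17)
      print(stationary_prevalence(retain_a=0.9, retain_b=0.6, regen_a=0.05))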

  11. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

    Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e. "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in the extracts prepared without excess aquacobalamin is not detected in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin with a non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  12. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

    Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin with a non-exchangeable CN− group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  13. Accurate Valence Ionization Energies from Kohn-Sham Eigenvalues with the Help of Potential Adjustors.

    PubMed

    Thierbach, Adrian; Neiss, Christian; Gallandi, Lukas; Marom, Noa; Körzdörfer, Thomas; Görling, Andreas

    2017-10-10

    An accurate yet computationally very efficient and formally well justified approach to calculate molecular ionization potentials is presented and tested. The first as well as higher ionization potentials are obtained as the negatives of the Kohn-Sham eigenvalues of the neutral molecule after adjusting the eigenvalues by a recently [Görling, Phys. Rev. B 2015, 91, 245120] introduced potential adjustor for exchange-correlation potentials. Technically the method is very simple. Besides a Kohn-Sham calculation of the neutral molecule, only a second Kohn-Sham calculation of the cation is required. The eigenvalue spectrum of the neutral molecule is shifted such that the negative of the eigenvalue of the highest occupied molecular orbital equals the energy difference of the total electronic energies of the cation minus the neutral molecule. For the first ionization potential this simply amounts to a ΔSCF calculation. Then, the higher ionization potentials are obtained as the negatives of the correspondingly shifted Kohn-Sham eigenvalues. Importantly, this shift of the Kohn-Sham eigenvalue spectrum is not just ad hoc. In fact, it is formally necessary for the physically correct energetic adjustment of the eigenvalue spectrum as it results from ensemble density-functional theory. An analogous approach for electron affinities is equally well obtained and justified. To illustrate the practical benefits of the approach, we calculate the valence ionization energies of test sets of small- and medium-sized molecules and photoelectron spectra of medium-sized electron acceptor molecules using a typical semilocal (PBE) and two typical global hybrid functionals (B3LYP and PBE0). The potential adjusted B3LYP and PBE0 eigenvalues yield valence ionization potentials that are in very good agreement with experimental values, reaching an accuracy that is as good as the best G0W0 methods, however, at much lower computational costs. The potential adjusted PBE eigenvalues result in
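
    The energetic adjustment itself amounts to a simple shift of the occupied eigenvalue spectrum anchored to a ΔSCF energy difference. The sketch below illustrates only that bookkeeping, with made-up eigenvalues and total energies; it stands in for, rather than reproduces, the full potential-adjustor construction of the exchange-correlation potential.

      def shifted_ionization_potentials(occupied_eigenvalues, e_neutral, e_cation):
          # shift the occupied Kohn-Sham eigenvalues of the neutral molecule so that
          # -eps_HOMO equals the DeltaSCF first ionization potential, then read higher
          # ionization potentials off the shifted spectrum
          ip_first = e_cation - e_neutral            # DeltaSCF first IP
          homo = occupied_eigenvalues[-1]            # eigenvalues in ascending order
          shift = -ip_first - homo
          return [-(eps + shift) for eps in reversed(occupied_eigenvalues)]

      # made-up numbers in hartree, purely for illustration
      eps_occ = [-1.20, -0.85, -0.40, -0.31]
      print(shifted_ionization_potentials(eps_occ, e_neutral=-76.40, e_cation=-75.95))
      # -> approximately [0.45, 0.54, 0.99, 1.34]; the first entry reproduces the DeltaSCF value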

  14. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  15. Accurate phylogenetic classification of DNA fragments based onsequence composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
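
    A minimal sketch of the underlying idea, sequence composition as a taxonomic signal, is given below; the k-mer length, the nearest-centroid assignment and the toy reference sequences are illustrative assumptions, not PhyloPythia's trained multi-class classifier.

      import random
      from collections import Counter
      from itertools import product

      def kmer_profile(seq, k=4):
          # normalised k-mer frequency vector: the composition signal exploited by
          # composition-based classifiers
          kmers = ["".join(p) for p in product("ACGT", repeat=k)]
          counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
          total = max(sum(counts[km] for km in kmers), 1)
          return [counts[km] / total for km in kmers]

      def assign_clade(fragment, clade_profiles, k=4):
          # nearest-centroid assignment: pick the clade whose reference profile is closest
          frag = kmer_profile(fragment, k)
          def sq_dist(profile):
              return sum((a - b) ** 2 for a, b in zip(frag, profile))
          return min(clade_profiles, key=lambda clade: sq_dist(clade_profiles[clade]))

      # toy reference "genomes" with different base composition and a 1 kb query fragment
      random.seed(0)
      at_rich = "".join(random.choices("AATTACGT", k=20000))
      gc_rich = "".join(random.choices("GGCCACGT", k=20000))
      profiles = {"clade_AT": kmer_profile(at_rich), "clade_GC": kmer_profile(gc_rich)}
      query = "".join(random.choices("GGCCACGT", k=1000))
      print(assign_clade(query, profiles))  # -> clade_GC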

  16. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.

  17. The determination of accurate dipole polarizabilities alpha and gamma for the noble gases

    NASA Technical Reports Server (NTRS)

    Rice, Julia E.; Taylor, Peter R.; Lee, Timothy J.; Almlof, Jan

    1991-01-01

    Accurate static dipole polarizabilities alpha and gamma of the noble gases He through Xe were determined using wave functions of similar quality for each system. Good agreement with experimental data for the static polarizability gamma was obtained for Ne and Xe, but not for Ar and Kr. Calculations suggest that the experimental values for these latter gases are too low.

  18. Anchoring the Population II Distance Scale: Accurate Ages for Globular Clusters

    NASA Technical Reports Server (NTRS)

    Chaboyer, Brian C.; Carney, Bruce W.; Latham, David W.; Dunca, Douglas; Grand, Terry; Layden, Andy; Sarajedini, Ataollah; McWilliam, Andrew; Shao, Michael

    2004-01-01

    The metal-poor stars in the halo of the Milky Way galaxy were among the first objects formed in our Galaxy. These Population II stars are the oldest objects in the universe whose ages can be accurately determined. Age determinations for these stars allow us to set a firm lower limit to the age of the universe and to probe the early formation history of the Milky Way. The age of the universe determined from studies of Population II stars may be compared to the expansion age of the universe and used to constrain cosmological models. The largest uncertainty in estimates for the ages of stars in our halo is due to the uncertainty in the distance scale to Population II objects. We propose to obtain accurate parallaxes to a number of Population II objects (globular clusters and field stars in the halo), resulting in a significant improvement in the Population II distance scale and greatly reducing the uncertainty in the estimated ages of the oldest stars in our galaxy. At the present time, the oldest stars are estimated to be 12.8 Gyr old, with an uncertainty of approx. 15%. The SIM observations obtained by this key project, combined with the supporting theoretical research and ground-based observations outlined in this proposal, will reduce the estimated uncertainty in the age estimates to 5%.

  19. Online Learning in Higher Education: Necessary and Sufficient Conditions

    ERIC Educational Resources Information Center

    Lim, Cher Ping

    2005-01-01

    The spectacular development of information and communication technologies through the Internet has provided opportunities for students to explore the virtual world of information. In this article, the author discusses the necessary and sufficient conditions for successful online learning in educational institutions. The necessary conditions…

  20. When is Information Sufficient for Action Search with Unreliable Yet Informative Intelligence

    DTIC Science & Technology

    2016-03-30

    When Is Information Sufficient for Action? Search with Unreliable yet Informative Intelligence. Michael Atkinson... Operations Research, published online in Articles in Advance, 30 March 2016. ISSN 1526-5463 (online). http://dx.doi.org/10.1287/opre.2016.1488. © 2016 INFORMS. Journal information: http://pubsonline.informs.org

  1. Role of self-sufficiency, productivity and diversification on the economic sustainability of farming systems with autochthonous sheep breeds in less favoured areas in Southern Europe.

    PubMed

    Ripoll-Bosch, R; Joy, M; Bernués, A

    2014-08-01

    Traditional mixed livestock cereal- and pasture-based sheep farming systems in Europe are threatened by intensification and specialisation processes. However, the intensification process does not always yield improved economic results or efficiency. This study involved a group of farmers that raised an autochthonous sheep breed (Ojinegra de Teruel) in an unfavourable area of North-East Spain. The study aimed to typify the farms and elucidate the existing links between economic performance and certain sustainability indicators (i.e. productivity, self-sufficiency and diversification). Information was obtained through direct interviews with 30 farms (73% of the farmers belonging to the breeders association). Interviews were conducted in 2009 and involved 32 indicators regarding farm structure, management and economic performance. A principal component analysis yielded three factors explaining 77.9% of the original variance. These factors were named inputs/self-sufficiency, which included the use of on-farm feeds, the amount of variable costs per ewe and economic performance; productivity, which included lamb productivity and economic autonomy; and productive orientation, which included the degree of specialisation in production. A cluster analysis identified the following four groups of farms: high-input intensive system; low-input self-sufficient system; specialised livestock system; and diversified crops-livestock system. In conclusion, despite the large variability between and within groups, the following factors that explain the economic profitability of farms were identified: (i) high feed self-sufficiency and low variable costs enhance the economic performance (per labour unit) of the farms; (ii) animal productivity reduces subsidy dependence, but does not necessarily imply better economic performance; and (iii) diversity of production enhances farm flexibility, but is not related to economic performance.
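
    The statistical pipeline described (principal component analysis followed by clustering of the factor scores) can be sketched as follows; the random indicator matrix, the scikit-learn implementation and the choice of k-means are assumptions for illustration, not the authors' data or software.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # hypothetical indicator matrix: 30 farms x 32 structure/management/economic indicators
      rng = np.random.default_rng(0)
      indicators = rng.normal(size=(30, 32))

      pca = PCA(n_components=3)
      scores = pca.fit_transform(StandardScaler().fit_transform(indicators))
      print(pca.explained_variance_ratio_.sum())  # share of the original variance retained

      groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
      print(groups)  # farm-system group assigned to each farm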

  2. Accurate respiration measurement using DC-coupled continuous-wave radar sensor for motion-adaptive cancer radiotherapy.

    PubMed

    Gu, Changzhan; Li, Ruijiang; Zhang, Hualiang; Fung, Albert Y C; Torres, Carlos; Jiang, Steve B; Li, Changzhi

    2012-11-01

    Accurate respiration measurement is crucial in motion-adaptive cancer radiotherapy. Conventional methods for respiration measurement are undesirable because they are either invasive to the patient or do not have sufficient accuracy. In addition, measurement of external respiration signal based on conventional approaches requires close patient contact to the physical device which often causes patient discomfort and undesirable motion during radiation dose delivery. In this paper, a dc-coupled continuous-wave radar sensor was presented to provide a noncontact and noninvasive approach for respiration measurement. The radar sensor was designed with dc-coupled adaptive tuning architectures that include RF coarse-tuning and baseband fine-tuning, which allows the radar sensor to precisely measure movement with stationary moment and always work with the maximum dynamic range. The accuracy of respiration measurement with the proposed radar sensor was experimentally evaluated using a physical phantom, human subject, and moving plate in a radiotherapy environment. It was shown that respiration measurement with radar sensor while the radiation beam is on is feasible and the measurement has a submillimeter accuracy when compared with a commercial respiration monitoring system which requires patient contact. The proposed radar sensor provides accurate, noninvasive, and noncontact respiration measurement and therefore has a great potential in motion-adaptive radiotherapy.

  3. Assessing sufficient capability: A new approach to economic evaluation.

    PubMed

    Mitchell, Paul Mark; Roberts, Tracy E; Barton, Pelham M; Coast, Joanna

    2015-08-01

    Amartya Sen's capability approach has been discussed widely in the health economics discipline. Although measures have been developed to assess capability in economic evaluation, there has been much less attention paid to the decision rules that might be applied alongside. Here, new methods, drawing on the multidimensional poverty and health economics literature, are developed for conducting economic evaluation within the capability approach and focusing on an objective of achieving "sufficient capability". This objective more closely reflects the concern with equity that pervades the capability approach and the method has the advantage of retaining the longitudinal aspect of estimating outcome that is associated with quality-adjusted life years (QALYs), whilst also drawing on notions of shortfall associated with assessments of poverty. Economic evaluation from this perspective is illustrated in an osteoarthritis patient group undergoing joint replacement, with capability wellbeing assessed using ICECAP-O. Recommendations for taking the sufficient capability approach forward are provided. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Accurate fluid force measurement based on control surface integration

    NASA Astrophysics Data System (ADS)

    Lentink, David

    2018-01-01

    Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques, 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP), address this. Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force depends only on the fluid density, ρf, and the body density, ρb. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non

  5. 76 FR 55407 - Announcement of Funding Awards; Public and Indian Housing Family Self-Sufficiency Program Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ...; Public and Indian Housing Family Self-Sufficiency Program Under the Resident Opportunity and Self... (NOFA) for the Public and Indian Housing Family Self-Sufficiency Program Under the Resident Opportunity.... Appendix A--List of Public and Indian Housing Family Self-Sufficiency Program Under the Resident...

  6. Towards a Sufficient Theory of Transition in Cognitive Development.

    ERIC Educational Resources Information Center

    Wallace, J. G.

    The work reported aims at the construction of a sufficient theory of transition in cognitive development. The method of theory construction employed is computer simulation of cognitive process. The core of the model of transition presented comprises self-modification processes that, as a result of continuously monitoring an exhaustive record of…

  7. Accurate quantum yields by laser gain vs absorption spectroscopy - Investigation of Br/Br(asterisk) channels in photofragmentation of Br2 and IBr

    NASA Technical Reports Server (NTRS)

    Haugen, H. K.; Weitz, E.; Leone, S. R.

    1985-01-01

    Various techniques have been used to study photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 percent to 3 percent. It is shown that measurement of the initial transient gain/absorption vs the final absorption in a single time-resolved signal is a very accurate technique in the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.

  8. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed-information case. With accurate information, travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
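
    A boundedly rational route-choice rule of this kind takes only a few lines; the travel times and threshold below are arbitrary illustrative values, not parameters of the simulated two-route system.

      import random

      def choose_route(reported_time_1, reported_time_2, br_threshold):
          # boundedly rational choice: if the advertised difference is below the
          # threshold BR, pick either route with equal probability; otherwise take
          # the route reported to be faster
          if abs(reported_time_1 - reported_time_2) < br_threshold:
              return random.choice((1, 2))
          return 1 if reported_time_1 < reported_time_2 else 2

      # with stale feedback of 12.0 vs 12.4 minutes and BR = 1.0, travelers split
      # roughly evenly instead of all flocking to route 1
      random.seed(0)
      print([choose_route(12.0, 12.4, 1.0) for _ in range(10)])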

  9. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
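
    As a much-reduced illustration of the finite-buffer idea, using a single birth-death process with arbitrary rates rather than ACME's coupled multi-finite buffers, the steady state of a truncated dCME can be obtained by solving the rate-matrix balance equations directly:

      import numpy as np

      def birth_death_steady_state(birth_rate, death_rate, buffer_size):
          # steady state of one birth-death process on the truncated state space
          # {0, ..., buffer_size}: build the rate (generator) matrix and solve
          # pi A = 0 together with sum(pi) = 1
          n = buffer_size + 1
          A = np.zeros((n, n))
          for i in range(n):
              if i < n - 1:                       # birth: i -> i + 1
                  A[i, i + 1] = birth_rate
                  A[i, i] -= birth_rate
              if i > 0:                           # death: i -> i - 1
                  A[i, i - 1] = death_rate * i
                  A[i, i] -= death_rate * i
          M = A.T.copy()
          M[-1, :] = 1.0                          # replace one equation by normalisation
          b = np.zeros(n)
          b[-1] = 1.0
          return np.linalg.solve(M, b)

      pi = birth_death_steady_state(birth_rate=5.0, death_rate=1.0, buffer_size=40)
      print(pi.argmax(), round(pi.sum(), 6))      # peak near 5 copies, total probability 1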

  10. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  11. Accurate chemical master equation solution using multi-finite buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  12. RICO: A NEW APPROACH FOR FAST AND ACCURATE REPRESENTATION OF THE COSMOLOGICAL RECOMBINATION HISTORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendt, W. A.; Wandelt, B. D.; Chluba, J.

    2009-04-15

    We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in approximately 10 ms, a speedup of at least 10⁶ over the full recombination code that was used here. Also RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.

  13. Psychometric properties of the Dutch version of the self-sufficiency matrix (SSM-D).

    PubMed

    Fassaert, Thijs; Lauriks, Steve; van de Weerd, Stef; Theunissen, Jan; Kikkert, Martijn; Dekker, Jack; Buster, Marcel; de Wit, Matty

    2014-07-01

    Measuring treatment outcomes can be challenging in patients who experience multiple interlinked problems, as is the case in public mental health care (PMHC). This study describes the development and psychometric properties of a Dutch version of the self-sufficiency matrix (SSM-D), an instrument that measures outcomes and originates from the US. In two different settings, clients were rated using the SSM-D in combination with the Health of the Nation Outcome Scales (HoNOS) and the Camberwell assessment of need short appraisal schedule (CANSAS). The results provided support for adequate psychometric properties of the SSM-D. The SSM-D had a solid single factor structure and internal consistency of the scale was excellent. In addition, convergent validity of the SSM-D was indicated by strong correlations between HoNOS and CANSAS, as well as between several subdomains. Further research is needed to establish whether the results presented here can be obtained in other PMHC settings.

  14. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR avoids the problems of long revisit times and low-resolution images. Because it can obtain abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is valuable for monitoring shallow ground deformation. To obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from the master and slave images, accurate co-registration must be guaranteed. Because of the side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes, and the mismatch is most obvious in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment, and the results show that the proposed method achieves superior co-registration accuracy.

  15. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straight-forward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  16. Is the Supply of Mathematics and Science Teachers Sufficient?

    ERIC Educational Resources Information Center

    Ingersoll, Richard M.; Perda, David

    2010-01-01

    This study seeks to empirically ground the debate over mathematics and science teacher shortages and evaluate the extent to which there is, or is not, sufficient supply of teachers in these fields. The authors' analyses of nationally representative data from multiple sources show that math and science are the fields most difficult to staff, but…

  17. AOP: An R Package For Sufficient Causal Analysis in Pathway ...

    EPA Pesticide Factsheets

    Summary: How can I quickly find the key events in a pathway that I need to monitor to predict that a/an beneficial/adverse event/outcome will occur? This is a key question when using signaling pathways for drug/chemical screening in pharmacology, toxicology and risk assessment. By identifying these sufficient causal key events, we have fewer events to monitor for a pathway, thereby decreasing assay costs and time, while maximizing the value of the information. I have developed the “aop” package which uses backdoor analysis of causal networks to identify these minimal sets of key events that are sufficient for making causal predictions. Availability and Implementation: The source and binary are available online through the Bioconductor project (http://www.bioconductor.org/) as an R package titled “aop”. The R/Bioconductor package runs within the R statistical environment. The package has functions that can take pathways (as directed graphs) formatted as a Cytoscape JSON file as input, or pathways can be represented as directed graphs using the R/Bioconductor “graph” package. The “aop” package has functions that can perform backdoor analysis to identify the minimal set of key events for making causal predictions. Contact: burgoon.lyle@epa.gov This paper describes an R/Bioconductor package that was developed to facilitate the identification of key events within an AOP that are the minimal set of sufficient key events that need to be tested/monit
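
    The released package is R/Bioconductor; as a rough, hypothetical Python analogue of the idea of a minimal sufficient set of key events, the sketch below uses a minimum node cut to find a smallest set of intermediate key events intercepting every directed path from the molecular initiating event to the adverse outcome. The toy pathway and node names are invented, and this is not the backdoor algorithm implemented in the "aop" package.

```python
# Illustrative Python analogue only (the released "aop" package is R/Bioconductor
# and uses backdoor analysis): find a smallest set of intermediate key events
# whose monitoring intercepts every directed path from the molecular initiating
# event (MIE) to the adverse outcome (AO). The toy pathway below is invented.
import networkx as nx

pathway = nx.DiGraph([
    ("MIE", "KE1"), ("MIE", "KE2"),
    ("KE1", "KE3"), ("KE2", "KE3"),
    ("KE3", "AO"),
])

# A minimum node cut separating MIE from AO gives a minimal set of key events
# that every causal path must pass through.
key_events = nx.minimum_node_cut(pathway, "MIE", "AO")
print(sorted(key_events))        # ['KE3'] for this toy graph
```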

  18. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning

    PubMed Central

    Silva, Susana F.; Domingues, José Paulo

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, improving greatly the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, providing that offline processing is allowed. PMID:29599938
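
    For reference, the textbook two-gate RLD estimator recovers the lifetime from two contiguous gates of equal width as tau = dt / ln(D0/D1); the sketch below applies it per pixel. The gate width and counts are illustrative, and the IRF deconvolution and HiLo processing described in the abstract are not included.

```python
# Textbook two-gate rapid lifetime determination (RLD): with two contiguous
# gates of equal width dt, tau = dt / ln(D0 / D1). Gate width and counts are
# illustrative; IRF deconvolution and HiLo processing are not included here.
import numpy as np

dt = 2.0e-9                                         # gate width in seconds (assumed)
D0 = np.array([[1200.0, 950.0], [800.0, 640.0]])    # first-gate counts per pixel
D1 = np.array([[480.0, 350.0], [310.0, 250.0]])     # second-gate counts per pixel

tau = dt / np.log(D0 / D1)                          # per-pixel lifetime map (seconds)
print(tau * 1e9)                                    # lifetimes in nanoseconds
```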

  19. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    PubMed

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, improving greatly the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, providing that offline processing is allowed.

  20. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  1. Accurate potential drop sheet resistance measurements of laser-doped areas in semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinrich, Martin, E-mail: mh.seris@gmail.com; NUS Graduate School for Integrative Science and Engineering, National University of Singapore, Singapore 117456; Kluska, Sven

    2014-10-07

    It is investigated how potential drop sheet resistance measurements of areas formed by laser-assisted doping in crystalline Si wafers are affected by typically occurring experimental factors like sample size, inhomogeneities, surface roughness, or coatings. Measurements are obtained with a collinear four point probe setup and a modified transfer length measurement setup to measure sheet resistances of laser-doped lines. Inhomogeneities in doping depth are observed from scanning electron microscope images and electron beam induced current measurements. It is observed that influences from sample size, inhomogeneities, surface roughness, and coatings can be neglected if certain preconditions are met. Guidelines are given on how to obtain accurate potential drop sheet resistance measurements on laser-doped regions.
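
    For orientation, the ideal collinear four-point-probe relation for a thin, laterally infinite sheet is R_s = (pi / ln 2) * V / I; the paper's point is precisely that finite sample size, inhomogeneities, roughness and coatings require preconditions or corrections beyond this ideal case. A minimal sketch with illustrative numbers:

```python
# Ideal collinear four-point-probe relation for a thin, laterally infinite
# sheet: R_s = (pi / ln 2) * V / I. The corrections for finite sample size,
# inhomogeneities, roughness and coatings discussed in the paper are not
# modeled here; numbers are illustrative.
import math

def sheet_resistance(voltage_V, current_A):
    """Sheet resistance (ohm/sq) for an ideal collinear 4PP geometry."""
    return (math.pi / math.log(2)) * voltage_V / current_A

print(sheet_resistance(voltage_V=4.5e-3, current_A=1.0e-3))   # ~20.4 ohm/sq
```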

  2. How Accurate Are Transition States from Simulations of Enzymatic Reactions?

    PubMed Central

    2015-01-01

    The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275

  3. Is self-sufficiency financially viable and ethically justifiable?--a commercial viewpoint.

    PubMed

    Christie, R B

    1994-12-01

    Manufacturers of blood products have to maintain the highest possible standards for plasma screening and good manufacturing practices to ensure maximum purity and viral safety. The private sector companies have much experience in implementing and complying with national and international regulations. These requirements involve considerable cost in the areas of (1) plasma collection facilities, (2) research and clinical research, (3) manufacture, and (4) quality control. Total self-sufficiency would mean the loss of many existing resources. An alternative would be a collaboration between the public and private sectors to meet the needs of all patients who require plasma derived products. The current definition of self-sufficiency suggests that it is not financially viable.

  4. Accurate Determination of the Frequency Response Function of Submerged and Confined Structures by Using PZT-Patches†.

    PubMed

    Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias

    2017-03-22

    To accurately determine the dynamic response of a structure is of relevant interest in many engineering applications. Particularly, it is of paramount importance to determine the Frequency Response Function (FRF) for structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems as reported in many cases in the past. The utilization of classical and calibrated exciters such as instrumented hammers or shakers to determine the FRF in such structures can be very complex due to the confinement of the structure and because their use can disturb the boundary conditions affecting the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, could be a very good option. Nevertheless, the main drawback of these exciters is that the calibration as dynamic force transducers (relationship voltage/force) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These parameters, which have been experimentally determined, are then introduced in a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally proves the best excitation characteristic to obtain also the damping ratios and proposes a procedure to fully determine the FRF. The method proposed here has been validated for the structure vibrating

  5. Seven Golden Rules for heuristic filtering of molecular formulas obtained by accurate mass spectrometry

    PubMed Central

    Kind, Tobias; Fiehn, Oliver

    2007-01-01

    Background Structure elucidation of unknown small molecules by mass spectrometry is a challenge despite advances in instrumentation. The first crucial step is to obtain correct elemental compositions. In order to automatically constrain the thousands of possible candidate structures, rules need to be developed to select the most likely and chemically correct molecular formulas. Results An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions for the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratio of nitrogen, oxygen, phosphor, and sulphur versus carbon, (6) element ratio probabilities and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the complement of all eight billion theoretically possible C, H, N, S, O, P-formulas up to 2000 Da to only 623 million most probable elemental compositions. Thirdly, 6,000 pharmaceutical, toxic and natural compounds were selected from DrugBank, TSCA and DNP databases. The correct formulas were retrieved as top hit at 80–99% probability when assuming data acquisition with complete resolution of unique compounds and 5% absolute isotope ratio deviation and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography-time of flight mass spectrometry. In each case, the correct formula was ranked as top hit when combining the seven rules with database queries. Conclusion The seven rules enable an
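
    As a toy illustration of how such heuristic filters act in practice, the sketch below applies element-count caps and a hydrogen/carbon ratio window to candidate formulas. The thresholds are indicative values only, not the exact published cut-offs of the Seven Golden Rules.

```python
# Toy illustration of heuristic formula filtering: apply element-count caps
# (in the spirit of rule 1) and a hydrogen/carbon ratio window (rule 4) to
# candidate formulas. Thresholds are indicative values, not the exact
# published cut-offs.
def passes_basic_rules(formula, hc_range=(0.2, 3.1)):
    max_counts = {"C": 78, "H": 126, "N": 20, "O": 27, "P": 9, "S": 14}
    c, h = formula.get("C", 0), formula.get("H", 0)
    if c == 0:
        return False
    if any(formula.get(el, 0) > cap for el, cap in max_counts.items()):
        return False                                   # element-count restriction
    return hc_range[0] <= h / c <= hc_range[1]         # H/C ratio window

candidates = [
    {"C": 6, "H": 12, "O": 6},     # glucose-like composition: plausible
    {"C": 2, "H": 50, "N": 10},    # implausible H/C ratio: rejected
]
print([passes_basic_rules(f) for f in candidates])     # [True, False]
```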

  6. An approach for accurate simulation of liquid mixing in a T-shaped micromixer.

    PubMed

    Matsunaga, Takuya; Lee, Ho-Joon; Nishino, Koichi

    2013-04-21

    In this paper, we propose a new computational method for efficient evaluation of the fluid mixing behaviour in a T-shaped micromixer with a rectangular cross section at high Schmidt number under steady state conditions. Our approach enables a low-cost high-quality simulation based on tracking of fluid particles for convective fluid mixing and posterior solving of a model of the species equation for molecular diffusion. The examined parameter range is Re = 1.33 × 10⁻² to 240 at Sc = 3600. The proposed method is shown to simulate well the mixing quality even in the engulfment regime, where the ordinary grid-based simulation is not able to obtain accurate solutions with affordable mesh sizes due to the numerical diffusion at high Sc. The obtained results agree well with a backward random-walk Monte Carlo simulation, by which the accuracy of the proposed method is verified. For further investigation of the characteristics of the proposed method, the Sc dependency is examined in a wide range of Sc from 10 to 3600 at Re = 200. The study reveals that the model discrepancy error emerges more significantly in the concentration distribution at lower Sc, while the resulting mixing quality is accurate over the entire range.

  7. An accurate automated technique for quasi-optics measurement of the microwave diagnostics for fusion plasma

    NASA Astrophysics Data System (ADS)

    Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan

    2017-08-01

    A new integrated technique for fast and accurate measurement of the quasi-optics, especially for the microwave/millimeter wave diagnostic systems of fusion plasma, has been developed. Using the LabVIEW-based comprehensive scanning system, we can realize not only automatic but also fast and accurate measurement, which will help to eliminate the effects of temperature drift and standing wave/multi-reflection. With the Matlab-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostic systems of the Experimental Advanced Superconducting Tokamak.
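
    The original work fits an asymmetric two-dimensional Gaussian in Matlab; a hedged Python equivalent on synthetic beam data might look as follows (beam parameters and grid are invented):

```python
# Hedged Python equivalent of an asymmetric 2D Gaussian beam fit (the original
# work used a Matlab-based routine). Beam parameters and the grid are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

x, y = np.meshgrid(np.linspace(-30, 30, 61), np.linspace(-30, 30, 61))
true_params = (1.0, 2.0, -1.0, 6.0, 9.0, 0.05)          # asymmetric beam (sx != sy)
data = gauss2d((x, y), *true_params) + 0.01 * np.random.randn(x.size)

popt, _ = curve_fit(gauss2d, (x, y), data, p0=(0.8, 0.0, 0.0, 5.0, 5.0, 0.0))
print(popt)    # recovered amplitude, centre, beam widths and offset
```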

  8. Susceptibility patterns for amoxicillin/clavulanate tests mimicking the licensed formulations and pharmacokinetic relationships: do the MIC obtained with 2:1 ratio testing accurately reflect activity against beta-lactamase-producing strains of Haemophilus influenzae and Moraxella catarrhalis?

    PubMed

    Pottumarthy, Sudha; Sader, Helio S; Fritsche, Thomas R; Jones, Ronald N

    2005-11-01

    Amoxicillin/clavulanate has recently undergone formulation changes (XR and ES-600) that represent 14:1 and 16:1 ratios of amoxicillin/clavulanate. These ratios greatly differ from the 2:1 ratio used in initial formulations and in vitro susceptibility testing. The objective of this study was to determine if the reference method using a 2:1 ratio accurately reflects the susceptibility to the various clinically used amoxicillin/clavulanate formulations and their respective serum concentration ratios. A collection of 330 Haemophilus influenzae strains (300 beta-lactamase-positive and 30 beta-lactamase-negative) and 40 Moraxella catarrhalis strains (30 beta-lactamase-positive and 10 beta-lactamase-negative) were tested by the broth microdilution method against eight amoxicillin/clavulanate combinations (4:1, 5:1, 7:1, 9:1, 14:1, and 16:1 ratios; 0.5 and 2 microg/mL fixed clavulanate concentrations) and the minimum inhibitory concentration (MIC) results were compared with those obtained with the reference 2:1 ratio testing. For the beta-lactamase-negative strains of both genera, there was no demonstrable change in the MIC values obtained for all ratios analyzed (2:1 to 16:1). For the beta-lactamase-positive strains of H. influenzae and M. catarrhalis, at ratios ≥4:1 there was a shift in the central tendency of the MIC scatterplot compared with the results of testing the 2:1 ratio. As a result, there was a 2-fold dilution increase in the MIC50 and MIC90 values, most evident for H. influenzae and BRO-1-producing M. catarrhalis strains. For beta-lactamase-positive strains of H. influenzae, the shift resulted in a change in the interpretive result for 3 isolates (1.0%) from susceptible using the reference method (2:1 ratio) to resistant (8/4 microg/mL; very major error) at the 16:1 ratio. In addition, the number of isolates with MIC values at or 1 dilution lower than the breakpoint (4/2 microg/mL) increased from 5% at the 2:1 ratio to 32-33% for ratios 14:1 and 16:1. Our

  9. Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation

    PubMed Central

    Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon

    2005-01-01

    Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725
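
    The cross-talk and gamma corrections described above are commonly written as subtracting donor leakage and acceptor direct excitation from the FRET channel before applying the gamma factor; a minimal sketch with illustrative (not calibrated) correction factors follows.

```python
# Sketch of a cross-talk-corrected FRET efficiency of the kind used in
# ALEX-style analyses: subtract donor leakage (leakage) and acceptor direct
# excitation (direct) from the FRET channel, then apply the gamma factor for
# detection-efficiency/quantum-yield differences. The correction factors and
# photon counts are illustrative, not calibrated values from the paper.
def corrected_fret(F_DexAem, F_DexDem, F_AexAem, leakage, direct, gamma):
    f_corr = F_DexAem - leakage * F_DexDem - direct * F_AexAem
    return f_corr / (f_corr + gamma * F_DexDem)

E = corrected_fret(F_DexAem=620, F_DexDem=480, F_AexAem=900,
                   leakage=0.07, direct=0.08, gamma=1.1)
print(round(E, 3))    # apparent FRET efficiency after corrections
```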

  10. A hybrid method for accurate star tracking using star sensor and gyros.

    PubMed

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
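
    A hedged sketch of the mode-switching logic described above, with an invented threshold and a small-angle star-vector rate estimator standing in for the paper's vector-difference method:

```python
# Hedged sketch of the mode-switching idea: under low dynamics (small estimated
# angular acceleration) take the angular rate from star-vector differences,
# otherwise fall back to the calibrated gyros. The threshold and the small-angle
# star-vector estimator are illustrative stand-ins, not the paper's algorithm.
import numpy as np

ACC_THRESHOLD = 0.05   # rad/s^2, assumed switching threshold

def rate_from_star_vectors(v_prev, v_curr, dt):
    """Small-angle angular-rate estimate from two unit star vectors (body frame)."""
    return np.cross(v_prev, v_curr) / dt

def select_angular_rate(omega_gyro, alpha_est, v_prev, v_curr, dt):
    if np.linalg.norm(alpha_est) < ACC_THRESHOLD:   # low dynamics: trust star sensor
        return rate_from_star_vectors(v_prev, v_curr, dt)
    return omega_gyro                               # high dynamics: use calibrated gyros

omega = select_angular_rate(omega_gyro=np.array([0.010, 0.000, 0.002]),
                            alpha_est=np.array([0.001, 0.000, 0.000]),
                            v_prev=np.array([0.000, 0.0, 1.0]),
                            v_curr=np.array([0.001, 0.0, 0.9999995]),
                            dt=0.1)
print(omega)
```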

  11. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
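
    For concreteness, a pairwise maximum entropy model assigns each binarized activity pattern a probability proportional to exp(sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j); the sketch below enumerates this distribution exactly for three regions with arbitrary (not fitted) parameters.

```python
# Pairwise maximum entropy model: P(sigma) is proportional to
# exp(sum_i h_i*sigma_i + sum_{i<j} J_ij*sigma_i*sigma_j) for binarized
# activity patterns sigma in {0,1}^N. Parameters below are arbitrary
# (not fitted to fMRI data); with only 3 regions the distribution can be
# normalized exactly by enumeration.
import itertools
import numpy as np

h = np.array([0.2, -0.1, 0.0])                # region-specific activity biases
J = np.array([[0.0, 0.8, -0.3],
              [0.8, 0.0, 0.5],
              [-0.3, 0.5, 0.0]])              # symmetric pairwise couplings

def energy(sigma):
    return -(h @ sigma + 0.5 * sigma @ J @ sigma)   # 0.5 because J counts each pair twice

states = [np.array(s) for s in itertools.product([0, 1], repeat=3)]
weights = np.array([np.exp(-energy(s)) for s in states])
probs = weights / weights.sum()
for s, p in zip(states, probs):
    print(s, round(float(p), 3))
```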

  12. Development of improved enzyme-based and lateral flow immunoassays for rapid and accurate serodiagnosis of canine brucellosis.

    PubMed

    Cortina, María E; Novak, Analía; Melli, Luciano J; Elena, Sebastián; Corbera, Natalia; Romero, Juan E; Nicola, Ana M; Ugalde, Juan E; Comerci, Diego J; Ciocchini, Andrés E

    2017-09-01

    Brucellosis is a widespread zoonotic disease caused by Brucella spp. Brucella canis is the etiological agent of canine brucellosis, a disease that can lead to sterility in bitches and dogs causing important economic losses in breeding kennels. Early and accurate diagnosis of canine brucellosis is central to control the disease and lower the risk of transmission to humans. Here, we develop and validate enzyme and lateral flow immunoassays for improved serodiagnosis of canine brucellosis using as antigen the B. canis rough lipopolysaccharide (rLPS). The method used to obtain the rLPS allowed us to produce more homogeneous batches of the antigen that facilitated the standardization of the assays. To validate the assays, 284 serum samples obtained from naturally infected dogs and healthy animals were analyzed. For the B. canis-iELISA and B. canis-LFIA the diagnostic sensitivity was of 98.6%, and the specificity 99.5% and 100%, respectively. We propose the implementation of the B. canis-LFIA as a screening test in combination with the highly accurate laboratory g-iELISA. The B. canis-LFIA is a rapid, accurate and easy to use test, characteristics that make it ideal for the serological surveillance of canine brucellosis in the field or veterinary laboratories. Finally, a blind study including 1040 serum samples obtained from urban dogs showed a prevalence higher than 5% highlighting the need of new diagnostic tools for a more effective control of the disease in dogs and therefore to reduce the risk of transmission of this zoonotic pathogen to humans. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Accurate sub-millimetre rest frequencies for HOCO+ and DOCO+ ions

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Lattanzi, V.; Laas, J.; Spezzano, S.; Giuliano, B. M.; Prudenzano, D.; Endres, C.; Sipilä, O.; Caselli, P.

    2017-06-01

    Context. HOCO+ is a polar molecule that represents a useful proxy for its parent molecule CO2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO2 and its deuterated variant, DOCO+, still lack an accurate spectroscopic characterisation. Aims: The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO+ and DOCO+ well into the sub-millimetre region. Methods: Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO+ and DOCO+ have been investigated in the 213-967 GHz frequency range; 94 new rotational transitions have been detected. Additionally, 46 line positions taken from the literature have been accurately remeasured. Results: The newly measured lines have significantly enlarged the available data sets for HOCO+ and DOCO+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis shows that all HOCO+ lines with Ka ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v5 = 1 vibrationally excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. Conclusions: The improved sets of spectroscopic parameters provide enhanced lists of very accurate sub-millimetre rest frequencies of HOCO+ and DOCO+ for astrophysical applications. These new data challenge a recent tentative identification of DOCO+ towards a pre-stellar core. Supplementary tables are only available at the CDS via anonymous ftp to http

  14. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  15. 75 FR 16493 - Announcement of Funding Awards for the Public and Indian Housing Family Self-Sufficiency Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ... Awards for the Public and Indian Housing Family Self-Sufficiency Program Under the Resident Opportunity... (NOFA) for the Public and Indian Housing Family Self-Sufficiency Program under the Resident Opportunity... Public and Indian Housing Family Self-Sufficiency Program under the Resident Opportunity and Self...

  16. Evaluation of New Reference Genes in Papaya for Accurate Transcript Normalization under Different Experimental Conditions

    PubMed Central

    Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen

    2012-01-01

    Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification in gene expression studies. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants but none in papaya. In the present work, we selected 21 candidate reference genes, and evaluated their expression stability in 246 papaya fruit samples using three algorithms, geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that different suitable reference gene(s) or combination of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) had a good performance under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase) were not suitable in many experimental conditions. In addition, two commonly used programs, geNorm and Normfinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions. PMID

  17. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolution - 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper, we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude compared to the IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
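
    A generic thin-plate-spline surface fit of scattered TEC observations onto a regular grid can be sketched with SciPy's RBF interpolator as below; the pierce-point data are synthetic and this is not the UWM-rt1 processing chain.

```python
# Generic thin-plate-spline surface fit (not the UWM-rt1 processing chain):
# smooth synthetic VTEC observations at scattered pierce points onto a regular
# 0.2 x 0.2 degree latitude/longitude grid using SciPy's RBF interpolator.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pierce_points = rng.uniform([45.0, 5.0], [60.0, 25.0], size=(200, 2))   # lat, lon
vtec = 10 + 0.3 * pierce_points[:, 0] + rng.normal(0.0, 0.5, 200)       # TECU

tps = RBFInterpolator(pierce_points, vtec,
                      kernel="thin_plate_spline", smoothing=1.0)

lat, lon = np.meshgrid(np.arange(45.0, 60.01, 0.2), np.arange(5.0, 25.01, 0.2))
grid = np.column_stack([lat.ravel(), lon.ravel()])
tec_map = tps(grid).reshape(lat.shape)      # gridded TEC map in TECU
```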

  18. 77 FR 14540 - Announcement of Funding Awards for the Public and Indian Housing Family Self-Sufficiency Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-12

    ... Awards for the Public and Indian Housing Family Self-Sufficiency Program Under the Resident Opportunity... Funding Availability (NOFA) for the Public and Indian Housing Family Self-Sufficiency Program under the... amounts of the 238 awards made under the Public and Indian Housing Family Self-Sufficiency Program under...

  19. Accurate O2 delivery enabled benzene biodegradation through aerobic activation followed by denitrification-coupled mineralization.

    PubMed

    Liu, Zhuolin; Zhou, Chen; Ontiveros-Valencia, Aura; Luo, Yi-Hao; Long, Min; Xu, Hua; Rittmann, Bruce E

    2018-04-28

    Although benzene can be biodegraded when dissolved oxygen is sufficient, delivering oxygen is energy intensive and can lead to air stripping the benzene. Anaerobes can biodegrade benzene by using electron acceptors other than O2, and this may reduce costs and exposure risks; the drawback is a remarkably slower growth rate. We evaluated a two-step strategy that involved O2-dependent benzene activation and cleavage followed by intermediate oxidation coupled to NO3- respiration. We employed a membrane biofilm reactor (MBfR) featuring nonporous hollow fibers as the means to deliver O2 directly to a biofilm at an accurately controlled rate. Benzene was mineralized aerobically when the O2-supply rate was more than sufficient for mineralization. As the O2-supply capacity was systematically lowered, O2 respiration was gradually replaced by NO3- respiration. When the maximum O2-supply capacity was only 20% of the demand for benzene mineralization, O2 was used almost exclusively for benzene activation and cleavage, while respiration was almost only by denitrification. Analyses of microbial community structure and predicted metagenomic function reveal that Burkholderiales was dominant and probably utilized monooxygenase activation, with subsequent mineralization coupled to denitrification; strict anaerobes capable of carboxylative activation were not detected. These results open the door for a promising treatment strategy that simultaneously ameliorates technical and economic challenges of aeration and slow kinetics of anaerobic activation of aromatics. © 2018 Wiley Periodicals, Inc.

  20. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
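
    One concrete member of the Pade-type compact family discussed above is the Numerov scheme for the 1D Helmholtz equation u'' + k^2 u = 0, which attains fourth-order accuracy on a uniform grid with a three-point stencil; a minimal sketch (uniform grid, constant wavenumber, Dirichlet data assumed) follows.

```python
# Numerov-type (Pade/compact) fourth-order scheme for u'' + k^2 u = 0 on a
# uniform grid, one concrete member of the compact-stencil family discussed
# above. Constant wavenumber and Dirichlet boundary data (exact solution
# sin(kx)) are assumptions of this sketch.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

k, n = 20.0, 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
kh2 = (k * h) ** 2

off = 1.0 + kh2 / 12.0            # coefficient of u_{j-1} and u_{j+1}
diag = -2.0 + 10.0 * kh2 / 12.0   # coefficient of u_j

m = n - 2                         # interior unknowns (Dirichlet boundaries)
A = sp.diags([off * np.ones(m - 1), diag * np.ones(m), off * np.ones(m - 1)],
             [-1, 0, 1], format="csc")
rhs = np.zeros(m)
rhs[0] -= off * np.sin(k * x[0])      # fold boundary values into the right-hand side
rhs[-1] -= off * np.sin(k * x[-1])

u = spla.spsolve(A, rhs)
print(np.max(np.abs(u - np.sin(k * x[1:-1]))))   # small error against the exact solution
```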

  1. A time-accurate finite volume method valid at all flow velocities

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1993-01-01

    A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. Comparisons of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) schemes, are made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and it exhibits an undesirable strong dependence on the time-step size. The degenerated numerical results obtained using the PISO are attributed to its second corrector step that causes the numerical results to deviate further from a divergence free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method that incorporates the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on the Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil

  2. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages.

  3. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, but the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
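
    For context, commonly used size metrics are the effective diameter derived from the cross-sectional area and the attenuation-weighted (water-equivalent) diameter; the formulas below follow the standard AAPM-style definitions and are not necessarily the exact expressions of this paper.

```python
# Standard size metrics in the spirit of size-specific dose estimates (AAPM-style
# definitions), not necessarily the exact expressions of the paper above:
# effective diameter from the cross-sectional area, and water-equivalent
# diameter weighting that area by the mean CT number inside the patient contour.
import math

def effective_diameter(area_cm2):
    """Diameter of a circle with the same cross-sectional area (cm)."""
    return 2.0 * math.sqrt(area_cm2 / math.pi)

def water_equivalent_diameter(mean_hu, area_cm2):
    """Attenuation-weighted diameter: scales the area by mean HU relative to water."""
    return 2.0 * math.sqrt((mean_hu / 1000.0 + 1.0) * area_cm2 / math.pi)

print(effective_diameter(area_cm2=615.0))                          # ~28 cm abdomen
print(water_equivalent_diameter(mean_hu=-80.0, area_cm2=615.0))    # slightly smaller
```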

  4. Obtaining accurate glucose measurements from wild animals under field conditions: comparing a hand held glucometer with a standard laboratory technique in grey seals

    PubMed Central

    Turner, Lucy M.; Millward, Sebastian; Moss, Simon E. W.; Hall, Ailsa J.

    2017-01-01

    Abstract Glucose is an important metabolic fuel and circulating levels are tightly regulated in most mammals, but can drop when body fuel reserves become critically low. Glucose is mobilized rapidly from liver and muscle during stress in response to increased circulating cortisol. Blood glucose levels can thus be of value in conservation as an indicator of nutritional status and may be a useful, rapid assessment marker for acute or chronic stress. However, seals show unusual glucose regulation: circulating levels are high and insulin sensitivity is limited. Accurate blood glucose measurement is therefore vital to enable meaningful health and physiological assessments in captive, wild or rehabilitated seals and to explore its utility as a marker of conservation relevance in these animals. Point-of-care devices are simple, portable, relatively cheap and use less blood compared with traditional sampling approaches, making them useful in conservation-related monitoring. We investigated the accuracy of a hand-held glucometer for ‘instant’ field measurement of blood glucose, compared with blood drawing followed by laboratory testing, in wild grey seals (Halichoerus grypus), a species used as an indicator for Good Environmental Status in European waters. The glucometer showed high precision, but low accuracy, relative to laboratory measurements, and was least accurate at extreme values. It did not provide a reliable alternative to plasma analysis. Poor correlation between methods may be due to suboptimal field conditions, greater and more variable haematocrit, faster erythrocyte settling rate and/or lipaemia in seals. Glucometers must therefore be rigorously tested before use in new species and demographic groups. Sampling, processing and glucose determination methods have major implications for conclusions regarding glucose regulation, and health assessment in seals generally, which is important in species of conservation concern and in development of circulating

  5. Staple Food Self-Sufficiency of Farmers Household Level in The Great Solo

    NASA Astrophysics Data System (ADS)

    Darsono

    2017-04-01

    Analysis of food security at the household level is a novel measurement approach, as standard measurements usually cover only the regional and national levels. The household approach is expected to provide a basis for sharper food policy formulation. The purposes of this study are to identify the condition of self-sufficiency in staple foods and to find the main factors affecting the dynamics of self-sufficiency in staple foods at the farm household level in Great Solo. The study uses primary data from a sample of 50 farmers and secondary data for Great Solo (Surakarta city, Boyolali, Sukoharjo, Karanganyar, Wonogiri, Sragen and Klaten). The compiled panel data were analyzed with linear probability regression models. The results show that farm households in Great Solo have a surplus of the staple food (rice), with an average consumption rate of 96.8 kg/capita/year. This figure is lower than the national rate of 136.7 kg/capita/year. The main factors affecting the level of food self-sufficiency at the farm household level are rice production, rice consumption, land tenure, and the number of family members. The key recommendations from this study are to increase the scale of land cultivated for rice farming and to diversify consumption away from rice.

  6. An Elegant Sufficiency: Load-Aware Differentiated Scheduling of Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kettimuthu, Rajkumar; Vardoyan, Gayane; Agrawal, Gagan

    2015-11-15

    We investigate the file transfer scheduling problem, where transfers among different endpoints must be scheduled to maximize pertinent metrics. We propose two new algorithms that exploit the fact that the aggregate bandwidth obtained over a network or at a storage system tends to increase with the number of concurrent transfers—but only up to a certain limit. The first algorithm, SEAL, uses runtime information and data-driven models to approximate system load and adapt transfer schedules and concurrency so as to maximize performance while avoiding saturation. We implement this algorithm using GridFTP as the transfer protocol and evaluate it using real transfer logs in a production WAN environment. Results show that SEAL can improve average slowdowns and turnaround times by up to 25% and worst-case slowdown and turnaround times by up to 50%, compared with the best-performing baseline scheme. Our second algorithm, STEAL, further leverages user-supplied categorization of transfers as either “interactive” (requiring immediate processing) or “batch” (less time-critical). Results show that STEAL reduces the average slowdown of interactive transfers by 63% compared to the best-performing baseline and by 21% compared to SEAL. For batch transfers, compared to the best-performing baseline, STEAL improves by 18% the utilization of the bandwidth unused by interactive transfers. By elegantly ensuring a sufficient, but not excessive, allocation of concurrency to the right transfers, we significantly improve overall performance despite constraints.

  7. Accurate Determination of the Frequency Response Function of Submerged and Confined Structures by Using PZT-Patches †

    PubMed Central

    Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias

    2017-01-01

    To accurately determine the dynamic response of a structure is of relevant interest in many engineering applications. Particularly, it is of paramount importance to determine the Frequency Response Function (FRF) for structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems as reported in many cases in the past. The utilization of classical and calibrated exciters such as instrumented hammers or shakers to determine the FRF in such structures can be very complex due to the confinement of the structure and because their use can disturb the boundary conditions affecting the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, could be a very good option. Nevertheless, the main drawback of these exciters is that the calibration as dynamic force transducers (relationship voltage/force) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These parameters, which have been experimentally determined, are then introduced in a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally proves the best excitation characteristic to obtain also the damping ratios and proposes a procedure to fully determine the FRF. The method proposed here has been validated for the structure vibrating

  8. Nicotine Activation of α4* Receptors: Sufficient for Reward, Tolerance, and Sensitization

    NASA Astrophysics Data System (ADS)

    Tapper, Andrew R.; McKinney, Sheri L.; Nashmi, Raad; Schwarz, Johannes; Deshpande, Purnima; Labarca, Cesar; Whiteaker, Paul; Marks, Michael J.; Collins, Allan C.; Lester, Henry A.

    2004-11-01

    The identity of nicotinic receptor subtypes sufficient to elicit both the acute and chronic effects of nicotine dependence is unknown. We engineered mutant mice with α4 nicotinic subunits containing a single point mutation, Leu9' --> Ala9' in the pore-forming M2 domain, rendering α4* receptors hypersensitive to nicotine. Selective activation of α4* nicotinic acetylcholine receptors with low doses of agonist recapitulates nicotine effects thought to be important in dependence, including reinforcement in response to acute nicotine administration, as well as tolerance and sensitization elicited by chronic nicotine administration. These data indicate that activation of α4* receptors is sufficient for nicotine-induced reward, tolerance, and sensitization.

  9. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    PubMed Central

    Moore, Michael J; Dhingra, Amit; Soltis, Pamela S; Shaw, Regina; Farmerie, William G; Folta, Kevin M; Soltis, Douglas E

    2006-01-01

    Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy observed in the GS 20 plastid

  10. The Wagner-Nelson method can generate an accurate gastric emptying flow curve from CO2 data obtained by a 13C-labeled substrate breath test.

    PubMed

    Sanaka, Masaki; Yamamoto, Takatsugu; Ishii, Tarou; Kuyama, Yasushi

    2004-01-01

    In pharmacokinetics, the Wagner-Nelson (W-N) method can accurately estimate the rate of drug absorption from its urinary elimination rate. A stable isotope (13C) breath test attempts to estimate the rate of absorption of 13C, as an index of gastric emptying rate, from the rate of pulmonary elimination of 13CO2. The time-gastric emptying curve determined by the breath test is quite different from that determined by scintigraphy or ultrasonography. In this report, we have shown that the W-N method can adjust the difference. The W-N equation to estimate gastric emptying from breath data is as follows: the fractional cumulative amount of gastric contents emptied by time t = A_breath(t)/A_breath(∞) + (1/0.65)·d[A_breath(t)/A_breath(∞)]/dt, where A_breath(t) = the cumulative recovery of 13CO2 in breath by time t and A_breath(∞) = the ultimate cumulative 13CO2 recovery. The emptying flow curve generated by ultrasonography was compared with that generated by the W-N method-adjusted breath test in 6 volunteers. The emptying curves by the W-N method were almost identical to those by ultrasound. The W-N method can generate an accurate emptying flow curve from 13CO2 data, and it can adjust the difference between ultrasonography and the breath test. Copyright 2004 S. Karger AG, Basel
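
    The quoted W-N expression can be evaluated directly from sampled breath data; the sketch below does so with a numerical derivative, using illustrative recovery values rather than patient data.

```python
# Direct numerical evaluation of the quoted W-N expression,
# F(t) = A(t)/A(inf) + (1/0.65) * d[A(t)/A(inf)]/dt, with A(t) the cumulative
# 13CO2 recovery. The breath-test values below are illustrative, not patient data.
import numpy as np

t_h = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0])               # hours
cum_recovery = np.array([0.0, 2.0, 6.0, 11.0, 16.0, 24.0, 30.0, 37.0, 40.0])  # % dose
A_inf = 42.0                                   # assumed ultimate cumulative recovery

frac = cum_recovery / A_inf
emptied = frac + (1.0 / 0.65) * np.gradient(frac, t_h)   # fractional amount emptied
print(np.clip(emptied, 0.0, 1.0))
```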

  11. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    PubMed

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  12. Leadership, the Logic of Sufficiency and the Sustainability of Education

    ERIC Educational Resources Information Center

    Bottery, Mike

    2012-01-01

    The notion of sufficiency has not yet entered mainstream educational thinking, and it still has to make its mark upon educational leadership. However, a number of related concepts--particularly those of sustainability and complexity theory--are beginning to be noticed. This article examines these two concepts and uses them to critique the…

  13. Helping Low-Income Mothers with Criminal Records Achieve Self-Sufficiency.

    ERIC Educational Resources Information Center

    Brown, Rebecca

    2000-01-01

    This issue of WIN (Welfare Information Network) Issue Notes focuses on helping low-income mothers with criminal records achieve self-sufficiency. Section 1 offers background. Section 2 answers these policy questions: why states might want to focus on serving low-income mothers with criminal records; how states can encourage employers to hire…

  14. Resource Utilization and Site Selection for a Self-Sufficient Martian Outpost

    NASA Technical Reports Server (NTRS)

    Barker, Donald; Chamitoff, Gregory; James, George

    1998-01-01

    As a planet with striking similarities to Earth, Mars is an important focus for scientific research aimed at understanding the processes of planetary evolution and the formation of our solar system. Fortunately, Mars is also a planet with abundant natural resources, including accessible materials that can be used to support human life and to sustain a self-sufficient martian outpost. Resources required include water, breathable air, food, shelter, energy, and fuel. Through a mission design based on in situ resource development, we can establish a permanent outpost on Mars beginning with the first manned mission. This paper examines the potential for supporting the first manned mission with the objective of achieving self-sufficiency through well-understood resource development and a program of rigorous scientific research aimed at extending that capability. We examine the potential for initially extracting critical resources from the martian environment, and discuss the scientific investigations required to identify additional resources in the atmosphere, on the surface, and within the subsurface. We also discuss our current state of knowledge of Mars, technical considerations of resource utilization, and using unmanned missions' data for selecting an optimal site. The primary goal of achieving self-sufficiency on Mars would accelerate the development of human colonization beyond Earth, while providing a robust and permanent martian base from which humans can explore and conduct long-term research on planetary evolution, the solar system, and life itself.

  15. Accurate mode characterization of two-mode optical fibers by in-fiber acousto-optics.

    PubMed

    Alcusa-Sáez, E; Díez, A; Andrés, M V

    2016-03-07

    Acousto-optic interaction in optical fibers is exploited for the accurate and broadband characterization of two-mode optical fibers. Coupling between LP01 and LP1m modes is produced in a broadband wavelength range. Differences in effective indices, group indices, and chromatic dispersion between the guided modes are obtained from experimental measurements. Additionally, we show that the technique is suitable to investigate the fine mode structure of LP modes, and some other intriguing features related to the modes' cut-off.

  16. Simple and accurate sum rules for highly relativistic systems

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2005-03-01

    In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.

  17. Accurate HLA type inference using a weighted similarity graph.

    PubMed

    Xie, Minzhu; Li, Jing; Jiang, Tao

    2010-12-14

    The human leukocyte antigen system (HLA) contains many highly variable genes. HLA genes play an important role in the human immune system, and HLA gene matching is crucial for the success of human organ transplantations. Numerous studies have demonstrated that variation in HLA genes is associated with many autoimmune, inflammatory and infectious diseases. However, typing HLA genes by serology or PCR is time consuming and expensive, which limits large-scale studies involving HLA genes. Since it is much easier and cheaper to obtain single nucleotide polymorphism (SNP) genotype data, accurate computational algorithms to infer HLA gene types from SNP genotype data are needed. To infer HLA types from SNP genotypes, the first step is to infer SNP haplotypes from genotypes. However, for the same SNP genotype data set, the haplotype configurations inferred by different methods are usually inconsistent, and it is often difficult to decide which one is true. In this paper, we design an accurate HLA gene type inference algorithm by utilizing SNP genotype data from pedigrees, known HLA gene types of some individuals and the relationship between inferred SNP haplotypes and HLA gene types. Given a set of haplotypes inferred from the genotypes of a population consisting of many pedigrees, the algorithm first constructs a weighted similarity graph based on a new haplotype similarity measure and derives constraint edges from known HLA gene types. Based on the principle that different HLA gene alleles should have different background haplotypes, the algorithm searches for an optimal labeling of all the haplotypes with unknown HLA gene types such that the total weight among the same HLA gene types is maximized. To deal with ambiguous haplotype solutions, we use a genetic algorithm to select haplotype configurations that tend to maximize the same optimization criterion. Our experiments on a previously typed subset of the HapMap data show that the algorithm is highly accurate

  18. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  19. Assessing sufficiency of thermal riverscapes for resilient salmon and steelhead populations

    EPA Science Inventory

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific location...

  20. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
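
    The screening-plus-sparse-storage idea mentioned above can be illustrated with a minimal sketch: negligible matrix elements are dropped, the remainder is stored in SciPy sparse format, and the lowest eigenvalues are extracted iteratively. The function and the toy matrix below are hypothetical and do not reproduce PyVCI's actual interfaces or Hamiltonian construction.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigsh

def screen_and_diagonalize(h_dense, threshold=1e-8, n_states=10):
    """Illustrative sketch (not PyVCI's API): drop negligible matrix elements,
    store the remainder in sparse format, and extract the lowest eigenvalues
    with an iterative solver."""
    rows, cols = np.nonzero(np.abs(h_dense) >= threshold)
    vals = h_dense[rows, cols]
    h_sparse = coo_matrix((vals, (rows, cols)), shape=h_dense.shape).tocsr()
    # 'SA' = smallest algebraic eigenvalues, i.e. the lowest levels
    energies, _ = eigsh(h_sparse, k=n_states, which="SA")
    return np.sort(energies)

# Toy symmetric matrix standing in for a real VCI Hamiltonian
rng = np.random.default_rng(0)
a = rng.normal(scale=1e-3, size=(400, 400))
h = np.diag(np.arange(400, dtype=float)) + (a + a.T) / 2
print(screen_and_diagonalize(h, threshold=1e-3, n_states=5))
```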

  1. STATEQ: a nonlinear least-squares code for obtaining Martin thermodynamic representations of fluids in the gaseous and dense gaseous regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milora, S. L.

    1976-02-01

    The use of the code NLIN (IBM Share Program No. 1428) to obtain empirical thermodynamic pressure-volume-temperature (P-V-T) relationships for substances in the gaseous and dense gaseous states is described. When sufficient experimental data exist, the code STATEQ will provide least-squares estimates for the 21 parameters of the Martin model. Another code, APPROX, is described which also obtains parameter estimates for the model by making use of the approximate generalized behavior of fluids. Use of the codes is illustrated in obtaining thermodynamic representations for isobutane. (auth)
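
    The least-squares step can be sketched as follows with SciPy; a simple two-parameter virial-like form stands in for the 21-parameter Martin model, and the P-V-T data are synthetic, so this is only an illustration of the fitting procedure, not of STATEQ itself.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J mol^-1 K^-1

def simple_eos(X, b, c):
    """Stand-in P(V, T): a two-term virial-like form, *not* the 21-parameter
    Martin model, used only to illustrate the least-squares step."""
    V, T = X
    return R * T / V * (1.0 + b / V + c / V**2)

# Hypothetical P-V-T observations for a dense gas (units: Pa, m^3/mol, K)
V = np.tile(np.linspace(2e-4, 1e-3, 20), 3)
T = np.repeat([300.0, 350.0, 400.0], 20)
noise = 1 + 0.002 * np.random.default_rng(1).normal(size=V.size)
P_obs = simple_eos((V, T), -1.1e-4, 9.0e-9) * noise

params, cov = curve_fit(simple_eos, (V, T), P_obs, p0=(-1e-4, 1e-8))
print("fitted b, c:", params)
print("std errors :", np.sqrt(np.diag(cov)))
```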

  2. Trueness and precision of digital impressions obtained using an intraoral scanner with different head size in the partially edentulous mandible.

    PubMed

    Hayama, Hironari; Fueki, Kenji; Wadachi, Juro; Wakabayashi, Noriyuki

    2018-03-01

    It remains unclear whether digital impressions obtained using an intraoral scanner are sufficiently accurate for use in fabrication of removable partial dentures. We therefore compared the trueness and precision between conventional and digital impressions in the partially edentulous mandible. Mandibular Kennedy Class I and III models with soft silicone simulated-mucosa placed on the residual edentulous ridge were used. The reference models were converted to standard triangulated language (STL) file format using an extraoral scanner. Digital impressions were obtained using an intraoral scanner with a large or small scanning head, and converted to STL files. For conventional impressions, pressure impressions of the reference models were made and working casts fabricated using modified dental stone; these were converted to STL file format using an extraoral scanner. Conversion to STL file format was performed 5 times for each method. Trueness and precision were evaluated by deviation analysis using three-dimensional image processing software. Digital impressions had superior trueness (54-108μm), but inferior precision (100-121μm) compared to conventional impressions (trueness 122-157μm, precision 52-119μm). The larger intraoral scanning head showed better trueness and precision than the smaller head, and on average required fewer scanned images of digital impressions than the smaller head (p<0.05). On the color map, the deviation distribution tended to differ between the conventional and digital impressions. Digital impressions are partially comparable to conventional impressions in terms of accuracy; the use of a larger scanning head may improve the accuracy for removable partial denture fabrication. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
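
    A minimal sketch of the deviation analysis is given below, assuming the point-wise deviations have already been extracted from aligned 3D scans (the study itself used dedicated three-dimensional image processing software); trueness is taken as the mean absolute deviation from the reference and precision as the mean absolute pairwise deviation between repeated scans. The names and numbers are illustrative only.

```python
import numpy as np
from itertools import combinations

def trueness(scan_deviations):
    """Mean absolute point-wise deviation of one scan from the reference model."""
    return np.mean(np.abs(scan_deviations))

def precision(scans):
    """Mean absolute pairwise deviation among repeated scans of the same model,
    evaluated point by point (all scans assumed aligned on a common grid)."""
    pair_means = [np.mean(np.abs(a - b)) for a, b in combinations(scans, 2)]
    return np.mean(pair_means)

# Five hypothetical repeated digital impressions, as deviation maps in micrometres
rng = np.random.default_rng(2)
scans = [80 + 15 * rng.normal(size=5000) for _ in range(5)]  # offset = systematic error
print("trueness per scan:", [round(trueness(s), 1) for s in scans])
print("precision        :", round(precision(scans), 1))
```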

  3. 42 CFR 110.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Sufficient documentation for eligibility and benefits determinations. 110.72 Section 110.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES COUNTERMEASURES INJURY COMPENSATION PROGRAM Secretarial Determinations...

  4. 42 CFR 110.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Sufficient documentation for eligibility and benefits determinations. 110.72 Section 110.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES COUNTERMEASURES INJURY COMPENSATION PROGRAM Secretarial Determinations...

  5. 42 CFR 110.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Sufficient documentation for eligibility and benefits determinations. 110.72 Section 110.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES COUNTERMEASURES INJURY COMPENSATION PROGRAM Secretarial Determinations...

  6. 42 CFR 110.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Sufficient documentation for eligibility and benefits determinations. 110.72 Section 110.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES COUNTERMEASURES INJURY COMPENSATION PROGRAM Secretarial Determinations...

  7. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... In the Matter of Accurate NDE & Docket: 150-00017, General Inspection, LLC Broussard, Louisiana... an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28, 2011, the NRC and Accurate NDE...

  8. Differences in psychosocial responses to pain between sufficiently and insufficiently active adults with arthritis.

    PubMed

    Cary, Miranda A; Brittain, Danielle R; Gyurcsik, Nancy C

    2017-07-01

    Adults with arthritis struggle to meet the physical activity recommendation for disease self-management. Identifying psychosocial factors that differentiate adults who meet (sufficiently active) or do not meet (insufficiently active) the recommendation is needed. This study sought to examine differences in psychosocial responses to arthritis pain among adults who were sufficiently or insufficiently active. This prospective study included adults with medically diagnosed arthritis (N = 136, mean age = 49.75 ± 13.88 years) who completed two online surveys: (1) baseline: pain and psychosocial responses to pain and (2) two weeks later: physical activity. Psychosocial responses examined in this study were psychological flexibility in response to pain, pain anxiety and maladaptive responses to pain anxiety. A between-groups MANCOVA comparing sufficiently active (n = 87) to insufficiently active (n = 49) participants on psychosocial responses, after controlling for pain intensity, was significant (p = .005). Follow-up ANOVAs revealed that sufficiently active participants reported significantly higher psychological flexibility and used maladaptive responses less often compared to insufficiently active participants (p's < .05). These findings provide preliminary insight into the psychosocial profile of adults at risk for nonadherence due to their responses to arthritis pain.

  9. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  10. Accuracy of electron densities obtained via Koopmans-compliant hybrid functionals

    NASA Astrophysics Data System (ADS)

    Elmaslmane, A. R.; Wetherell, J.; Hodgson, M. J. P.; McKenna, K. P.; Godby, R. W.

    2018-04-01

    We evaluate the accuracy of electron densities and quasiparticle energy gaps given by hybrid functionals by directly comparing these to the exact quantities obtained from solving the many-electron Schrödinger equation. We determine the admixture of Hartree-Fock exchange to approximate exchange-correlation in our hybrid functional via one of several physically justified constraints, including the generalized Koopmans' theorem. We find that hybrid functionals yield strikingly accurate electron densities and gaps in both exchange-dominated and correlated systems. We also discuss the role of the screened Fock operator in the success of hybrid functionals.

  11. 42 CFR 102.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Sufficient documentation for eligibility and benefits determinations. 102.72 Section 102.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES SMALLPOX COMPENSATION PROGRAM Secretarial Determinations § 102.72...

  12. 42 CFR 102.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Sufficient documentation for eligibility and benefits determinations. 102.72 Section 102.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES SMALLPOX COMPENSATION PROGRAM Secretarial Determinations § 102.72...

  13. 42 CFR 102.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Sufficient documentation for eligibility and benefits determinations. 102.72 Section 102.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES SMALLPOX COMPENSATION PROGRAM Secretarial Determinations § 102.72...

  14. 42 CFR 102.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Sufficient documentation for eligibility and benefits determinations. 102.72 Section 102.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES SMALLPOX COMPENSATION PROGRAM Secretarial Determinations § 102.72...

  15. 42 CFR 102.72 - Sufficient documentation for eligibility and benefits determinations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Sufficient documentation for eligibility and benefits determinations. 102.72 Section 102.72 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES VACCINES SMALLPOX COMPENSATION PROGRAM Secretarial Determinations § 102.72...

  16. Cyclin D2 is sufficient to drive β cell self-renewal and regeneration.

    PubMed

    Tschen, Shuen-Ing; Zeng, Chun; Field, Loren; Dhawan, Sangeeta; Bhushan, Anil; Georgia, Senta

    2017-01-01

    Diabetes results from an inadequate mass of functional β cells, due to either β cell loss caused by autoimmune destruction (type I diabetes) or β cell failure in response to insulin resistance (type II diabetes). Elucidating the mechanisms that regulate β cell mass may be key to developing new techniques that foster β cell regeneration as a cellular therapy to treat diabetes. While previous studies concluded that cyclin D2 is required for postnatal β cell self-renewal in mice, it is not clear if cyclin D2 is sufficient to drive β cell self-renewal. Using transgenic mice that overexpress cyclin D2 specifically in β cells, we show that cyclin D2 overexpression increases β cell self-renewal post-weaning and results in increased β cell mass. β cells that overexpress cyclin D2 are responsive to glucose stimulation, suggesting they are functionally mature. β cells that overexpress cyclin D2 demonstrate an enhanced regenerative capacity after injury induced by streptozotocin toxicity. To understand if cyclin D2 overexpression is sufficient to drive β cell self-renewal, we generated a novel mouse model where cyclin D2 is only expressed in β cells of cyclin D2 -/- mice. Transgenic overexpression of cyclin D2 in cyclin D2 -/- β cells was sufficient to restore β cell mass, maintain normoglycaemia, and improve regenerative capacity when compared with cyclin D2 -/- littermates. Taken together, our results indicate that cyclin D2 is sufficient to regulate β cell self-renewal and that manipulation of its expression could be used to enhance β cell regeneration.

  17. The Utility of Maze Accurate Response Rate in Assessing Reading Comprehension in Upper Elementary and Middle School Students

    ERIC Educational Resources Information Center

    McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric

    2014-01-01

    This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…

  18. Use of an inertial navigation system for accurate track recovery and coastal oceanographic measurements

    NASA Technical Reports Server (NTRS)

    Oliver, B. M.; Gower, J. F. R.

    1977-01-01

    A data acquisition system using a Litton LTN-51 inertial navigation unit (INU) was tested and used for aircraft track recovery and for location and tracking from the air of targets at sea. The characteristic position drift of the INU is compensated for by sighting landmarks of accurately known position at discrete time intervals using a visual sighting system in the transparent nose of the Beechcraft 18 aircraft used. For an aircraft altitude of about 300 m, theoretical and experimental tests indicate that calculated aircraft and/or target positions obtained from the interpolated INU drift curve will be accurate to within 10 m for landmarks spaced approximately every 15 minutes in time. For applications in coastal oceanography, such as surface current mapping by tracking artificial targets, the system allows a broad area to be covered without use of high altitude photography and its attendant needs for large targets and clear weather.
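
    The drift-compensation idea described above, estimating the INU position error at each landmark fix and interpolating it over the rest of the track, can be sketched as below. The linear interpolation, the sampling interval and the drift magnitude are assumptions made for illustration; the original system's processing details are not reproduced here.

```python
import numpy as np

def correct_inu_track(t, inu_pos, fix_times, fix_true_pos):
    """Sketch of INU drift correction: at each landmark fix the drift
    (INU position minus known landmark position) is sampled, then linearly
    interpolated over time and subtracted from the whole track."""
    t = np.asarray(t, dtype=float)
    inu_pos = np.asarray(inu_pos, dtype=float)         # shape (N, 2): x, y in metres
    inu_at_fix = np.array([inu_pos[np.argmin(np.abs(t - tf))] for tf in fix_times])
    drift_at_fix = inu_at_fix - np.asarray(fix_true_pos, dtype=float)
    drift = np.column_stack([np.interp(t, fix_times, drift_at_fix[:, k]) for k in (0, 1)])
    return inu_pos - drift

# Hypothetical 1-hour track sampled every 10 s, with a slowly growing drift
t = np.arange(0, 3601, 10.0)
true_track = np.column_stack([5.0 * t, 2.0 * t])
drift = np.column_stack([0.3 * t, -0.2 * t])            # illustrative drift rate
fixes = np.array([0.0, 900.0, 1800.0, 2700.0, 3600.0])  # landmark sightings every 15 min
corrected = correct_inu_track(t, true_track + drift, fixes, true_track[np.isin(t, fixes)])
print(np.max(np.abs(corrected - true_track)))           # residual error after correction
```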

  19. Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany

    This presentation provides an overview of new and ongoing NREL research that aims to improve our understanding of reliability and revenue sufficiency challenges through modeling tools within a markets framework.

  20. The use of the Dutch Self-Sufficiency Matrix (SSM-D) to inform allocation decisions to public mental health care for homeless people.

    PubMed

    Lauriks, Steve; de Wit, Matty A S; Buster, Marcel C A; Fassaert, Thijs J L; van Wifferen, Ron; Klazinga, Niek S

    2014-10-01

    The current study set out to develop a decision support tool based on the Self-Sufficiency Matrix (Dutch version; SSM-D) for the clinical decision to allocate homeless people to the public mental health care system at the central access point of public mental health care in Amsterdam, The Netherlands. Logistic regression and receiver operating characteristic-curve analyses were used to model professional decisions and establish four decision categories based on SSM-D scores from half of the research population (Total n = 612). The model and decision categories were found to be accurate and reliable in predicting professional decisions in the second half of the population. Results indicate that the decision support tool based on the SSM-D is useful and feasible. The method to develop the SSM-D as a decision support tool could be applied to decision-making processes in other systems and services where the SSM-D has been implemented, to further increase the utility of the instrument.
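
    A hedged sketch of the modelling step, a logistic regression on SSM-D scores followed by probability cut-offs that define decision categories, is shown below using scikit-learn. The score range, the cut-off values and the category labels are hypothetical and are not those derived in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical training half: SSM-D total scores and professionals' yes/no allocations
rng = np.random.default_rng(3)
ssm_total = rng.integers(11, 56, size=306).reshape(-1, 1)        # 11 domains, scores 1-5
p_true = 1.0 / (1.0 + np.exp(0.25 * (ssm_total.ravel() - 33)))   # lower score -> allocate
allocated = rng.random(306) < p_true

model = LogisticRegression().fit(ssm_total, allocated)
auc = roc_auc_score(allocated, model.predict_proba(ssm_total)[:, 1])
print("AUC on training half:", round(auc, 3))

def decision_category(score, model, cuts=(0.2, 0.5, 0.8)):
    """Map a predicted allocation probability onto four illustrative categories."""
    p = model.predict_proba([[score]])[0, 1]
    labels = ["do not allocate", "probably not", "probably allocate", "allocate"]
    return labels[int(np.searchsorted(cuts, p))]

print(decision_category(20, model), "/", decision_category(50, model))
```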

  1. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way, and it is of significant faster convergence rate.

  2. Is there sufficient evidence for tuning fork tests in diagnosing fractures? A systematic review.

    PubMed

    Mugunthan, Kayalvili; Doust, Jenny; Kurz, Bodo; Glasziou, Paul

    2014-08-04

    To determine the diagnostic accuracy of tuning fork tests for detecting fractures. Systematic review of primary studies evaluating the diagnostic accuracy of tuning fork tests for the presence of fracture. We searched MEDLINE, CINAHL, AMED, EMBASE, Sports Discus, CAB Abstracts and Web of Science from commencement to November 2012. We manually searched the reference lists of any review papers and any identified relevant studies. Two reviewers independently reviewed the list of potentially eligible studies and rated the studies for quality using the QUADAS-2 tool. Data were extracted to form 2×2 contingency tables. The primary outcome measure was the accuracy of the test as measured by its sensitivity and specificity with 95% CIs. We included six studies (329 patients), with two types of tuning fork tests (pain induction and loss of sound transmission). The studies included patients with an age range 7-60 years. The prevalence of fracture ranged from 10% to 80%. The sensitivity of the tuning fork tests was high, ranging from 75% to 100%. The specificity of the tests was highly heterogeneous, ranging from 18% to 95%. Based on the studies in this review, tuning fork tests have some value in ruling out fractures, but are not sufficiently reliable or accurate for widespread clinical use. The small sample size of the studies and the observed heterogeneity make generalisable conclusions difficult. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
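
    For reference, the sketch below shows how sensitivity and specificity with 95% confidence intervals follow from one 2×2 contingency table, using a simple Wald interval; the counts are made up and do not correspond to any study in the review.

```python
import math

def sens_spec_with_ci(tp, fn, fp, tn, z=1.96):
    """Sensitivity and specificity from a 2x2 table, with Wald 95% CIs.
    (A Wilson interval would behave better for small counts.)"""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn), "specificity": prop_ci(tn, tn + fp)}

# Hypothetical tuning-fork-versus-imaging table from one study (not real data)
print(sens_spec_with_ci(tp=24, fn=3, fp=18, tn=30))
```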

  3. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  4. Programs To Create Economic Self-Sufficiency for Women in Public Housing.

    ERIC Educational Resources Information Center

    Smith, Cynthia; DeTardo-Bora, Kimberly; Durbin, Latrisha

    The Wheeling Housing Authority in Wheeling, West Virginia, conducted two residential programs to help women living in public housing develop economic self-sufficiency. The Learning Independence from Employment (LIFE) program was an intensive 3-week program designed to accomplish the following objectives: improve participants' communication skills…

  5. Sufficient oxygen for animal respiration 1,400 million years ago

    PubMed Central

    Zhang, Shuichang; Wang, Xiaomei; Wang, Huajian; Bjerrum, Christian J.; Hammarlund, Emma U.; Costa, M. Mafalda; Connelly, James N.; Zhang, Baomin; Su, Jin; Canfield, Donald E.

    2016-01-01

    The Mesoproterozoic Eon [1,600–1,000 million years ago (Ma)] is emerging as a key interval in Earth history, with a unique geochemical history that might have influenced the course of biological evolution on Earth. Indeed, although this time interval is rather poorly understood, recent chromium isotope results suggest that atmospheric oxygen levels were <0.1% of present levels, sufficiently low to have inhibited the evolution of animal life. In contrast, using a different approach, we explore the distribution and enrichments of redox-sensitive trace metals in the 1,400 Ma sediments of Unit 3 of the Xiamaling Formation, North China Block. Patterns of trace metal enrichments reveal oxygenated bottom waters during deposition of the sediments, and biomarker results demonstrate the presence of green sulfur bacteria in the water column. Thus, we document an ancient oxygen minimum zone. We develop a simple, yet comprehensive, model of marine carbon−oxygen cycle dynamics to show that our geochemical results are consistent with atmospheric oxygen levels >4% of present-day levels. Therefore, in contrast to previous suggestions, we show that there was sufficient oxygen to fuel animal respiration long before the evolution of animals themselves. PMID:26729865

  6. Quantum Markov chains, sufficiency of quantum channels, and Rényi information measures

    NASA Astrophysics Data System (ADS)

    Datta, Nilanjana; Wilde, Mark M.

    2015-12-01

    A short quantum Markov chain is a tripartite state ρ_ABC such that system A can be recovered perfectly by acting on system C of the reduced state ρ_BC. Such states have conditional mutual information I(A;B|C) equal to zero and are the only states with this property. A quantum channel N is sufficient for two states ρ and σ if there exists a recovery channel using which one can perfectly recover ρ from N(ρ) and σ from N(σ). The relative entropy difference D(ρ‖σ) - D(N(ρ)‖N(σ)) is equal to zero if and only if N is sufficient for ρ and σ. In this paper, we show that these properties extend to Rényi generalizations of these information measures which were proposed in (Berta et al 2015 J. Math. Phys. 56 022205; Seshadreesan et al 2015 J. Phys. A: Math. Theor. 48 395303), thus providing an alternate characterization of short quantum Markov chains and sufficient quantum channels. These results give further support to these quantities as being legitimate Rényi generalizations of the conditional mutual information and the relative entropy difference. Along the way, we solve some open questions of Ruskai and Zhang, regarding the trace of particular matrices that arise in the study of monotonicity of relative entropy under quantum operations and strong subadditivity of the von Neumann entropy.

  7. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data of PEF tests performed by the patients in their own homes and places of work. Unfortunately there are high error rates in data produced and recorded by the patient, most of these are transcription errors and some patients falsify their records. The PEF measurement itself is not effort independent, the data produced depending on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities a greater understanding was obtained of the limitations of current methods of measurement, and quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. Accurate Acoustic Thermometry I: The Triple Point of Gallium

    NASA Astrophysics Data System (ADS)

    Moldover, M. R.; Trusler, J. P. M.

    1988-01-01

    The speed of sound in argon has been accurately measured in the pressure range 25-380 kPa at the temperature of the triple point of gallium (Tg) and at 340 kPa at the temperature of the triple point of water (Tt). The results are combined with previously published thermodynamic and transport property data to obtain Tg = (302.9169 +/- 0.0005) K on the thermodynamic scale. Among recent determinations of T68 (the temperature on IPTS-68) at the gallium triple point, those with the smallest measurement uncertainty fall in the range 302.923 71 to 302.923 98 K. We conclude that T-T68 = (-6.9 +/- 0.5) mK near 303 K, in agreement with results obtained from other primary thermometers. The speed of sound was measured with a spherical resonator. The volume and thermal expansion of the resonator were determined by weighing the mercury required to fill it at Tt and Tg. The largest part of the standard error in the present determination of Tg is systematic. It results from imperfect knowledge of the thermal expansion of mercury between Tt and Tg. Smaller parts of the error result from imperfections in the measurement of the temperature of the resonator and of the resonance frequencies.

  9. Post-Disaster Food and Nutrition from Urban Agriculture: A Self-Sufficiency Analysis of Nerima Ward, Tokyo.

    PubMed

    Sioen, Giles Bruno; Sekiyama, Makiko; Terada, Toru; Yokohari, Makoto

    2017-07-10

    Background: Post-earthquake studies from around the world have reported that survivors relying on emergency food for prolonged periods of time experienced several dietary related health problems. The present study aimed to quantify the potential nutrient production of urban agricultural vegetables and the resulting nutritional self-sufficiency throughout the year for mitigating post-disaster situations. Methods: We estimated the vegetable production of urban agriculture throughout the year. Two methods were developed to capture the production from professional and hobby farms: Method I utilized secondary governmental data on agricultural production from professional farms, and Method II was based on a supplementary spatial analysis to estimate the production from hobby farms. Next, the weight of produced vegetables [t] was converted into nutrients [kg]. Furthermore, the self-sufficiency by nutrient and time of year was estimated by incorporating the reference consumption of vegetables [kg], recommended dietary allowance of nutrients per capita [mg], and population statistics. The research was conducted in Nerima, the second most populous ward of Tokyo's 23 special wards. Self-sufficiency rates were calculated with the registered residents. Results: The estimated total vegetable production of 5660 tons was equivalent to a weight-based self-sufficiency rate of 6.18%. The average nutritional self-sufficiencies of Methods I and II were 2.48% and 0.38%, respectively, resulting in an aggregated average of 2.86%. Fluctuations throughout the year were observed according to the harvest seasons of the available crops. Vitamin K (6.15%) had the highest self-sufficiency of selected nutrients, while calcium had the lowest (0.96%). Conclusions: This study suggests that depending on the time of year, urban agriculture has the potential to contribute nutrients to diets during post-disaster situations as disaster preparedness food. Emergency responses should be targeted
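
    The self-sufficiency calculation combines produced nutrient mass, the recommended dietary allowance per capita and the registered population; a minimal sketch of that arithmetic for a single nutrient over one year is given below, with entirely hypothetical figures.

```python
def nutrient_self_sufficiency(produced_kg, rda_mg_per_day, population, days=365):
    """Share of the population's annual recommended intake of one nutrient
    that the estimated urban-agriculture production could cover."""
    required_kg = rda_mg_per_day * population * days / 1e6   # mg -> kg
    return 100.0 * produced_kg / required_kg

# Hypothetical figures for a single nutrient in a ward of ~730,000 residents
print(round(nutrient_self_sufficiency(produced_kg=1200.0,
                                      rda_mg_per_day=150.0,
                                      population=730_000), 2), "%")
```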

  10. Post-Disaster Food and Nutrition from Urban Agriculture: A Self-Sufficiency Analysis of Nerima Ward, Tokyo

    PubMed Central

    Sekiyama, Makiko; Terada, Toru; Yokohari, Makoto

    2017-01-01

    Background: Post-earthquake studies from around the world have reported that survivors relying on emergency food for prolonged periods of time experienced several dietary related health problems. The present study aimed to quantify the potential nutrient production of urban agricultural vegetables and the resulting nutritional self-sufficiency throughout the year for mitigating post-disaster situations. Methods: We estimated the vegetable production of urban agriculture throughout the year. Two methods were developed to capture the production from professional and hobby farms: Method I utilized secondary governmental data on agricultural production from professional farms, and Method II was based on a supplementary spatial analysis to estimate the production from hobby farms. Next, the weight of produced vegetables [t] was converted into nutrients [kg]. Furthermore, the self-sufficiency by nutrient and time of year was estimated by incorporating the reference consumption of vegetables [kg], recommended dietary allowance of nutrients per capita [mg], and population statistics. The research was conducted in Nerima, the second most populous ward of Tokyo’s 23 special wards. Self-sufficiency rates were calculated with the registered residents. Results: The estimated total vegetable production of 5660 tons was equivalent to a weight-based self-sufficiency rate of 6.18%. The average nutritional self-sufficiencies of Methods I and II were 2.48% and 0.38%, respectively, resulting in an aggregated average of 2.86%. Fluctuations throughout the year were observed according to the harvest seasons of the available crops. Vitamin K (6.15%) had the highest self-sufficiency of selected nutrients, while calcium had the lowest (0.96%). Conclusions: This study suggests that depending on the time of year, urban agriculture has the potential to contribute nutrients to diets during post-disaster situations as disaster preparedness food. Emergency responses should be targeted according

  11. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.

    2015-02-01

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.

  12. Cyclin D2 is sufficient to drive β cell self-renewal and regeneration

    PubMed Central

    2017-01-01

    ABSTRACT Diabetes results from an inadequate mass of functional β cells, due to either β cell loss caused by autoimmune destruction (type I diabetes) or β cell failure in response to insulin resistance (type II diabetes). Elucidating the mechanisms that regulate β cell mass may be key to developing new techniques that foster β cell regeneration as a cellular therapy to treat diabetes. While previous studies concluded that cyclin D2 is required for postnatal β cell self-renewal in mice, it is not clear if cyclin D2 is sufficient to drive β cell self-renewal. Using transgenic mice that overexpress cyclin D2 specifically in β cells, we show that cyclin D2 overexpression increases β cell self-renewal post-weaning and results in increased β cell mass. β cells that overexpress cyclin D2 are responsive to glucose stimulation, suggesting they are functionally mature. β cells that overexpress cyclin D2 demonstrate an enhanced regenerative capacity after injury induced by streptozotocin toxicity. To understand if cyclin D2 overexpression is sufficient to drive β cell self-renewal, we generated a novel mouse model where cyclin D2 is only expressed in β cells of cyclin D2−/− mice. Transgenic overexpression of cyclin D2 in cyclin D2−/− β cells was sufficient to restore β cell mass, maintain normoglycaemia, and improve regenerative capacity when compared with cyclin D2−/− littermates. Taken together, our results indicate that cyclin D2 is sufficient to regulate β cell self-renewal and that manipulation of its expression could be used to enhance β cell regeneration. PMID:28763258

  13. Preparation of a PM2.5-like reference material in sufficient quantities for accurate monitoring of anions and cations in fine atmospheric dust.

    PubMed

    Charoud-Got, Jean; Emma, Giovanni; Seghers, John; Tumba-Tshilumba, Marie-France; Santoro, Anna; Held, Andrea; Snell, James; Emteborg, Håkan

    2017-12-01

    A reference material of a PM2.5-like atmospheric dust material has been prepared using a newly developed method. It is intended to certify values for the mass fraction of SO4^2-, NO3^-, Cl^- (anions) and Na^+, K^+, NH4^+, Ca^2+, Mg^2+ (cations) in this material. A successful route for the preparation of the candidate reference material is described alongside two alternative approaches that were abandoned. First, a PM10-like suspension was allowed to stand for 72 h. Next, 90% of the volume was siphoned off. The suspension was spiked with appropriate levels of the desired ions just prior to drop-wise shock-freezing in liquid nitrogen. Finally, freeze drying of the resulting ice kernels took place. In using this approach, it was possible to produce about 500 g of PM2.5-like material with appropriate characteristics. Fine dust in 150-mg portions was filled into vials under an inert atmosphere. The final candidate material approaches the EN12341 standard of a PM2.5 material containing the ions mentioned in Directive 2008/50/EC of the European Union. The material should be analysed using the CEN/TR 16269:2011 method for anions and cations in PM2.5 collected on filters. The method described here is a relatively rapid means to obtain large quantities of PM2.5. With access to smaller freeze dryers, still 5 to 10 g per freeze-drying cycle can be obtained. Access to such quantities of PM2.5-like material could potentially be useful for different kinds of experiments when performing research in this field. Graphical abstract: The novelty of the method lies in the transformation of a suspension with fine particulate matter into a homogeneous and stable powder with characteristics similar to air-sampled PM2.5. The high material yield in a relatively short time is a distinct advantage in comparison with collection of air-sampled PM2.5.

  14. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
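
    The first stage, growing an ensemble until in-sample fitness stops improving, can be sketched in a simplified form as below, with plain majority-vote training accuracy standing in for the paper's fitness function; this is an illustration of the idea, not the MSEBAG algorithm itself.

```python
import numpy as np

def minimum_sufficient_ensemble(base_preds, y):
    """Greedy sketch: add base classifiers (columns of 0/1 predictions) one at a
    time, using majority-vote training accuracy as the fitness, and stop once no
    remaining classifier improves it. This is a simplification of the paper's
    'minimum-sufficient ensemble' search, not its exact algorithm."""
    chosen, best_acc = [], -1.0
    remaining = list(range(base_preds.shape[1]))
    while remaining:
        accs = []
        for j in remaining:
            vote = np.mean(base_preds[:, chosen + [j]], axis=1) >= 0.5
            accs.append(np.mean(vote == y))
        j_best = remaining[int(np.argmax(accs))]
        if max(accs) <= best_acc:
            break
        best_acc = max(accs)
        chosen.append(j_best)
        remaining.remove(j_best)
    return chosen, best_acc

# Toy pool of 15 weak 0/1 classifiers evaluated on 200 training labels
rng = np.random.default_rng(4)
y = rng.integers(0, 2, 200)
pool = np.array([np.where(rng.random(200) < 0.7, y, 1 - y) for _ in range(15)]).T
print(minimum_sufficient_ensemble(pool, y))
```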

  15. Evaluation of marginal and internal gaps of metal ceramic crowns obtained from conventional impressions and casting techniques with those obtained from digital techniques.

    PubMed

    Rai, Rathika; Kumar, S Arun; Prabhu, R; Govindan, Ranjani Thillai; Tanveer, Faiz Mohamed

    2017-01-01

    Accuracy in fit of cast metal restoration has always remained as one of the primary factors in determining the success of the restoration. A well-fitting restoration needs to be accurate both along its margin and with regard to its internal surface. The aim of the study is to evaluate the marginal fit of metal ceramic crowns obtained by conventional inlay casting wax pattern using conventional impression with the metal ceramic crowns obtained by computer-aided design and computer-aided manufacturing (CAD/CAM) technique using direct and indirect optical scanning. This in vitro study on preformed custom-made stainless steel models with former assembly that resembles prepared tooth surfaces of standardized dimensions comprised three groups: the first group included ten samples of metal ceramic crowns fabricated with conventional technique, the second group included CAD/CAM-milled direct metal laser sintering (DMLS) crowns using indirect scanning, and the third group included DMLS crowns fabricated by direct scanning of the stainless steel model. The vertical marginal gap and the internal gap were evaluated with the stereomicroscope (Zoomstar 4); post hoc Tukey's test was used for statistical analysis. One-way analysis of variance method was used to compare the mean values. Metal ceramic crowns obtained from direct optical scanning showed the least marginal and internal gap when compared to the castings obtained from inlay casting wax and indirect optical scanning. Indirect and direct optical scanning had yielded results within clinically acceptable range.

  16. Accurate age determinations of several nearby open clusters containing magnetic Ap stars

    NASA Astrophysics Data System (ADS)

    Silaj, J.; Landstreet, J. D.

    2014-06-01

    Context. To study the time evolution of magnetic fields, chemical abundance peculiarities, and other characteristics of magnetic Ap and Bp stars during their main sequence lives, a sample of these stars in open clusters has been obtained, as such stars can be assumed to have the same ages as the clusters to which they belong. However, in exploring age determinations in the literature, we find a large dispersion among different age determinations, even for bright, nearby clusters. Aims: Our aim is to obtain ages that are as accurate as possible for the seven nearby open clusters α Per, Coma Ber, IC 2602, NGC 2232, NGC 2451A, NGC 2516, and NGC 6475, each of which contains at least one magnetic Ap or Bp star. Simultaneously, we test the current calibrations of Te and luminosity for the Ap/Bp star members, and identify clearly blue stragglers in the clusters studied. Methods: We explore the possibility that isochrone fitting in the theoretical Hertzsprung-Russell diagram (i.e. log (L/L⊙) vs. log Te), rather than in the conventional colour-magnitude diagram, can provide more precise and accurate cluster ages, with well-defined uncertainties. Results: Well-defined ages are found for all the clusters studied. For the nearby clusters studied, the derived ages are not very sensitive to the small uncertainties in distance, reddening, membership, metallicity, or choice of isochrones. Our age determinations are all within the range of previously determined values, but the associated uncertainties are considerably smaller than the spread in recent age determinations from the literature. Furthermore, examination of proper motions and HR diagrams confirms that the Ap stars identified in these clusters are members, and that the presently accepted temperature scale and bolometric corrections for Ap stars are approximately correct. We show that in these theoretical HR diagrams blue stragglers are particularly easy to identify. Conclusions: Constructing the theoretical HR diagram

  17. Accurate, noninvasive continuous monitoring of cardiac output by whole-body electrical bioimpedance.

    PubMed

    Cotter, Gad; Moshkovitz, Yaron; Kaluski, Edo; Cohen, Amram J; Miller, Hilton; Goor, Daniel; Vered, Zvi

    2004-04-01

    Cardiac output (CO) is measured only sparingly due to limitations in its measurement technique (ie, right-heart catheterization). Yet, in recent years it has been suggested that CO may be of value in the diagnosis, risk stratification, and treatment titration of cardiac patients, especially those with congestive heart failure (CHF). We examine the use of a new noninvasive, continuous whole-body bioimpedance system (NICaS; NI Medical; Hod-Hasharon, Israel) for measuring CO. The aim of the present study was to test the validity of this noninvasive cardiac output system/monitor (NICO) in a cohort of cardiac patients. Prospective, double-blind comparison of the NICO and thermodilution CO determinations. We enrolled 122 patients in three different groups: during cardiac catheterization (n = 40); before, during, and after coronary bypass surgery (n = 51); and while being treated for acute congestive heart failure (CHF) exacerbation (n = 31). MEASUREMENTS AND INTERVENTION: In all patients, CO measurements were obtained by two independent blinded operators. CO was measured by both techniques three times, and an average was determined for each time point. CO was measured at one time point in patients undergoing coronary catheterization; before, during, and after bypass surgery in patients undergoing coronary bypass surgery; and before and during vasodilator treatment in patients treated for acute heart failure. Overall, 418 paired CO measurements were obtained. The overall correlation between the NICO cardiac index (CI) and the thermodilution CI was r = 0.886, with a small bias (0.0009 +/- 0.684 L) [mean +/- 2 SD], and this finding was consistent within each group of patients. Thermodilution readings were 15% higher than NICO when CI was < 1.5 L/min/m(2), and 5% lower than NICO when CI was > 3 L/min/m(2). The NICO has also accurately detected CI changes during coronary bypass operation and vasodilator administration for acute CHF. The results of the present study indicate
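
    The agreement statistics reported above (correlation plus a bias with mean ± 2 SD limits) correspond to a Bland-Altman-style analysis, sketched below on synthetic paired readings; the data and the magnitude of the noise are hypothetical.

```python
import numpy as np

def agreement_summary(ci_a, ci_b):
    """Correlation plus Bland-Altman-style bias and limits of agreement
    (mean difference +/- 2 SD) between two sets of paired CI readings."""
    ci_a, ci_b = np.asarray(ci_a, float), np.asarray(ci_b, float)
    diff = ci_a - ci_b
    r = np.corrcoef(ci_a, ci_b)[0, 1]
    return {"r": round(r, 3),
            "bias": round(diff.mean(), 3),
            "limits": (round(diff.mean() - 2 * diff.std(ddof=1), 3),
                       round(diff.mean() + 2 * diff.std(ddof=1), 3))}

# Hypothetical paired cardiac-index measurements (L/min/m^2)
rng = np.random.default_rng(5)
thermo = rng.uniform(1.2, 4.5, 100)
nico = thermo + rng.normal(0.0, 0.3, 100)
print(agreement_summary(nico, thermo))
```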

  18. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak to valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  19. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-07-01

    Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS) whose angular extent is 0.266° ± 1.7 %, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates if the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5 %, a relative bias of +1 % and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is presented as a Two Term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and a bias of respectively 22 and -19 % and a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.

  20. System to measure accurate temperature dependence of electric conductivity down to 20 K in ultrahigh vacuum.

    PubMed

    Sakai, C; Takeda, S N; Daimon, H

    2013-07-01

    We have developed a new in situ electrical-conductivity measurement system that can be operated in ultrahigh vacuum (UHV) with accurate temperature measurement down to 20 K. This system is mainly composed of a new sample-holder fixing mechanism, a new movable conductivity-measurement mechanism, a cryostat, and two receptors for the sample and four-probe holders. The sample-holder is pressed strongly against the receptor, which is connected to the cryostat, by the new fixing mechanism to obtain high thermal conductivity. Test pieces on the sample-holders have been cooled down to about 20 K using this fixing mechanism, whereas they reached only about 60 K without it. Four probes can be brought into contact with the sample surface using the new movable conductivity-measurement mechanism, allowing electrical conductivity to be measured after films are deposited on substrates or after clean surfaces are obtained by cleavage, flashing, and so on. Accurate temperature measurement is possible since the sample can be transferred with a thermocouple and/or diode attached directly to it. A single crystal of a Bi-based copper oxide high-Tc superconductor (HTSC) was cleaved in UHV to obtain a clean surface, and its superconducting critical temperature was successfully measured in situ. The importance of in situ resistance measurement in UHV was demonstrated for this HTSC before and after cesium (Cs) adsorption on its surface: an increase in the Tc onset and a decrease in the Tc offset upon Cs adsorption were observed.

  1. Is self-reported height or arm span a more accurate alternative measure of height?

    PubMed

    Brown, Jean K; Feng, Jui-Ying; Knapp, Thomas R

    2002-11-01

    The purpose of this study was to determine whether self-reported height or arm span is the more accurate alternative measure of height. A sample of 409 people between the ages of 19 and 67 (M = 35.0) participated in this anthropometric study. Height, self-reported height, and arm span were measured by 82 nursing research students. Mean differences from criterion measures were 0.17 cm for the measuring rules, 0.47 cm for arm span, and 0.85 cm and 0.87 cm for heights. Test-retest reliability was r = .997 for both height and arm span. The relationships of height to self-reported height and arm span were r = .97 and .90, respectively. Mean absolute differences were 1.80 cm and 4.29 cm, respectively. These findings support the practice of using self-reported height as an alternative measure of measured height in clinical settings, but arm span is an accurate alternative when neither measured height nor self-reported height is obtainable.

  2. Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?

    PubMed

    Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    With the worldwide growth of mobile wireless technologies, healthcare services can be provided anytime and anywhere. Usage of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure, e.g., heart activity and wirelessly transfer the data to the patient's mobile phone. One of the significant restrictions for these devices is energy consumption, which calls for low sampling rates. This article investigates the lowest sampling frequency of the ECG signal that still yields sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then restored by interpolating the signals back to their original sampling rate. The HRV parameters obtained from the ECG signals at the lower sampling rates were compared. The results show that even when the ECG sampling rate is as low as 50 Hz, the HRV parameters remain reasonably accurate.
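
    A minimal sketch of the core question, under the assumption that the dominant effect of a low sampling rate is the quantization of R-peak times to the sample grid (the beat series below is synthetic, and the study's actual pipeline down-samples and re-interpolates the full ECG waveform before beat detection):

    ```python
    import numpy as np

    def hrv_time_domain(rr_ms):
        """Basic time-domain HRV parameters from RR intervals given in milliseconds."""
        sdnn = np.std(rr_ms, ddof=1)                      # SDNN
        rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))     # RMSSD
        return sdnn, rmssd

    rng = np.random.default_rng(1)
    rr_true = 1.0 + 0.05 * rng.standard_normal(300)       # synthetic RR series (s), ~60 bpm
    beats = np.cumsum(rr_true)                            # "true" R-peak times (s)

    for fs in (5000, 250, 50):
        # Emulate an ECG sampled at fs Hz by quantizing each R-peak time to the
        # sample grid; waveform interpolation, as in the study, reduces this error.
        beats_q = np.round(beats * fs) / fs
        rr_q = np.diff(beats_q) * 1000.0                  # RR intervals in ms
        sdnn, rmssd = hrv_time_domain(rr_q)
        print(f"fs = {fs:4d} Hz: SDNN = {sdnn:.2f} ms, RMSSD = {rmssd:.2f} ms")
    ```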

  3. Masses of the components of SB2 binaries observed with Gaia - IV. Accurate SB2 orbits for 14 binaries and masses of three binaries*

    NASA Astrophysics Data System (ADS)

    Kiefer, F.; Halbwachs, J.-L.; Lebreton, Y.; Soubiran, C.; Arenou, F.; Pourbaix, D.; Famaey, B.; Guillout, P.; Ibata, R.; Mazeh, T.

    2018-02-01

    The orbital motion of non-contact double-lined spectroscopic binaries (SB2s), with periods of a few tens of days to several years, holds unique, accurate information on individual stellar masses, which only long-term monitoring can unlock. The combination of radial velocity measurements from high-resolution spectrographs and astrometric measurements from high-precision interferometers allows the derivation of SB2 component masses down to the percent precision. Since 2010, we have observed a large sample of SB2s with the SOPHIE spectrograph at the Observatoire de Haute-Provence, aiming at the derivation of orbital elements with sufficient accuracy to obtain masses of components with relative errors as low as 1 per cent when the astrometric measurements of the Gaia satellite are taken into account. In this paper, we present the results from 6 yr of observations of 14 SB2 systems with periods ranging from 33 to 4185 days. Using the TODMOR algorithm, we computed radial velocities from the spectra and then derived the orbital elements of these binary systems. The minimum masses of the 28 stellar components are then obtained with an average sample accuracy of 1.0 ± 0.2 per cent. Combining the radial velocities with existing interferometric measurements, we derived the masses of the primary and secondary components of HIP 61100, HIP 95995 and HIP 101382 with relative errors for components (A,B) of, respectively, (2.0, 1.7) per cent, (3.7, 3.7) per cent and (0.2, 0.1) per cent. Using the CESAM2K stellar evolution code, we constrained the initial He abundance, age and metallicity for HIP 61100 and HIP 95995.

  4. Quantitative LC-MS of polymers: determining accurate molecular weight distributions by combined size exclusion chromatography and electrospray mass spectrometry with maximum entropy data processing.

    PubMed

    Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher

    2008-09-15

    We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.

  5. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    -order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented creating a basis for a state-of-the-art aerodynamic analysis tool.

  6. An Accurate Method for Measuring Airplane-Borne Conformal Antenna's Radar Cross Section

    NASA Astrophysics Data System (ADS)

    Guo, Shuxia; Zhang, Lei; Wang, Yafeng; Hu, Chufeng

    2016-09-01

    The airplane-borne conformal antenna conforms tightly to the airplane skin, so conventional measurement methods cannot determine its individual contribution to the radar cross section (RCS). This paper uses 2D microwave imaging to isolate and extract the distribution of the reflectivity of the airplane-borne conformal antenna. The 2D spatial spectrum of the conformal antenna is obtained through the wave-spectral transform relating the 2D spatial image and the 2D spatial spectrum. After interpolation from the rectangular to the polar coordinate domain, spectral-domain data describing the variation of the antenna's scattering with frequency and angle are obtained. The experimental results show that the proposed measurement method greatly enhances the accuracy of the airplane-borne conformal antenna's RCS measurement, essentially eliminates the influence of the airplane skin, and more accurately reveals the antenna's RCS scattering properties.

  7. New reference values for thyroid volume by ultrasound in iodine-sufficient schoolchildren: a World Health Organization/Nutrition for Health and Development Iodine Deficiency Study Group Report.

    PubMed

    Zimmermann, Michael B; Hess, Sonja Y; Molinari, Luciano; De Benoist, Bruno; Delange, François; Braverman, Lewis E; Fujieda, Kenji; Ito, Yoshiya; Jooste, Pieter L; Moosa, Khairya; Pearce, Elizabeth N; Pretell, Eduardo A; Shishiba, Yoshimasa

    2004-02-01

    Goiter prevalence in school-age children is an indicator of the severity of iodine deficiency disorders (IDDs) in a population. In areas of mild-to-moderate IDDs, measurement of thyroid volume (Tvol) by ultrasound is preferable to palpation for grading goiter, but interpretation requires reference criteria from iodine-sufficient children. The study aim was to establish international reference values for Tvol by ultrasound in 6-12-y-old children that could be used to define goiter in the context of IDD monitoring. Tvol was measured by ultrasound in 6-12-y-old children living in areas of long-term iodine sufficiency in North and South America, central Europe, the eastern Mediterranean, Africa, and the western Pacific. Measurements were made by 2 experienced examiners using validated techniques. Data were log transformed, used to calculate percentiles on the basis of the Gaussian distribution, and then transformed back to the linear scale. Age- and body surface area (BSA)-specific 97th percentiles for Tvol were calculated for boys and girls. The sample included 3529 children evenly divided between boys and girls at each year (mean +/- SD age: 9.3 +/- 1.9 y). The range of median urinary iodine concentrations for the 6 study sites was 118-288 µg/L. There were significant differences in age- and BSA-adjusted mean Tvols between sites, which suggests that population-specific references in countries with long-standing iodine sufficiency may be more accurate than a single international reference. However, overall differences in age- and BSA-adjusted Tvols between sites were modest relative to the population and measurement variability, which supports the use of a single, site-independent set of references. These new international reference values for Tvol by ultrasound can be used for goiter screening in the context of IDD monitoring.
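
    The percentile construction described above (log-transform, Gaussian-based percentile, back-transform) can be sketched as follows; the thyroid-volume values are illustrative placeholders, not data from the study:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical thyroid volumes (mL) for one age group of one sex; the real
    # data would be the ultrasound Tvol measurements from the study sites.
    tvol = np.array([2.1, 2.4, 2.6, 2.8, 3.0, 3.3, 3.7, 4.1, 4.6, 5.2])

    # Log-transform, take the Gaussian-based 97th percentile, back-transform.
    log_tvol = np.log(tvol)
    p97 = np.exp(log_tvol.mean() + stats.norm.ppf(0.97) * log_tvol.std(ddof=1))
    print(f"97th percentile Tvol = {p97:.2f} mL")
    ```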

  8. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  9. VHL deletion impairs mammary alveologenesis but is not sufficient for mammary tumorigenesis.

    PubMed

    Seagroves, Tiffany N; Peacock, Danielle L; Liao, Debbie; Schwab, Luciana P; Krueger, Robin; Handorf, Charles R; Haase, Volker H; Johnson, Randall S

    2010-05-01

    Overexpression of hypoxia inducible factor-1 (HIF-1)alpha, which is common in most solid tumors, correlates with poor prognosis and high metastatic risk in breast cancer patients. Because HIF-1alpha protein stability is tightly controlled by the tumor suppressor von Hippel-Lindau (VHL), deletion of VHL results in constitutive HIF-1alpha expression. To determine whether VHL plays a role in normal mammary gland development, and if HIF-1alpha overexpression is sufficient to initiate breast cancer, Vhl was conditionally deleted in the mammary epithelium using the Cre/loxP system. During first pregnancy, loss of Vhl resulted in decreased mammary epithelial cell proliferation and impaired alveolar differentiation; despite these phenotypes, lactation was sufficient to support pup growth. In contrast, in multiparous dams, Vhl(-/-) mammary glands exhibited a progressive loss of alveolar epithelium, culminating in lactation failure. Deletion of Vhl in the epithelium also impacted the mammary stroma, as there was increased microvessel density accompanied by hemorrhage and increased immune cell infiltration. However, deletion of Vhl was not sufficient to induce mammary tumorigenesis in dams bred continuously for up to 24 months of age. Moreover, co-deletion of Hif1a could not rescue the Vhl(-/-)-dependent phenotype as dams were unable to successfully lactate during the first lactation. These results suggest that additional VHL-regulated genes besides HIF1A function to maintain the proliferative and regenerative potential of the breast epithelium.

  10. Investigating the accuracy of microstereotactic-body-radiotherapy utilizing anatomically accurate 3D printed rodent-morphic dosimeters.

    PubMed

    Bache, Steven T; Juang, Titania; Belley, Matthew D; Koontz, Bridget F; Adamovics, John; Yoshizumi, Terry T; Kirsch, David G; Oldham, Mark

    2015-02-01

    Sophisticated small animal irradiators, incorporating cone-beam-CT image-guidance, have recently been developed which enable exploration of the efficacy of advanced radiation treatments in the preclinical setting. Microstereotactic-body-radiation-therapy (microSBRT) is one technique of interest, utilizing field sizes in the range of 1-15 mm. Verification of the accuracy of microSBRT treatment delivery is challenging due to the lack of available methods to comprehensively measure dose distributions in representative phantoms with sufficiently high spatial resolution and in 3 dimensions (3D). This work introduces a potential solution in the form of anatomically accurate rodent-morphic 3D dosimeters compatible with ultrahigh resolution (0.3 mm(3)) optical computed tomography (optical-CT) dose read-out. Rodent-morphic dosimeters were produced by 3D-printing molds of rodent anatomy directly from contours defined on x-ray CT data sets of rats and mice, and using these molds to create tissue-equivalent radiochromic 3D dosimeters from Presage. Anatomically accurate spines were incorporated into some dosimeters, by first 3D printing the spine mold, then forming a high-Z bone equivalent spine insert. This spine insert was then set inside the tissue equivalent body mold. The high-Z spinal insert enabled representative cone-beam CT IGRT targeting. On irradiation, a linear radiochromic change in optical-density occurs in the dosimeter, which is proportional to absorbed dose, and was read out using optical-CT in high-resolution (0.5 mm isotropic voxels). Optical-CT data were converted to absolute dose in two ways: (i) using a calibration curve derived from other Presage dosimeters from the same batch, and (ii) by independent measurement of calibrated dose at a point using a novel detector comprised of a yttrium oxide based nanocrystalline scintillator, with a submillimeter active length. A microSBRT spinal treatment was delivered consisting of a 180° continuous arc at 225 k

  11. Investigating the accuracy of microstereotactic-body-radiotherapy utilizing anatomically accurate 3D printed rodent-morphic dosimeters

    PubMed Central

    Bache, Steven T.; Juang, Titania; Belley, Matthew D.; Koontz, Bridget F.; Adamovics, John; Yoshizumi, Terry T.; Kirsch, David G.; Oldham, Mark

    2015-01-01

    Purpose: Sophisticated small animal irradiators, incorporating cone-beam-CT image-guidance, have recently been developed which enable exploration of the efficacy of advanced radiation treatments in the preclinical setting. Microstereotactic-body-radiation-therapy (microSBRT) is one technique of interest, utilizing field sizes in the range of 1–15 mm. Verification of the accuracy of microSBRT treatment delivery is challenging due to the lack of available methods to comprehensively measure dose distributions in representative phantoms with sufficiently high spatial resolution and in 3 dimensions (3D). This work introduces a potential solution in the form of anatomically accurate rodent-morphic 3D dosimeters compatible with ultrahigh resolution (0.3 mm3) optical computed tomography (optical-CT) dose read-out. Methods: Rodent-morphic dosimeters were produced by 3D-printing molds of rodent anatomy directly from contours defined on x-ray CT data sets of rats and mice, and using these molds to create tissue-equivalent radiochromic 3D dosimeters from Presage. Anatomically accurate spines were incorporated into some dosimeters, by first 3D printing the spine mold, then forming a high-Z bone equivalent spine insert. This spine insert was then set inside the tissue equivalent body mold. The high-Z spinal insert enabled representative cone-beam CT IGRT targeting. On irradiation, a linear radiochromic change in optical-density occurs in the dosimeter, which is proportional to absorbed dose, and was read out using optical-CT in high-resolution (0.5 mm isotropic voxels). Optical-CT data were converted to absolute dose in two ways: (i) using a calibration curve derived from other Presage dosimeters from the same batch, and (ii) by independent measurement of calibrated dose at a point using a novel detector comprised of a yttrium oxide based nanocrystalline scintillator, with a submillimeter active length. A microSBRT spinal treatment was delivered consisting of a 180

  12. Investigating the accuracy of microstereotactic-body-radiotherapy utilizing anatomically accurate 3D printed rodent-morphic dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, Steven T.; Juang, Titania; Belley, Matthew D.

    Purpose: Sophisticated small animal irradiators, incorporating cone-beam-CT image-guidance, have recently been developed which enable exploration of the efficacy of advanced radiation treatments in the preclinical setting. Microstereotactic-body-radiation-therapy (microSBRT) is one technique of interest, utilizing field sizes in the range of 1–15 mm. Verification of the accuracy of microSBRT treatment delivery is challenging due to the lack of available methods to comprehensively measure dose distributions in representative phantoms with sufficiently high spatial resolution and in 3 dimensions (3D). This work introduces a potential solution in the form of anatomically accurate rodent-morphic 3D dosimeters compatible with ultrahigh resolution (0.3 mm3) optical computed tomography (optical-CT) dose read-out. Methods: Rodent-morphic dosimeters were produced by 3D-printing molds of rodent anatomy directly from contours defined on x-ray CT data sets of rats and mice, and using these molds to create tissue-equivalent radiochromic 3D dosimeters from Presage. Anatomically accurate spines were incorporated into some dosimeters, by first 3D printing the spine mold, then forming a high-Z bone equivalent spine insert. This spine insert was then set inside the tissue equivalent body mold. The high-Z spinal insert enabled representative cone-beam CT IGRT targeting. On irradiation, a linear radiochromic change in optical-density occurs in the dosimeter, which is proportional to absorbed dose, and was read out using optical-CT in high-resolution (0.5 mm isotropic voxels). Optical-CT data were converted to absolute dose in two ways: (i) using a calibration curve derived from other Presage dosimeters from the same batch, and (ii) by independent measurement of calibrated dose at a point using a novel detector comprised of a yttrium oxide based nanocrystalline scintillator, with a submillimeter active length. A microSBRT spinal treatment was delivered consisting of a

  13. Conversion of calibration curves for accurate estimation of molecular weight averages and distributions of polyether polyols by conventional size exclusion chromatography.

    PubMed

    Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M

    2018-08-17

    Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS-apparent molecular weights, which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in THF-based (tetrahydrofuran) SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that the separation conditions such as column batch and operating temperature did not have significant impact on the conversion coefficients and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
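
    The conversion idea can be sketched as follows, assuming both chemistries follow linear log-molecular-weight calibrations on the same column set (the calibration points and the sample value below are placeholders, not the paper's data): eliminating retention volume between the two calibrations yields a linear relation between log10 of the PS-apparent and polyol-apparent molecular weights.

    ```python
    import numpy as np

    # Hypothetical calibration data: retention volume (mL) vs molecular weight
    # for narrow PS standards and narrow polyol standards on the same THF system.
    v_ps, m_ps = np.array([18.0, 19.5, 21.0, 22.5]), np.array([20000.0, 8000.0, 3000.0, 1200.0])
    v_po, m_po = np.array([18.5, 20.0, 21.5, 23.0]), np.array([12000.0, 5000.0, 2000.0, 800.0])

    # Linear calibrations log10(M) = a + b * V for each chemistry.
    b1, a1 = np.polyfit(v_ps, np.log10(m_ps), 1)
    b2, a2 = np.polyfit(v_po, np.log10(m_po), 1)

    # Eliminate retention volume to convert PS-apparent to polyol-apparent
    # molecular weight: log10(M_polyol) = A + B * log10(M_PS).
    B = b2 / b1
    A = a2 - B * a1
    m_ps_apparent = 5000.0                          # a PS-apparent slice value
    m_po_apparent = 10 ** (A + B * np.log10(m_ps_apparent))
    print(f"log10(M_polyol) = {A:.3f} + {B:.3f} * log10(M_PS) -> {m_po_apparent:.0f}")
    ```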

  14. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common model is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  15. Annual Report to the Nation on the Status of Cancer, 1975–2008, Featuring Cancers Associated With Excess Weight and Lack of Sufficient Physical Activity

    PubMed Central

    Eheman, Christie; Henley, S. Jane; Ballard-Barbash, Rachel; Jacobs, Eric J.; Schymura, Maria J.; Noone, Anne-Michelle; Pan, Liping; Anderson, Robert N.; Fulton, Janet E.; Kohler, Betsy A.; Jemal, Ahmedin; Ward, Elizabeth; Plescia, Marcus; Ries, Lynn A. G.; Edwards, Brenda K.

    2015-01-01

    BACKGROUND Annual updates on cancer occurrence and trends in the United States are provided through collaboration between the American Cancer Society (ACS), the Centers for Disease Control and Prevention (CDC), the National Cancer Institute (NCI), and the North American Association of Central Cancer Registries (NAACCR). This year’s report highlights the increased cancer risk associated with excess weight (overweight or obesity) and lack of sufficient physical activity (<150 minutes of physical activity per week). METHODS Data on cancer incidence were obtained from the CDC, NCI, and NAACCR; data on cancer deaths were obtained from the CDC’s National Center for Health Statistics. Annual percent changes in incidence and death rates (age-standardized to the 2000 US population) for all cancers combined and for the leading cancers among men and among women were estimated by joinpoint analysis of long-term trends (incidence for 1992–2008 and mortality for 1975–2008) and short-term trends (1999–2008). Information was obtained from national surveys about the proportion of US children, adolescents, and adults who are overweight, obese, insufficiently physically active, or physically inactive. RESULTS Death rates from all cancers combined decreased from 1999 to 2008, continuing a decline that began in the early 1990s, among men and among women in most racial and ethnic groups. Death rates decreased from 1999 to 2008 for most cancer sites, including the 4 most common cancers (lung, colorectum, breast, and prostate). The incidence of prostate and colorectal cancers also decreased from 1999 to 2008. Lung cancer incidence declined from 1999 to 2008 among men and from 2004 to 2008 among women. Breast cancer incidence decreased from 1999 to 2004 but was stable from 2004 to 2008. Incidence increased for several cancers, including pancreas, kidney, and adenocarcinoma of the esophagus, which are associated with excess weight. CONCLUSIONS Although improvements are reported in

  16. Sonic hedgehog in the notochord is sufficient for patterning of the intervertebral discs

    PubMed Central

    Choi, Kyung-Suk; Lee, Chanmi; Harfe, Brian D.

    2012-01-01

    The intervertebral discs, located between adjacent vertebrae, are required for stability of the spine and for distributing mechanical load throughout the vertebral column. All cell types located in the middle regions of the discs, called the nuclei pulposi, are derived from the embryonic notochord. Recently, it was shown that the hedgehog signaling pathway plays an essential role during formation of nuclei pulposi. However, during the time that nuclei pulposi are forming, Shh is expressed in both the notochord and the nearby floor plate. To determine the source of SHH protein sufficient for formation of nuclei pulposi, we removed Shh from either the floor plate or the notochord using tamoxifen-inducible Cre alleles. Removal of Shh from the floor plate resulted in phenotypically normal intervertebral discs, indicating that Shh expression in this tissue is not required for disc patterning. In addition, embryos that lacked Shh in the floor plate had normal vertebral columns, demonstrating that Shh expression in the notochord is sufficient for patterning the entire vertebral column. Removal of Shh from the notochord resulted in the absence of Shh in the floor plate and the loss of intervertebral discs and vertebral structures. These data indicate that Shh expression in the notochord is sufficient for patterning of the intervertebral discs and the vertebral column. PMID:22841806

  17. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.; Milligan, Michael; Brinkman, Greg

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  18. A generalized operational formula based on total electronic densities to obtain 3D pictures of the dual descriptor to reveal nucleophilic and electrophilic sites accurately on closed-shell molecules.

    PubMed

    Martínez-Araya, Jorge I

    2016-09-30

    Within conceptual density functional theory, the so-called dual descriptor (DD) has been adapted for use in any closed-shell molecule that presents degeneracy in its frontier molecular orbitals. This is of paramount importance because a correct description of local reactivity allows the most favorable sites on a molecule for nucleophilic or electrophilic attack to be predicted; conversely, an incomplete description of local reactivity can have serious consequences, particularly for experimental chemists who need insight into the reactivity of chemical reagents before using them in synthesis to obtain a new compound. In the present work, the old approach based only on the electronic densities of the frontier molecular orbitals is replaced by a more accurate procedure that uses total electronic densities, thus keeping consistency with the essential principle of DFT in which the electronic density, and not the molecular orbitals, is the fundamental variable. As a result, the DD can properly describe local reactivity solely in terms of total electronic densities. To test the proposed operational formula, 12 very common molecules were selected for which the original definition of the DD was not able to describe local reactivity properly. The ethylene molecule was additionally used to test the capability of the proposed operational formula to reveal correct local reactivity even in the absence of degeneracy in the frontier molecular orbitals. © 2016 Wiley Periodicals, Inc.
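
    For reference, the commonly used finite-difference working expression of the dual descriptor in terms of the total electronic densities of the N-1, N, and N+1 electron systems is

    ```latex
    \Delta f(\mathbf{r}) \;\approx\; \rho_{N+1}(\mathbf{r}) - 2\,\rho_{N}(\mathbf{r}) + \rho_{N-1}(\mathbf{r})
    \;=\; f^{+}(\mathbf{r}) - f^{-}(\mathbf{r}),
    ```

    where regions with Δf(r) > 0 are prone to nucleophilic attack and regions with Δf(r) < 0 to electrophilic attack. This is the standard non-degenerate form; the generalized operational formula proposed in the paper extends it to handle frontier-orbital degeneracy and is not reproduced here.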

  19. Measurement of shot noise in magnetic tunnel junction and its utilization for accurate system calibration

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Kubota, H.; Yakushiji, K.; Fukushima, A.; Yuasa, S.

    2017-11-01

    This work presents a technique to calibrate the spin torque oscillator (STO) measurement system by utilizing the whiteness of shot noise. The raw shot noise spectrum in a magnetic tunnel junction based STO in the microwave frequency range is obtained by first subtracting the baseline noise, and then excluding the field dependent mag-noise components reflecting the thermally excited spin wave resonances. As the shot noise is guaranteed to be completely white, the total gain of the signal path should be proportional to the shot noise spectrum obtained by the above procedure, which allows for an accurate gain calibration of the system and a quantitative determination of each noise power. The power spectral density of the shot noise as a function of bias voltage obtained by this technique was compared with a theoretical calculation, which showed excellent agreement when the Fano factor was assumed to be 0.99.
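
    A minimal sketch of the calibration step, assuming the textbook thermal-to-shot crossover expression S_I = 2 e I F coth(eV / 2 k_B T) for the junction's current-noise power spectral density (the bias point, temperature, and simulated gain curve below are placeholders, and the paper's exact noise model may differ):

    ```python
    import numpy as np

    e, kB = 1.602176634e-19, 1.380649e-23  # elementary charge (C), Boltzmann constant (J/K)

    def shot_noise_psd(i_bias, v_bias, temperature, fano=1.0):
        """Current-noise PSD (A^2/Hz) from the textbook thermal-to-shot crossover
        expression S_I = 2 e I F coth(eV / 2 kB T); an approximation, not
        necessarily the exact expression used in the paper."""
        return 2.0 * e * i_bias * fano / np.tanh(e * v_bias / (2.0 * kB * temperature))

    # Hypothetical bias point and "measured" spectrum (baseline and mag-noise
    # already removed); the sine factor stands in for the unknown path gain.
    freq = np.linspace(1e9, 10e9, 512)                     # Hz
    gain_true = 1e6 * (1.0 + 0.2 * np.sin(freq / 2e9))
    rng = np.random.default_rng(2)
    s_measured = gain_true * shot_noise_psd(1e-4, 0.1, 10.0, fano=0.99)
    s_measured *= 1.0 + 0.01 * rng.standard_normal(freq.size)

    # Because shot noise is white, dividing the cleaned spectrum by the
    # theoretical PSD gives the total gain of the signal path at every frequency.
    gain_estimate = s_measured / shot_noise_psd(1e-4, 0.1, 10.0, fano=0.99)
    print(gain_estimate[:3])
    ```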

  20. Correlation, necessity, and sufficiency: Common errors in the scientific reasoning of undergraduate students for interpreting experiments.

    PubMed

    Coleman, Aaron B; Lam, Diane P; Soowal, Lara N

    2015-01-01

    Gaining an understanding of how science works is central to an undergraduate education in biology and biochemistry. The reasoning required to design or interpret experiments that ask specific questions does not come naturally, and is an essential part of the science process skills that must be learned for an understanding of how scientists conduct research. Gaps in these reasoning skills make it difficult for students to become proficient in reading primary scientific literature. In this study, we assessed the ability of students in an upper-division biochemistry laboratory class to use the concepts of correlation, necessity, and sufficiency in interpreting experiments presented in a format and context that is similar to what they would encounter when reading a journal article. The students were assessed before and after completion of a laboratory module where necessary vs. sufficient reasoning was used to design and interpret experiments. The assessment identified two types of errors that were commonly committed by students when interpreting experimental data. When presented with an experiment that only establishes a correlation between a potential intermediate and a known effect, students frequently interpreted the intermediate as being sufficient (causative) for the effect. Also, when presented with an experiment that tests only necessity for an intermediate, they frequently made unsupported conclusions about sufficiency, and vice versa. Completion of the laboratory module and instruction in necessary vs. sufficient reasoning showed some promise for addressing these common errors. © 2015 The International Union of Biochemistry and Molecular Biology.

  1. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.

  2. 3D surface voxel tracing corrector for accurate bone segmentation.

    PubMed

    Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi

    2018-06-18

    For extremely close bones, the boundaries are weak and diffuse due to strong interaction between adjacent surfaces. These factors prevent accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we apply a surface tracing corrector combined with a Gaussian of standard deviation σ to improve the estimation of the normal direction. Secondly, we determine an optimal value of σ for each surface point during this normal-direction correction. Thirdly, we construct a 1D signal along the corrected normal direction and refine the rough boundary. The value of σ is used in the first directional derivative of the Gaussian to refine the location of the edge point along the corrected normal direction. Because the normal direction is corrected and the value of σ is optimized, our method is robust to noisy images and to the narrow joint spaces caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, a Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods: 40 hip joints were used for training and 10 hip joints were used for testing and comparison. DOCs of [Formula: see text]%, [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head, and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. The results demonstrate that our approach achieved a superior accuracy over two
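
    The edge-refinement step along a corrected normal can be sketched as follows; the profile shape, sampling spacing, and σ value are illustrative assumptions, not parameters from the paper:

    ```python
    import numpy as np

    def refine_edge(profile, spacing, sigma):
        """Locate the boundary along a 1D intensity profile sampled along the
        (corrected) surface normal, as the extremum of the profile convolved
        with the first derivative of a Gaussian of standard deviation sigma."""
        half = int(np.ceil(4 * sigma / spacing))
        t = np.arange(-half, half + 1) * spacing
        dgauss = -t / sigma**3 / np.sqrt(2 * np.pi) * np.exp(-t**2 / (2 * sigma**2))
        response = np.convolve(profile, dgauss, mode="same")
        return np.argmax(np.abs(response)) * spacing   # edge position along the normal

    # Hypothetical CT profile across a bone boundary (placeholder values).
    x = np.arange(0, 10, 0.25)                          # mm along the corrected normal
    profile = 1000.0 / (1 + np.exp(-(x - 5.0) / 0.3))   # soft tissue -> bone step
    print(refine_edge(profile, spacing=0.25, sigma=0.5))
    ```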

  3. Analysis of an Internet Community about Pneumothorax and the Importance of Accurate Information about the Disease.

    PubMed

    Kim, Bong Jun; Lee, Sungsoo

    2018-04-01

    The huge improvements in the speed of data transmission and the increasing amount of data available as the Internet has expanded have made it easy to obtain information about any disease. Since pneumothorax frequently occurs in young adolescents, patients often search the Internet for information on pneumothorax. This study analyzed an Internet community for exchanging information on pneumothorax, with an emphasis on the importance of accurate information and doctors' role in providing such information. This study assessed 599,178 visitors to the Internet community from June 2008 to April 2017. There was an average of 190 visitors, 2.2 posts, and 4.5 replies per day. A total of 6,513 posts were made, and 63.3% of them included questions about the disease. The visitors mostly searched for terms such as 'pneumothorax,' 'recurrent pneumothorax,' 'pneumothorax operation,' and 'obtaining a medical certification of having been diagnosed with pneumothorax.' However, 22% of the pneumothorax-related posts by visitors contained inaccurate information. Internet communities can be an important source of information. However, incorrect information about a disease can be harmful for patients. We, as doctors, should try to provide more in-depth information about diseases to patients and to disseminate accurate information about diseases in Internet communities.

  4. The need for accurate long-term measurements of water vapor in the upper troposphere and lower stratosphere with global coverage.

    PubMed

    Müller, Rolf; Kunz, Anne; Hurst, Dale F; Rolf, Christian; Krämer, Martina; Riese, Martin

    2016-02-01

    Water vapor is the most important greenhouse gas in the atmosphere although changes in carbon dioxide constitute the "control knob" for surface temperatures. While the latter fact is well recognized, resulting in extensive space-borne and ground-based measurement programs for carbon dioxide as detailed in the studies by Keeling et al. (1996), Kuze et al. (2009), and Liu et al. (2014), the need for an accurate characterization of the long-term changes in upper tropospheric and lower stratospheric (UTLS) water vapor has not yet resulted in sufficiently extensive long-term international measurement programs (although first steps have been taken). Here, we argue for the implementation of a long-term balloon-borne measurement program for UTLS water vapor covering the entire globe that will likely have to be sustained for hundreds of years.

  5. Accurate mass and velocity functions of dark matter haloes

    NASA Astrophysics Data System (ADS)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10¹¹ M⊙, with 8 783 874 distinct haloes and 532 533 subhaloes. The total volume used is ~515 Gpc³, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. Within the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias and the halo mass function. We obtain a very accurate (<2 per cent level) model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z < 2.3 to push for the development of halo occupation distribution using Vmax. The data and the analysis code are made publicly available in the Skies and Universes data base.

  6. Further Results on Sufficient LMI Conditions for H∞ Static Output Feedback Control of Discrete-Time Systems

    NASA Astrophysics Data System (ADS)

    Feng, Zhi-Yong; Xu, Li; Matsushita, Shin-Ya; Wu, Min

    Further results on sufficient LMI conditions for H∞ static output feedback (SOF) control of discrete-time systems are presented in this paper, which provide some new insights into this issue. First, by introducing a slack variable with block-triangular structure and choosing the coordinate transformation matrix properly, the conservativeness of one kind of existing sufficient LMI condition is further reduced. Then, by introducing a slack variable with a linear matrix equality constraint, another kind of sufficient LMI condition is proposed. Furthermore, the relation between these two kinds of LMI conditions is revealed for the first time through analyzing the effect of different choices of coordinate transformation matrices. Finally, a numerical example is provided to demonstrate the effectiveness and merits of the proposed methods.

  7. Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi

    2018-05-01

    In the study of quantum nonlocality, one obstacle is that the analytical criterion for identifying the boundaries between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretical quantity; the probability of guessing a measurement outcome of a distant party optimized using any quantum instrument. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for the extremality.

  8. High spatial validity is not sufficient to elicit voluntary shifts of attention.

    PubMed

    Pauszek, Joseph R; Gibson, Bradley S

    2016-10-01

    Previous research suggests that the use of valid symbolic cues is sufficient to elicit voluntary shifts of attention. The present study interpreted this previous research within a broader theoretical context which contends that observers will voluntarily use symbolic cues to orient their attention in space when the temporal costs of using the cues are perceived to be less than the temporal costs of searching without the aid of the cues. In this view, previous research has not addressed the sufficiency of valid symbolic cues, because the temporal cost of using the cues is usually incurred before the target display appears. To address this concern, 70%-valid spatial word cues were presented simultaneously with a search display. In addition, other research suggests that opposing cue-dependent and cue-independent spatial biases may operate in these studies and alter standard measures of orienting. After identifying and controlling these opposing spatial biases, the results of two experiments showed that the word cues did not elicit voluntary shifts of attention when the search task was relatively easy but did when the search task was relatively difficult. Moreover, the findings also showed that voluntary use of the word cues changed over the course of the experiment when the task was difficult, presumably because the temporal cost of searching without the cue lessened as the task got easier with practice. Altogether, the present findings suggested that the factors underlying voluntary control are multifaceted and contextual, and that spatial validity alone is not sufficient to elicit voluntary shifts of attention.

  9. Accurate Mars Express orbits to improve the determination of the mass and ephemeris of the Martian moons

    NASA Astrophysics Data System (ADS)

    Rosenblatt, P.; Lainey, V.; Le Maistre, S.; Marty, J. C.; Dehant, V.; Pätzold, M.; Van Hoolst, T.; Häusler, B.

    2008-05-01

    The determination of the ephemeris of the Martian moons has benefited from observations of their plane-of-sky positions derived from images taken by cameras onboard spacecraft orbiting Mars. Images obtained by the Super Resolution Camera (SRC) onboard Mars Express (MEX) have been used to derive moon positions relative to Mars on the basis of a fit of a complete dynamical model of their motion around Mars. Since these positions are computed from the position of the spacecraft at the time the images are taken, the spacecraft positions need to be known as accurately as possible. An accurate MEX orbit is obtained by fitting two years of tracking data of the Mars Express Radio Science (MaRS) experiment onboard MEX. The average accuracy of the orbits has been estimated to be around 20-25 m. From these orbits, we have re-derived the positions of Phobos and Deimos at the epoch of the SRC observations and compared them with the positions derived by using the MEX orbits provided by the ESOC navigation team. After fitting the orbital model of Phobos and Deimos, the gain in precision in the Phobos position is roughly 30 m, corresponding to the estimated gain in accuracy of the MEX orbits. A new solution of the GM of the Martian moons has also been obtained from the accurate MEX orbits, which is consistent with previous solutions and, for Phobos, is more precise than the solution from the Mars Global Surveyor (MGS) and Mars Odyssey (ODY) tracking data. It will be further improved with data from closer MEX-Phobos encounters (at a distance less than 300 km). This study also demonstrates the advantage of combining observations of the moon positions from a spacecraft and from the Earth to assess the real accuracy of the spacecraft orbit. In turn, the natural satellite ephemerides can be improved and contribute to a better knowledge of the origin and evolution of the Martian moons.

  10. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  11. Accurate integration over atomic regions bounded by zero-flux surfaces.

    PubMed

    Polestshuk, Pavel M

    2013-01-30

    An approach for integration over a region bounded by a zero-flux surface is described. This approach, based on a surface triangulation technique, is efficiently implemented in the newly developed program TWOE. The method is tested on several atomic properties, including the source function. TWOE results are compared with those produced by well-known existing programs. Absolute errors in computed atomic properties are shown to range typically from 10(-6) to 10(-5) au. The demonstrative examples show that the present implementation converges well in the atomic properties with increasing angular-grid size and yields highly accurate data even in the most difficult cases. The developed program is expected to serve as a basis for implementing atomic partitioning of any desired molecular property with high accuracy. Copyright © 2012 Wiley Periodicals, Inc.

  12. Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow

    NASA Astrophysics Data System (ADS)

    Lambert, Gregory; Wapperom, Peter; Baird, Donald

    2017-12-01

    Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit to orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to try to fit to experimental closure force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration for fiber-fiber interactions.

  13. The use of multiple imputation for the accurate measurements of individual feed intake by electronic feeders.

    PubMed

    Jiao, S; Tiezzi, F; Huang, Y; Gray, K A; Maltecca, C

    2016-02-01

    Obtaining accurate individual feed intake records is the key first step in achieving genetic progress toward more efficient nutrient utilization in pigs. Feed intake records collected by electronic feeding systems contain errors (erroneous and abnormal values exceeding certain cutoff criteria), which are due to feeder malfunction or animal-feeder interaction. In this study, we examined the use of a novel data-editing strategy involving multiple imputation to minimize the impact of errors and missing values on the quality of feed intake data collected by an electronic feeding system. Accuracy of feed intake data adjustment obtained from the conventional linear mixed model (LMM) approach was compared with 2 alternative implementations of multiple imputation by chained equation, denoted as MI (multiple imputation) and MICE (multiple imputation by chained equation). The 3 methods were compared under 3 scenarios, where 5, 10, and 20% feed intake error rates were simulated. Each of the scenarios was replicated 5 times. Accuracy of the alternative error adjustment was measured as the correlation between the true daily feed intake (DFI; daily feed intake in the testing period) or true ADFI (the mean DFI across testing period) and the adjusted DFI or adjusted ADFI. In the editing process, error cutoff criteria are used to define if a feed intake visit contains errors. To investigate the possibility that the error cutoff criteria may affect any of the 3 methods, the simulation was repeated with 2 alternative error cutoff values. Multiple imputation methods outperformed the LMM approach in all scenarios with mean accuracies of 96.7, 93.5, and 90.2% obtained with MI and 96.8, 94.4, and 90.1% obtained with MICE compared with 91.0, 82.6, and 68.7% using LMM for DFI. Similar results were obtained for ADFI. Furthermore, multiple imputation methods consistently performed better than LMM regardless of the cutoff criteria applied to define errors. In conclusion, multiple imputation
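
    A minimal sketch of the MICE-style adjustment, using scikit-learn's IterativeImputer as a stand-in for the chained-equation imputation (the intake matrix, error rate, and cutoff values below are simulated placeholders, not the study's data or criteria):

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(3)
    dfi = rng.normal(2.2, 0.3, size=(60, 28))        # daily feed intake (kg/d), pigs x days

    # Flag "errors" with a simple cutoff rule (a placeholder for the feeder's
    # own error criteria) and set them to missing before imputation.
    dfi_obs = dfi.copy()
    dfi_obs[rng.random(dfi.shape) < 0.10] = 9.9      # simulated erroneous records
    dfi_obs[(dfi_obs < 0.5) | (dfi_obs > 5.0)] = np.nan

    # MICE-style multiple imputation: several chained-equation imputations with
    # different seeds, averaged, then ADFI computed per pig.
    imputations = [
        IterativeImputer(sample_posterior=True, random_state=s).fit_transform(dfi_obs)
        for s in range(5)
    ]
    adfi = np.mean(imputations, axis=0).mean(axis=1)
    print(np.round(adfi[:5], 3))
    ```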

  14. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.
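
    One quantity covered by this terminology is the relative mass-measurement error, usually quoted in parts per million, ppm = (m_measured - m_theoretical) / m_theoretical x 10^6. A minimal sketch with made-up replicate measurements:

        # Illustrative only: relative mass-measurement error in parts per million (ppm)
        # and a simple mean/standard-deviation summary over replicate measurements.
        import statistics

        theoretical_mz = 256.26349          # hypothetical exact m/z
        measured_mz = [256.26341, 256.26355, 256.26338, 256.26360]

        errors_ppm = [(m - theoretical_mz) / theoretical_mz * 1e6 for m in measured_mz]
        print("errors (ppm):", [round(e, 2) for e in errors_ppm])
        print("mean error (ppm):", round(statistics.mean(errors_ppm), 2))
        print("sample std dev (ppm):", round(statistics.stdev(errors_ppm), 2))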

  15. Motion-based prediction is sufficient to solve the aperture problem

    PubMed Central

    Perrinet, Laurent U; Masson, Guillaume S

    2012-01-01

    In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as the luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away while coherent information diffuses progressively to the global scale. Most previous models included ad-hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have proved that motion-based predictive coding, as it is implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations. PMID:22734489

  16. An infrastructure for accurate characterization of single-event transients in digital circuits.

    PubMed

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-11-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.

  17. An infrastructure for accurate characterization of single-event transients in digital circuits☆

    PubMed Central

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-01-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694

  18. Sonic hedgehog in the notochord is sufficient for patterning of the intervertebral discs.

    PubMed

    Choi, Kyung-Suk; Lee, Chanmi; Harfe, Brian D

    2012-01-01

    The intervertebral discs, located between adjacent vertebrae, are required for stability of the spine and for distributing mechanical load throughout the vertebral column. All cell types located in the middle regions of the discs, called nuclei pulposi, are derived from the embryonic notochord. Recently, it was shown that the hedgehog signaling pathway plays an essential role during formation of nuclei pulposi. However, during the time that nuclei pulposi are forming, Shh is expressed in both the notochord and the nearby floor plate. To determine the source of SHH protein sufficient for formation of nuclei pulposi, we removed Shh from either the floor plate or the notochord using tamoxifen-inducible Cre alleles. Removal of Shh from the floor plate resulted in phenotypically normal intervertebral discs, indicating that Shh expression in this tissue is not required for disc patterning. In addition, embryos that lacked Shh in the floor plate had normal vertebral columns, demonstrating that Shh expression in the notochord is sufficient for patterning the entire vertebral column. Removal of Shh from the notochord resulted in the absence of Shh in the floor plate and loss of intervertebral discs and vertebral structures. These data indicate that Shh expression in the notochord is sufficient for patterning of the intervertebral discs and the vertebral column. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  19. Sleep budgets in a globalizing world: biocultural interactions influence sleep sufficiency among Egyptian families

    PubMed Central

    Worthman, Carol M.; Brown, Ryan A.

    2013-01-01

    Declines in self-reported sleep quotas with globalizing lifestyle changes have focused attention on their possible role in rising global health problems such as obesity or depression. Cultural factors that act across the life course and support sleep sufficiency have received scant attention, nor have the potential interactions of cultural and biological factors in age-related changes in sleep behavior been systematically investigated. This study examines the effects of cultural norms for napping and sleeping arrangements along with sleep schedules, age, and gender on sleep budgets among Egyptian households. Data were collected in 2000 from 16 households with 78 members aged 3–56 years at two sites in Egypt (Cairo and an agrarian village). Each participant provided one week of continuous activity records and details of each sleep event. Records showed that nighttime sleep onsets were late and highly variable. Napping was common and, along with wake time flexibility, played a key role in maintaining sleep sufficiency throughout the life course into later middle age. Cosleeping was prevalent and exhibited contrasting associations with reduced duration and sufficiency of both nocturnal and total sleep, and with earlier, more regular, and less disrupted sleep. Daily sleep quotas met published guidelines and showed age-related changes similar to existing reports, but differed in how they were achieved. Cultural norms organizing sleep practices by age and gender appear to tap their intrinsic biological properties as well. Moreover, flexibility in how sleep was achieved contributed to sleep sufficiency. The findings suggest how biocultural dynamics can play key roles in sleep patterns that sustain favorable sleep quotas from infancy onwards in populations pursuing globalizing contemporary lifestyles. PMID:22651897

  20. 75 FR 39035 - Housing Choice Voucher (HCV) Family Self-Sufficiency (FSS) Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ...) Family Self-Sufficiency (FSS) Program AGENCY: Office of the Chief Information Officer, HUD. ACTION... Department is soliciting public comments on the subject proposal. The FSS program, which was established in... coordinate the use of public housing assistance and assistance under the Section 8 rental certificate and...

  1. Regge calculus and observations. II. Further applications.

    NASA Astrophysics Data System (ADS)

    Williams, Ruth M.; Ellis, G. F. R.

    1984-11-01

    The method developed in an earlier paper for tracing geodesics of particles and light rays through Regge calculus space-times is applied to a number of problems in the Schwarzschild geometry. It is possible to obtain accurate predictions of light bending by taking sufficiently small Regge blocks. Calculations of perihelion precession, Thomas precession, and the distortion of a ball of fluid moving on a geodesic can also show good agreement with the analytic solution. However, difficulties arise in obtaining accurate predictions for general orbits in these space-times. Applications to other problems in general relativity are discussed briefly.

  2. [Effect of vitamin beverages on vitamin sufficiency of the workers of Pskov Hydroelectric Power-Plant].

    PubMed

    Spiricheva, T V; Vrezhesinskaia, O A; Beketova, N A; Pereverzeva, O G; Kosheleva, O V; Kharitonchik, L A; Kodentsova, V M; Iudina, A V; Spirichev, V B

    2010-01-01

    The influence of vitamin complexes, taken in the form of a drink or kissel, on the vitamin sufficiency of working persons was studied. Long-term inclusion (6.5 months) of vitamin drinks containing about 80% of the recommended daily vitamin intake in the diet was accompanied by a significant improvement in vitamin C and B6 sufficiency and by prevention of the seasonal deterioration of beta-carotene status. As the subjects were initially well provided with vitamins A and E, no increase in their blood serum levels occurred.

  3. Necessary and sufficient conditions for discrete wavelet frames in CN

    NASA Astrophysics Data System (ADS)

    Deepshikha; Vashisht, Lalit K.

    2017-07-01

    We present necessary and sufficient conditions, with explicit frame bounds, for a discrete wavelet system of the form {D_a T_k φ : a ∈ U(N), k ∈ I_N} to be a frame for the unitary space C^N. It is shown that the canonical dual of a discrete wavelet frame for C^N has the same structure. It is well known that this is not true for the canonical dual of a wavelet frame for L^2(R). Several numerical examples are given to illustrate the results.
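
    For a finite family in C^N, the optimal frame bounds are the extreme eigenvalues of the frame operator S = Σ f_j f_j*. The sketch below checks this numerically for one possible discretization of the dilation and translation operators (index dilation by units mod N and circular shifts); the operators and generating vector are illustrative assumptions, not necessarily those used in the paper.

        # Minimal numerical check of the frame property in C^N: for a finite family
        # {f_j}, the optimal frame bounds are the smallest and largest eigenvalues of
        # the frame operator S = sum_j f_j f_j^*.  The dilation/translation operators
        # below are one common discretization, used only for illustration.
        import numpy as np

        N = 12
        rng = np.random.default_rng(1)
        phi = rng.normal(size=N) + 1j * rng.normal(size=N)      # generating vector

        def translate(x, k):                                    # T_k: circular shift
            return np.roll(x, k)

        def dilate(x, a):                                       # D_a: n -> (a*n) mod N, gcd(a, N) = 1
            idx = (a * np.arange(N)) % N
            return x[idx]

        units = [a for a in range(1, N) if np.gcd(a, N) == 1]   # U(N)
        family = [dilate(translate(phi, k), a) for a in units for k in range(N)]

        S = sum(np.outer(f, f.conj()) for f in family)          # frame operator
        eigvals = np.linalg.eigvalsh(S)
        A, B = eigvals.min(), eigvals.max()
        print(f"frame bounds: A = {A:.3f}, B = {B:.3f}, frame = {A > 1e-10}")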

  4. Yes, one can obtain better quality structures from routine X-ray data collection.

    PubMed

    Sanjuan-Szklarz, W Fabiola; Hoser, Anna A; Gutmann, Matthias; Madsen, Anders Østergaard; Woźniak, Krzysztof

    2016-01-01

    Single-crystal X-ray diffraction structural results for benzidine dihydrochloride, hydrated and protonated N,N,N,N-peri(dimethylamino)naphthalene chloride, triptycene, dichlorodimethyltriptycene and decamethylferrocene have been analysed. A critical discussion of the dependence of structural and thermal parameters on resolution for these compounds is presented. Results of refinements against X-ray data, cut off to different resolutions from the high-resolution data files, are compared to structural models derived from neutron diffraction experiments. The Independent Atom Model (IAM) and the Transferable Aspherical Atom Model (TAAM) are tested. The average differences between the X-ray and neutron structural parameters (with the exception of valence angles defined by H atoms) decrease with the increasing 2θmax angle. The scale of differences between X-ray and neutron geometrical parameters can be significantly reduced when data are collected to the higher, than commonly used, 2θmax diffraction angles (for Mo Kα 2θmax > 65°). The final structural and thermal parameters obtained for the studied compounds using TAAM refinement are in better agreement with the neutron values than the IAM results for all resolutions and all compounds. By using TAAM, it is still possible to obtain accurate results even from low-resolution X-ray data. This is particularly important as TAAM is easy to apply and can routinely be used to improve the quality of structural investigations [Dominiak (2015 ▸). LSDB from UBDB. University of Buffalo, USA]. We can recommend that, in order to obtain more adequate (more accurate and precise) structural and displacement parameters during the IAM model refinement, data should be collected up to the larger diffraction angles, at least, for Mo Kα radiation to 2θmax = 65° (sin θmax/λ < 0.75 Å(-1)). The TAAM approach is a very good option to obtain more adequate results even using data collected to the lower 2θmax angles. Also

  5. PKMζ is necessary and sufficient for synaptic clustering of PSD-95.

    PubMed

    Shao, Charles Y; Sondhi, Rachna; van de Nes, Paula S; Sacktor, Todd Charlton

    2012-07-01

    The persistent activity of protein kinase Mzeta (PKMζ), a brain-specific, constitutively active protein kinase C isoform, maintains synaptic long-term potentiation (LTP). Structural remodeling of the postsynaptic density is believed to contribute to the expression of LTP. We therefore examined the role of PKMζ in reconfiguring PSD-95, the major postsynaptic scaffolding protein at excitatory synapses. In primary cultures of hippocampal neurons, PKMζ activity was critical for increasing the size of PSD-95 clusters during chemical LTP (cLTP). Increasing PKMζ activity by overexpressing the kinase in hippocampal neurons was sufficient to increase PSD-95 cluster size, spine size, and postsynaptic AMPAR subunit GluA2. Overexpression of an inactive mutant of PKMζ did not increase PSD-95 clustering, and applications of the ζ-pseudosubstrate inhibitor ZIP reversed the PKMζ-mediated increases in PSD-95 clustering, indicating that the activity of PKMζ is necessary to induce and maintain the increased size of PSD-95 clusters. Thus the persistent activity of PKMζ is both necessary and sufficient for maintaining increases of PSD-95 clusters, providing a unified mechanism for long-term functional and structural modifications of synapses. Copyright © 2011 Wiley Periodicals, Inc.

  6. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  7. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  8. Interferometric Constraints on Surface Brightness Asymmetries in Long-Period Variable Stars: A Threat to Accurate Gaia Parallaxes

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Jorissen, A.; Cruzalèbes, P.; Pasquato, E.; Chiavassa, A.; Spang, A.; Rabbia, Y.; Chesneau, O.

    2011-09-01

    A monitoring of surface brightness asymmetries in evolved giants and supergiants is necessary to estimate the threat that they represent to accurate Gaia parallaxes. Closure-phase measurements obtained with AMBER/VISA in a 3-telescope configuration are fitted by a simple model to constrain the photocenter displacement. The results for the C-type star TX Psc show a large deviation of the photocenter displacement that could bias the Gaia parallax.

  9. Accurate structural and spectroscopic characterization of prebiotic molecules: The neutral and cationic acetyl cyanide and their related species.

    PubMed

    Bellili, A; Linguerri, R; Hochlaf, M; Puzzarini, C

    2015-11-14

    In an effort to provide an accurate structural and spectroscopic characterization of acetyl cyanide, its two enolic isomers and the corresponding cationic species, state-of-the-art computational methods and approaches have been employed. The coupled-cluster theory including single and double excitations together with a perturbative treatment of triples has been used as the starting point in composite schemes accounting for extrapolation to the complete basis-set limit as well as core-valence correlation effects to determine highly accurate molecular structures, fundamental vibrational frequencies, and rotational parameters. The available experimental data for acetyl cyanide allowed us to assess the reliability of our computations: structural, energetic, and spectroscopic properties have been obtained with an overall accuracy of about, or better than, 0.001 Å, 2 kcal/mol, 1-10 MHz, and 11 cm(-1) for bond distances, adiabatic ionization potentials, rotational constants, and fundamental vibrational frequencies, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for guiding future experimental investigations and/or astronomical observations.

  10. Geometric derivations of minimal sets of sufficient multiview constraints

    USGS Publications Warehouse

    Thomas, Orrin H.; Oshel, Edward R.

    2012-01-01

    Geometric interpretations of four of the most common determinant formulations of multiview constraints are given, showing that they all enforce the same geometry and that all of the forms commonly in use in the machine vision community are a subset of a more general form. Generalising the work of Yi Ma yields a new general 2 x 2 determinant trilinear and 3 x 3 determinant quadlinear. Geometric descriptions of degenerate multiview constraints are given, showing that it is necessary, but insufficient, that the determinant equals zero. Understanding the degeneracies leads naturally into proofs for minimum sufficient sets of bilinear, trilinear and quadlinear constraints for arbitrary numbers of conjugate observations.

  11. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with an arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000 thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.

  12. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational costs. A detailed description is given of the enrichment and coarsening procedures and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  13. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  14. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
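
    The core idea, a pre-saved matrix that maps cheap coarse-stream radiances to high-accuracy spectra, can be sketched with a least-squares fit on synthetic data; the spectra, channel count, and the "coarse" and "accurate" stand-ins below are invented and do not come from PCRTM-SOLAR.

        # Schematic of the hybrid-stream idea (synthetic data, not PCRTM-SOLAR itself):
        # learn a fixed matrix that maps cheap coarse-stream spectra to accurate
        # high-stream spectra, then reuse it so only the cheap calculation is needed.
        import numpy as np

        rng = np.random.default_rng(2)
        n_channels, n_train = 200, 300
        state = rng.normal(size=(n_train, 5))                        # atmospheric/cloud parameters

        basis = rng.normal(size=(5, n_channels))
        accurate = state @ basis + 0.3 * np.tanh(state @ basis)      # stand-in for N-stream result
        coarse = state @ basis + rng.normal(0, 0.05, (n_train, n_channels))  # stand-in for 4-stream

        # Pre-saved transformation: least-squares fit from coarse to accurate spectra
        # (with a bias column), computed once offline.
        X = np.hstack([coarse, np.ones((n_train, 1))])
        M, *_ = np.linalg.lstsq(X, accurate, rcond=None)

        new_coarse = coarse[:5]                                      # "new" cheap calculations
        predicted = np.hstack([new_coarse, np.ones((5, 1))]) @ M
        rmse = np.sqrt(np.mean((predicted - accurate[:5]) ** 2))
        print("RMSE of corrected spectra:", rmse)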

  15. Accurate determinations of one-bond 13C-13C couplings in 13C-labeled carbohydrates

    NASA Astrophysics Data System (ADS)

    Azurmendi, Hugo F.; Freedberg, Darón I.

    2013-03-01

    Carbon plays a central role in the molecular architecture of carbohydrates, yet the availability of accurate methods for 1DCC determination has not been sufficiently explored, despite the importance that such data could play in structural studies of oligo- and polysaccharides. Existing methods require fitting intensity ratios of cross- to diagonal-peaks as a function of the constant-time (CT) in CT-COSY experiments, while other methods utilize measurement of peak separation. The former strategies suffer from complications due to peak overlap, primarily in regions close to the diagonal, while the latter strategies are negatively impacted by the common occurrence of strong coupling in sugars, which requires a reliable assessment of their influence in the context of RDC determination. We detail a 13C-13C CT-COSY method that combines a variation in the CT processed with diagonal filtering to yield 1JCC and RDCs. The strategy, which relies solely on cross-peak intensity modulation, is inspired by the cross-peak nulling method used for JHH determinations, but adapted and extended to applications where, as in sugars, large one-bond 13C-13C couplings coexist with relatively small long-range couplings. Because diagonal peaks are not utilized, overlap problems are greatly alleviated. Thus, one-bond couplings can be determined from different cross-peaks as either active or passive coupling. This results in increased accuracy when more than one determination is available, and in more opportunities to measure a specific coupling in the presence of severe overlap. In addition, we evaluate the influence of strong couplings on the determination of RDCs by computer simulations. We show that individual scalar couplings are notably affected by the presence of strong couplings but, at least for the simple cases studied, the obtained RDC values for use in structural calculations were not, because the errors introduced by strong couplings for the isotropic and oriented phases are very

  16. Investigation of Professional Self Sufficiency Levels of Physical Education and Sports Teachers

    ERIC Educational Resources Information Center

    Saracaoglu, Asuman Seda; Ozsaker, Murat; Varol, Rana

    2012-01-01

    The present research aimed at detecting professional self sufficiency levels of physical education and sports teachers who worked in Izmir Province and at investigating them in terms of some variables. For data collection, Teacher's Sense of Efficacy Scale-developed by Moran and Woolfolk-Hoy (2001) and Turkish validity and reliability studies…

  17. The challenge of obtaining information necessary for multi-criteria decision analysis implementation: the case of physiotherapy services in Canada

    PubMed Central

    2013-01-01

    Background As fiscal constraints dominate health policy discussions across Canada and globally, priority-setting exercises are becoming more common to guide the difficult choices that must be made. In this context, it becomes highly desirable to have accurate estimates of the value of specific health care interventions. Economic evaluation is a well-accepted method to estimate the value of health care interventions. However, economic evaluation has significant limitations, which have led to an increase in the use of Multi-Criteria Decision Analysis (MCDA). One key concern with MCDA is the availability of the information necessary for implementation. In the fall of 2011, the Canadian Physiotherapy Association embarked on a project aimed at providing a valuation of physiotherapy services that is both evidence-based and relevant to resource allocation decisions. The framework selected for this project was MCDA. We report on how we addressed the challenge of obtaining some of the information necessary for MCDA implementation. Methods MCDA criteria were selected and areas of physiotherapy practice were identified. The building up of the necessary information base was a three-step process. First, there was a literature review for each practice area, on each criterion. The next step was to conduct interviews with experts in each of the practice areas to critique the results of the literature review and to fill in gaps where there was no or insufficient literature. Finally, the results of the individual interviews were validated by a national committee to ensure consistency across all practice areas and that a national-level perspective was applied. Results Despite a lack of research evidence on many of the considerations relevant to the estimation of the value of physiotherapy services (the criteria), sufficient information was obtained to facilitate MCDA implementation at the local level. Conclusions The results of this research project serve two purposes: 1) a method to

  18. Accurate prediction of personalized olfactory perception from large-scale chemoinformatic features.

    PubMed

    Li, Hongyang; Panwar, Bharat; Omenn, Gilbert S; Guan, Yuanfang

    2018-02-01

    The olfactory stimulus-percept problem has been studied for more than a century, yet it is still hard to precisely predict the odor given the large-scale chemoinformatic features of an odorant molecule. A major challenge is that the perceived qualities vary greatly among individuals due to different genetic and cultural backgrounds. Moreover, the combinatorial interactions between multiple odorant receptors and diverse molecules significantly complicate the olfaction prediction. Many attempts have been made to establish structure-odor relationships for intensity and pleasantness, but no models are available to predict the personalized multi-odor attributes of molecules. In this study, we describe our winning algorithm for predicting individual and population perceptual responses to various odorants in the DREAM Olfaction Prediction Challenge. We find that a random forest model consisting of multiple decision trees is well suited to this prediction problem, given the large feature spaces and high variability of perceptual ratings among individuals. Integrating both population and individual perceptions into our model effectively reduces the influence of noise and outliers. By analyzing the importance of each chemical feature, we find that a small set of low- and nondegenerative features is sufficient for accurate prediction. Our random forest model successfully predicts personalized odor attributes of structurally diverse molecules. This model together with the top discriminative features has the potential to extend our understanding of olfactory perception mechanisms and provide an alternative for rational odorant design.
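
    A minimal sketch of this kind of model, a random forest regressor over chemoinformatic features with per-feature importances, is shown below; the features, ratings, and hyperparameters are synthetic placeholders rather than those of the winning DREAM entry.

        # Illustrative random-forest regression on synthetic "chemoinformatic" features
        # predicting a perceptual rating, with per-feature importances.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n_molecules, n_features = 400, 50
        X = rng.normal(size=(n_molecules, n_features))
        # Only a handful of features actually drive the synthetic rating.
        y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 0.5, n_molecules)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

        print("test R^2:", round(model.score(X_te, y_te), 3))
        top = np.argsort(model.feature_importances_)[::-1][:5]
        print("most informative feature indices:", top)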

  19. The Evolution of the Chinese Armaments Industry from 1860 to Present: The Search for Self-Sufficiency

    DTIC Science & Technology

    1986-06-06

    The Evolution of the Chinese Armaments Industry from 1860 to Present: The Search for Self-Sufficiency. Personal author: Major Donald A. Green.

  20. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system

    PubMed Central

    Hinke, Jefferson T.; Perryman, Wayne L.; Goebel, Michael E.; LeRoi, Donald J.

    2017-01-01

    Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinnipeds. Leopard seals (Hydrurga leptonyx) are an apex predator in coastal Antarctica whose body condition could be a valuable indicator of ecosystem health. We aerially surveyed leopard seals of known body size and mass to test the precision and accuracy of photogrammetry from a small UAS. Flights were conducted in January and February of 2013 and 2014 and 50 photogrammetric samples were obtained from 15 unrestrained seals. UAS-derived measurements of standard length were accurate to within 2.01 ± 1.06%, and paired comparisons with ground measurements were statistically indistinguishable. An allometric linear mixed effects model predicted leopard seal mass within 19.40 kg (4.4% error for a 440 kg seal). Photogrammetric measurements from a single, vertical image obtained using UAS provide a noninvasive approach for estimating the mass and body condition of pinnipeds that may be widely applicable. PMID:29186134
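
    The mass prediction step can be illustrated, in simplified form, by fitting the fixed-effects part of an allometric model, log(mass) = a + b·log(length), to made-up length-mass pairs; the paper's actual model is an allometric linear mixed effects model with additional terms.

        # Simplified allometric fit (fixed effects only, synthetic data): mass is
        # modeled as a power law of standard length, log(mass) = a + b*log(length).
        import numpy as np

        rng = np.random.default_rng(4)
        length_cm = rng.uniform(250, 320, 40)                       # standard length
        true_mass = 2.4e-4 * length_cm ** 2.55                      # assumed power law
        mass_kg = true_mass * np.exp(rng.normal(0, 0.05, 40))       # measurement noise

        b, a = np.polyfit(np.log(length_cm), np.log(mass_kg), 1)    # slope, intercept
        predicted = np.exp(a) * length_cm ** b
        print(f"exponent b = {b:.2f}, mean abs. error = {np.mean(np.abs(predicted - mass_kg)):.1f} kg")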

  1. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system.

    PubMed

    Krause, Douglas J; Hinke, Jefferson T; Perryman, Wayne L; Goebel, Michael E; LeRoi, Donald J

    2017-01-01

    Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinnipeds. Leopard seals (Hydrurga leptonyx) are an apex predator in coastal Antarctica whose body condition could be a valuable indicator of ecosystem health. We aerially surveyed leopard seals of known body size and mass to test the precision and accuracy of photogrammetry from a small UAS. Flights were conducted in January and February of 2013 and 2014 and 50 photogrammetric samples were obtained from 15 unrestrained seals. UAS-derived measurements of standard length were accurate to within 2.01 ± 1.06%, and paired comparisons with ground measurements were statistically indistinguishable. An allometric linear mixed effects model predicted leopard seal mass within 19.40 kg (4.4% error for a 440 kg seal). Photogrammetric measurements from a single, vertical image obtained using UAS provide a noninvasive approach for estimating the mass and body condition of pinnipeds that may be widely applicable.

  2. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting. Calibrating GPS with TWSTFT for Accurate Time Transfer, Z. Jiang et al. The primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short); 83% of UTC time links are ...

  3. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide.

    PubMed

    Ross, Charles W; Simonsick, William J; Bogusky, Michael J; Celikay, Recep W; Guare, James P; Newton, Randall C

    2016-06-28

    Ceramides are a central unit of all sphingolipids which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated, displaying immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative ion sheath flow ESI provided data of starting materials and products all in one acquisition as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules complementing the results obtained from MS analyses. NMR data was able to differentiate straight chain versus branched chain alkyl groups not easily obtained from mass spectrometry.

  4. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
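
    One way to picture the transition statistics underlying such mental models is to count successive pairs of emotion reports in experience-sampling sequences and normalize each row into probabilities; the labels and sequences below are hypothetical.

        # Illustrative estimate of an emotion-transition matrix from experience-sampling
        # sequences: count successive pairs and normalize each row to probabilities.
        from collections import Counter

        emotions = ["calm", "happy", "anxious", "sad"]
        sequences = [                                    # hypothetical per-person reports
            ["calm", "happy", "happy", "anxious", "calm"],
            ["sad", "sad", "anxious", "calm", "happy"],
            ["happy", "calm", "calm", "sad", "anxious"],
        ]

        counts = Counter()
        for seq in sequences:
            for cur, nxt in zip(seq, seq[1:]):
                counts[(cur, nxt)] += 1

        for cur in emotions:
            row_total = sum(counts[(cur, nxt)] for nxt in emotions)
            probs = [counts[(cur, nxt)] / row_total if row_total else 0.0 for nxt in emotions]
            print(cur, [round(p, 2) for p in probs])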

  5. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  6. 76 FR 39115 - Notice of Proposed Information Collection: Transformation Initiative Family Self-Sufficiency...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... Information Collection: Transformation Initiative Family Self-Sufficiency Demonstration Small Grants AGENCY... information: Title of Proposal: Notice of Funding Availability for the Transformation Initiative Family Self..., think tanks, consortia, Institutions of higher education accredited by a national or regional...

  7. BAsE-Seq: a method for obtaining long viral haplotypes from short sequence reads.

    PubMed

    Hong, Lewis Z; Hong, Shuzhen; Wong, Han Teng; Aw, Pauline P K; Cheng, Yan; Wilm, Andreas; de Sessions, Paola F; Lim, Seng Gee; Nagarajan, Niranjan; Hibberd, Martin L; Quake, Stephen R; Burkholder, William F

    2014-01-01

    We present a method for obtaining long haplotypes, of over 3 kb in length, using a short-read sequencer, Barcode-directed Assembly for Extra-long Sequences (BAsE-Seq). BAsE-Seq relies on transposing a template-specific barcode onto random segments of the template molecule and assembling the barcoded short reads into complete haplotypes. We applied BAsE-Seq on mixed clones of hepatitis B virus and accurately identified haplotypes occurring at frequencies greater than or equal to 0.4%, with >99.9% specificity. Applying BAsE-Seq to a clinical sample, we obtained over 9,000 viral haplotypes, which provided an unprecedented view of hepatitis B virus population structure during chronic infection. BAsE-Seq is readily applicable for monitoring quasispecies evolution in viral diseases.
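
    The central bookkeeping step, grouping barcoded short reads by template and calling a per-position consensus, can be sketched as follows; the read format and majority-vote rule are simplified illustrations, not the BAsE-Seq software itself.

        # Simplified illustration of barcode-directed assembly bookkeeping: group reads
        # by barcode, then call a per-position majority-vote consensus per template.
        from collections import defaultdict, Counter

        # (barcode, start_position, read_sequence) -- toy aligned reads
        reads = [
            ("BC1", 0, "ACGT"), ("BC1", 2, "GTTA"), ("BC1", 4, "TACC"),
            ("BC2", 0, "ACGA"), ("BC2", 2, "GATA"), ("BC2", 4, "TAGC"),
        ]

        piles = defaultdict(lambda: defaultdict(Counter))   # barcode -> position -> base counts
        for barcode, start, seq in reads:
            for offset, base in enumerate(seq):
                piles[barcode][start + offset][base] += 1

        for barcode, columns in piles.items():
            consensus = "".join(columns[p].most_common(1)[0][0] for p in sorted(columns))
            print(barcode, consensus)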

  8. Accurate oscillator strengths for interstellar ultraviolet lines of Cl I

    NASA Technical Reports Server (NTRS)

    Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.

    1993-01-01

    Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for 1088, 1097 A represent the first laboratory measurements for these lines; the values are f(1088)=0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 A in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.

  9. Navigating behavioral energy sufficiency. Results from a survey in Swiss cities on potential behavior change.

    PubMed

    Seidl, Roman; Moser, Corinne; Blumer, Yann

    2017-01-01

    Many countries have some kind of energy-system transformation either planned or ongoing for various reasons, such as to curb carbon emissions or to compensate for the phasing out of nuclear energy. One important component of these transformations is the overall reduction in energy demand. It is generally acknowledged that the domestic sector represents a large share of total energy consumption in many countries. Increased energy efficiency is one factor that reduces energy demand, but behavioral approaches (known as "sufficiency") and their respective interventions also play important roles. In this paper, we address citizens' heterogeneity regarding both their current behaviors and their willingness to realize their sufficiency potentials-that is, to reduce their energy consumption through behavioral change. We collaborated with three Swiss cities for this study. A survey conducted in the three cities yielded thematic sets of energy-consumption behavior that various groups of participants rated differently. Using this data, we identified four groups of participants with different patterns of both current behaviors and sufficiency potentials. The paper discusses intervention types and addresses citizens' heterogeneity and behaviors from a city-based perspective.

  10. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    PubMed Central

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637
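
    The flavor of such a parsimonious expression-based test can be sketched with three synthetic gene-expression features, a simple classifier, and cross-validated accuracy; the genes, values, and learner below are placeholders, not the published three-gene rule.

        # Toy version of a parsimonious expression-based classifier: three gene
        # expression features, a simple classifier, and cross-validated accuracy.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_tumors = 176
        er_status = rng.integers(0, 2, n_tumors)                    # 1 = ER-positive
        # Synthetic expression of three informative genes (e.g. an ESR1-like signal).
        expr = rng.normal(size=(n_tumors, 3)) + er_status[:, None] * np.array([2.0, 1.2, -0.8])

        clf = LogisticRegression()
        scores = cross_val_score(clf, expr, er_status, cv=10)
        print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")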

  11. Social Work's Response to Poverty: From Benefits Dependence to Economic Self-Sufficiency

    ERIC Educational Resources Information Center

    Gates, Lauren B.; Koza, Jennifer; Akabas, Sheila H.

    2017-01-01

    Welfare reform in the 1990s represented a fundamental policy shift in the United States' response to poverty from supporting benefits dependency to promoting economic self-sufficiency. Social work's capacity to integrate this policy shift into practice is central to meeting its mission to alleviate poverty. This study looked at the preparation of…

  12. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
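
    The assignment rule reported above (genus when the top hit's percent identity exceeds 95, family when it is at least 91) can be written as a small function over BLAST top hits; the hit records below are hypothetical.

        # Sketch of the threshold rule described above: assign an unknown CO1 sequence
        # to the top hit's genus or family depending on percent sequence identity.
        def assign_higher_taxon(top_hit_family, top_hit_genus, percent_identity):
            """Return (rank, name) using the heuristic thresholds from the study."""
            if percent_identity > 95:
                return "genus", top_hit_genus
            if percent_identity >= 91:
                return "family", top_hit_family
            return "unassigned", None

        # Hypothetical top BLAST hits: (family, genus, PIdent of best match)
        queries = [("Araneidae", "Araneus", 97.2),
                   ("Salticidae", "Habronattus", 92.5),
                   ("Lycosidae", "Pardosa", 84.0)]

        for family, genus, pident in queries:
            print(pident, assign_higher_taxon(family, genus, pident))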

  13. Feasibility study for image guided kidney surgery: assessment of required intraoperative surface for accurate image to physical space registrations

    NASA Astrophysics Data System (ADS)

    Benincasa, Anne B.; Clements, Logan W.; Herrell, S. Duke; Chang, Sam S.; Cookson, Michael S.; Galloway, Robert L.

    2006-03-01

    Currently, the removal of kidney tumor masses uses only direct or laparoscopic visualizations, resulting in prolonged procedure and recovery times and reduced clear margin. Applying current image guided surgery (IGS) techniques, as those used in liver cases, to kidney resections (nephrectomies) presents a number of complications. Most notably is the limited field of view of the intraoperative kidney surface, which constrains the ability to obtain a surface delineation that is geometrically descriptive enough to drive a surface-based registration. Two different phantom orientations were used to model the laparoscopic and traditional partial nephrectomy views. For the laparoscopic view, fiducial point sets were compiled from a CT image volume using anatomical features such as the renal artery and vein. For the traditional view, markers attached to the phantom set-up were used for fiducials and targets. The fiducial points were used to perform a point-based registration, which then served as a guide for the surface-based registration. Laser range scanner (LRS) obtained surfaces were registered to each phantom surface using a rigid iterative closest point algorithm. Subsets of each phantom's LRS surface were used in a robustness test to determine the predictability of their registrations to transform the entire surface. Results from both orientations suggest that about half of the kidney's surface needs to be obtained intraoperatively for accurate registrations between the image surface and the LRS surface, suggesting the obtained kidney surfaces were geometrically descriptive enough to perform accurate registrations. This preliminary work paves the way for further development of kidney IGS systems.
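
    The registration step described here, rigid iterative closest point between an image-derived surface and a laser range scanner surface, can be sketched with a minimal nearest-neighbor plus SVD (Kabsch) loop on synthetic point clouds; this is a generic ICP illustration, not the authors' implementation.

        # Minimal rigid iterative-closest-point loop on synthetic 3-D point clouds:
        # nearest-neighbor correspondences followed by an SVD (Kabsch) rigid fit.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst."""
            src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            return R, dst.mean(0) - R @ src.mean(0)

        rng = np.random.default_rng(6)
        surface = rng.normal(size=(500, 3))                       # stand-in for image-space surface
        angle = np.deg2rad(10)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                           [np.sin(angle),  np.cos(angle), 0],
                           [0, 0, 1]])
        scan = surface @ R_true.T + np.array([0.3, -0.2, 0.1])    # stand-in for LRS surface

        moving, tree = scan.copy(), cKDTree(surface)
        for _ in range(30):                                       # ICP iterations
            _, idx = tree.query(moving)
            R, t = best_rigid_transform(moving, surface[idx])
            moving = moving @ R.T + t

        print("mean residual:", np.linalg.norm(moving - surface, axis=1).mean())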

  14. Necessary and sufficient condition for the realization of the complex wavelet

    NASA Astrophysics Data System (ADS)

    Keita, Alpha; Qing, Qianqin; Wang, Nengchao

    1997-04-01

    Wavelet theory is a relatively new signal analysis theory, and its appearance has attracted many experts in different fields to study it in depth. The wavelet transform is a time-frequency analysis method with localization that can be realized in the time domain or the frequency domain. It has many desirable characteristics that other time-frequency analysis methods, such as the Gabor transform or the Wigner-Ville distribution, lack: orthogonality, direction selectivity, variable time-frequency resolution, adjustable local support, compact representation of data, and so on. All of these make the wavelet transform an important new tool and method in the field of signal analysis. Because the computation of complex wavelets is difficult, real wavelet functions are used in applications. In this paper, we present a necessary and sufficient condition under which the real wavelet function can be obtained from the complex wavelet function. This theorem has significant theoretical value. The paper builds its technique on the Hartley transform. Hartley was a signal engineering expert; his transform had been overlooked for about 40 years, because the conditions of the time could not show its superiority. Only at the end of the 1970s and the early 1980s, after the development of fast Fourier transform algorithms and their hardware implementations, did such transforms, whose forward and inverse forms are identical, come to be taken seriously. The W transformation, proposed by Zhongde Wang, pushed forward the study of the Hartley transform and its fast algorithms. The kernel function of the Hartley transform.
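
    Since the Hartley kernel cas(2πkn/N) = cos(2πkn/N) + sin(2πkn/N), the discrete Hartley transform can be obtained from the FFT as Re(F) - Im(F); the short check below verifies this numerically.

        # The discrete Hartley transform uses the kernel cas(2*pi*k*n/N) = cos + sin,
        # so it can be obtained from the FFT as Re(F) - Im(F).  Quick numerical check.
        import numpy as np

        rng = np.random.default_rng(7)
        x = rng.normal(size=64)
        N = x.size

        n = np.arange(N)
        kernel = np.cos(2 * np.pi * np.outer(n, n) / N) + np.sin(2 * np.pi * np.outer(n, n) / N)
        dht_direct = kernel @ x                                  # direct O(N^2) definition

        F = np.fft.fft(x)
        dht_from_fft = F.real - F.imag                           # Hartley from Fourier

        print("max difference:", np.max(np.abs(dht_direct - dht_from_fft)))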

  15. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improves the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation increases the computational speed of rotational matching dramatically compared to rotation search in Cartesian space, without sacrificing accuracy, in contrast to other spherical-harmonic-based approaches. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20 Å and 16 Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Do measures of surgical effectiveness at 1 year after lumbar spine surgery accurately predict 2-year outcomes?

    PubMed

    Adogwa, Owoicho; Elsamadicy, Aladine A; Han, Jing L; Cheng, Joseph; Karikari, Isaac; Bagley, Carlos A

    2016-12-01

    OBJECTIVE With the recent passage of the Patient Protection and Affordable Care Act, there has been a dramatic shift toward critical analyses of quality and longitudinal assessment of subjective and objective outcomes after lumbar spine surgery. Accordingly, the emergence and routine use of real-world institutional registries have been vital to the longitudinal assessment of quality. However, prospectively obtaining longitudinal outcomes for patients at 24 months after spine surgery remains a challenge. The aim of this study was to assess whether 12-month measures of treatment effectiveness accurately predict long-term outcomes (24 months). METHODS A nationwide, multi-institutional, prospective spine outcomes registry was used for this study. Enrollment criteria included available demographic, surgical, and clinical outcomes data. All patients had prospectively collected outcomes measures and a minimum 2-year follow-up. Patient-reported outcomes instruments (Oswestry Disability Index [ODI], SF-36, and visual analog scale [VAS]-back pain/leg pain) were completed before surgery and then at 3, 6, 12, and 24 months after surgery. The Health Transition Index of the SF-36 was used to determine the 1- and 2-year minimum clinically important difference (MCID), and logistic regression modeling was performed to determine if achieving MCID at 1 year adequately predicted improvement and achievement of MCID at 24 months. RESULTS The study group included 969 patients: 300 patients underwent anterior lumbar interbody fusion (ALIF), 606 patients underwent transforaminal lumbar interbody fusion (TLIF), and 63 patients underwent lateral lumbar interbody fusion (LLIF). There was a significant correlation between the 12- and 24-month ODI (r = 0.82; p < 0.0001), SF-36 Physical Component Summary score (r = 0.89; p < 0.0001), VAS-back pain (r = 0.90; p < 0.0001), and VAS-leg pain (r = 0.85; p < 0.0001). For the ALIF cohort, patients achieving MCID thresholds for ODI at 12 months were 13-fold (p < 0
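
    The modelling step described above, correlating 12- and 24-month scores and regressing 24-month MCID achievement on 12-month MCID achievement, can be sketched as follows. The synthetic scores and the ODI MCID threshold below are assumptions for illustration only, not the registry data or the study's exact model.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for 12- and 24-month ODI improvement scores.
odi_12m = rng.normal(20, 10, 500)
odi_24m = odi_12m + rng.normal(0, 6, 500)

r, p = pearsonr(odi_12m, odi_24m)
print(f"12- vs 24-month correlation: r = {r:.2f}, p = {p:.1e}")

mcid = 12.8                                        # illustrative ODI MCID threshold
achieved_12m = (odi_12m >= mcid).astype(int).reshape(-1, 1)
achieved_24m = (odi_24m >= mcid).astype(int)

model = LogisticRegression().fit(achieved_12m, achieved_24m)
odds_ratio = np.exp(model.coef_[0][0])             # odds of 24-month MCID given 12-month MCID
auc = roc_auc_score(achieved_24m, model.predict_proba(achieved_12m)[:, 1])
print(f"odds ratio = {odds_ratio:.1f}, AUC = {auc:.2f}")
```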

  17. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kearny, C.H.

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify

  18. Anhydrobiotic engineering of bacterial and mammalian cells: is intracellular trehalose sufficient?

    PubMed

    Tunnacliffe, A; García de Castro, A; Manzanera, M

    2001-09-01

    Anhydrobiotic engineering aims to confer a high degree of desiccation tolerance on otherwise sensitive living organisms and cells by adopting the strategies of anhydrobiosis. Nonreducing disaccharides such as trehalose and sucrose are thought to play a pivotal role in resistance to desiccation stress in many microorganisms, invertebrates, and plants, and in vitro trehalose is known to confer stability on dried biomolecules and biomembranes. We have therefore tested the hypothesis that intracellular trehalose (or a similar molecule) may be not only necessary for anhydrobiosis but also sufficient. High concentrations of trehalose were produced in bacteria by osmotic preconditioning, and in mammalian cells by genetic engineering, but in neither system was desiccation tolerance similar to that seen in anhydrobiotic organisms, suggesting that trehalose alone is not sufficient for anhydrobiosis. In Escherichia coli such desiccation tolerance was achievable, but only when bacteria were dried in the presence of both extracellular trehalose and intracellular trehalose. In mouse L cells, improved osmotolerance was observed with up to 100 mM intracellular trehalose, but desiccation was invariably lethal even with extracellular trehalose present. We conclude that anhydrobiotic engineering of at least some microorganisms is achievable with present technology, but that further advances are needed for similar desiccation tolerance of mammalian cells. Copyright 2001 Elsevier Science (USA).

  19. Attended but unseen: visual attention is not sufficient for visual awareness.

    PubMed

    Kentridge, R W; Nijboer, T C W; Heywood, C A

    2008-02-12

    Does any one psychological process give rise to visual awareness? One candidate is selective attention-when we attend to something it seems we always see it. But if attention can selectively enhance our response to an unseen stimulus then attention cannot be a sufficient precondition for awareness. Kentridge, Heywood & Weiskrantz [Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (1999). Attention without awareness in blindsight. Proceedings of the Royal Society of London, Series B, 266, 1805-1811; Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (2004). Spatial attention speeds discrimination without awareness in blindsight. Neuropsychologia, 42, 831-835.] demonstrated just such a dissociation in the blindsight subject GY. Here, we test whether the dissociation generalizes to the normal population. We presented observers with pairs of coloured discs, each masked by the subsequent presentation of a coloured annulus. The discs acted as primes, speeding discrimination of the colour of the annulus when they matched in colour and slowing it when they differed. We show that the location of attention modulated the size of this priming effect. However, the primes were rendered invisible by metacontrast-masking and remained unseen despite being attended. Visual attention could therefore facilitate processing of an invisible target and cannot, therefore, be a sufficient precondition for visual awareness.

  20. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    PubMed

    Prinsloo, Jaco; Malekian, Reza

    2016-06-04

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global System for Mobile communications (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low-frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  1. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    PubMed Central

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global System for Mobile communications (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low-frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  2. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need
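
    The extraction step described above amounts to a per-time-step linear solve for the six unknown acceleration components, with the nonlinear centripetal term evaluated from an angular velocity carried over from previous steps. The sketch below illustrates that structure; the sensor positions, orientations, measurement model details, and noise handling are placeholders rather than the authors' published formulation.

```python
import numpy as np

def extract_kinematics(readings, positions, directions, dt):
    """Recover linear acceleration a(t) and angular acceleration alpha(t) from six
    single-axis accelerometers.  Each sensor i at position r_i with sensing axis n_i
    is assumed to measure  n_i . (a + alpha x r_i + w x (w x r_i)).  The centripetal
    term uses the angular velocity integrated from previous steps, i.e. a
    finite-difference linearization.
    readings: (T, 6), positions and directions: (6, 3), dt: sample interval."""
    T = readings.shape[0]
    a_out, alpha_out = np.zeros((T, 3)), np.zeros((T, 3))
    w = np.zeros(3)                                   # angular-velocity state
    for t in range(T):
        A, b = np.zeros((6, 6)), np.zeros(6)
        for i in range(6):
            n, r = directions[i], positions[i]
            A[i, :3] = n                              # coefficients of a:     n . a
            A[i, 3:] = np.cross(r, n)                 # coefficients of alpha: n . (alpha x r) = alpha . (r x n)
            b[i] = readings[t, i] - n @ np.cross(w, np.cross(w, r))
        x = np.linalg.solve(A, b)                     # assumes a well-conditioned sensor layout
        a_out[t], alpha_out[t] = x[:3], x[3:]
        w = w + alpha_out[t] * dt                     # update omega for the next step
    return a_out, alpha_out
```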

  3. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2007-10-15

    An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.

  4. Accurate millimetre and submillimetre rest frequencies for cis- and trans-dithioformic acid, HCSSH

    NASA Astrophysics Data System (ADS)

    Prudenzano, D.; Laas, J.; Bizzocchi, L.; Lattanzi, V.; Endres, C.; Giuliano, B. M.; Spezzano, S.; Palumbo, M. E.; Caselli, P.

    2018-04-01

    Context. A better understanding of sulphur chemistry is needed to solve the interstellar sulphur depletion problem. A way to achieve this goal is to study new S-bearing molecules in the laboratory, obtaining accurate rest frequencies for an astronomical search. We focus on dithioformic acid, HCSSH, which is the sulphur analogue of formic acid. Aims: The aim of this study is to provide an accurate line list of the two HCSSH trans and cis isomers in their electronic ground state and a comprehensive centrifugal distortion analysis with an extension of measurements in the millimetre and submillimetre range. Methods: We studied the two isomers in the laboratory using an absorption spectrometer employing the frequency-modulation technique. The molecules were produced directly within a free-space cell by glow discharge of a gas mixture. We measured lines belonging to the electronic ground state up to 478 GHz, with a total number of 204 and 139 new rotational transitions, respectively, for the trans and cis isomers. The final dataset also includes lines in the centimetre range available from the literature. Results: The extension of the measurements into the mm and submm range leads to an accurate set of rotational and centrifugal distortion parameters. This allows us to predict frequencies with estimated uncertainties as low as 5 kHz at 1 mm wavelength. Hence, the new dataset provided by this study can be used for astronomical searches. Frequency lists are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A56

  5. SPEX: a highly accurate spectropolarimeter for atmospheric aerosol characterization

    NASA Astrophysics Data System (ADS)

    Rietjens, J. H. H.; Smit, J. M.; di Noia, A.; Hasekamp, O. P.; van Harten, G.; Snik, F.; Keller, C. U.

    2017-11-01

    Global characterization of atmospheric aerosol in terms of the microphysical properties of the particles is essential for understanding the role of aerosols in the Earth's climate [1]. For more accurate predictions of future climate, the uncertainties of the net radiative forcing of aerosols in the Earth's atmosphere must be reduced [2]. Essential parameters that are needed as input in climate models are not only the aerosol optical thickness (AOT), but also particle-specific properties such as the aerosol mean size, the single scattering albedo (SSA) and the complex refractive index. The latter can be used to discriminate between absorbing and non-absorbing aerosol types, and between natural and anthropogenic aerosol. Classification of aerosol types is also very important for air-quality and health-related issues [3]. Remote sensing from an orbiting satellite platform is the only way to globally characterize atmospheric aerosol at a relevant timescale of 1 day [4]. One of the few methods that can be employed for measuring the microphysical properties of aerosols is to observe both the radiance and the degree of linear polarization of sunlight scattered in the Earth's atmosphere under different viewing directions [5][6][7]. The requirement on the absolute accuracy of the degree of linear polarization PL is very stringent: the absolute error in PL must be smaller than 0.001 + 0.005·PL in order to retrieve aerosol parameters with sufficient accuracy to advance climate modelling and to enable discrimination of aerosol types based on their refractive index for air-quality studies [6][7]. In this paper we present the SPEX instrument, which is a multi-angle spectropolarimeter that can comply with the polarimetric accuracy needed for characterizing aerosols in the Earth's atmosphere. We describe the implementation of spectral polarization modulation in a prototype instrument of SPEX and show results of ground-based measurements from which aerosol microphysical properties are retrieved.

  6. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  7. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  8. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, related to the activities of the CERGOP Study Group "Geodynamics of the Balkan Peninsula", presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components xi, eta of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Δ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components xi, eta of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods are not capable of giving accurate and reliable results.
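
    The quantities involved can be written compactly: the deflection-of-the-vertical components are differences between astronomical and geodetic coordinates, and the undulation change follows from astronomical levelling along the baseline. The sketch below encodes these standard textbook relations; it is not the authors' processing chain, and the function and variable names are illustrative.

```python
import numpy as np

def deflection_components(phi_astro, lam_astro, phi_geod, lam_geod):
    """Deflection of the vertical from astronomical (Phi, Lambda) and geodetic
    (phi, lambda) coordinates, all in radians."""
    xi = phi_astro - phi_geod                        # meridian component
    eta = (lam_astro - lam_geod) * np.cos(phi_geod)  # prime-vertical component
    return xi, eta

def delta_N(xi1, eta1, xi2, eta2, azimuth, distance):
    """Astronomical levelling between two stations: the change in geoid undulation
    is minus the mean deflection along the line times the distance (in metres)."""
    eps1 = xi1 * np.cos(azimuth) + eta1 * np.sin(azimuth)
    eps2 = xi2 * np.cos(azimuth) + eta2 * np.sin(azimuth)
    return -0.5 * (eps1 + eps2) * distance
```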

  9. Temporal patterns of inputs to cerebellum necessary and sufficient for trace eyelid conditioning.

    PubMed

    Kalmbach, Brian E; Ohyama, Tatsuya; Mauk, Michael D

    2010-08-01

    Trace eyelid conditioning is a form of associative learning that requires several forebrain structures and cerebellum. Previous work suggests that at least two conditioned stimulus (CS)-driven signals are available to the cerebellum via mossy fiber inputs during trace conditioning: one driven by and terminating with the tone and a second driven by medial prefrontal cortex (mPFC) that persists through the stimulus-free trace interval to overlap in time with the unconditioned stimulus (US). We used electric stimulation of mossy fibers to determine whether this pattern of dual inputs is necessary and sufficient for cerebellar learning to express normal trace eyelid responses. We find that presenting the cerebellum with one input that mimics persistent activity observed in mPFC and the lateral pontine nuclei during trace eyelid conditioning and another that mimics tone-elicited mossy fiber activity is sufficient to produce responses whose properties quantitatively match trace eyelid responses using a tone. Probe trials with each input delivered separately provide evidence that the cerebellum learns to respond to the mPFC-like input (that overlaps with the US) and learns to suppress responding to the tone-like input (that does not). This contributes to precisely timed responses and the well-documented influence of tone offset on the timing of trace responses. Computer simulations suggest that the underlying cerebellar mechanisms involve activation of different subsets of granule cells during the tone and during the stimulus-free trace interval. These results indicate that tone-driven and mPFC-like inputs are necessary and sufficient for the cerebellum to learn well-timed trace conditioned responses.

  10. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A.; Sondhi, S. L.

    2017-10-01

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than the direct, many-body, exact diagonalization in the spin variables is able to access. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.

  11. Gestation-specific changes in maternal thyroglobulin during pregnancy and lactation in an iodine-sufficient region in China: a longitudinal study.

    PubMed

    Zhang, Xiaowen; Li, Chenyan; Mao, Jinyuan; Wang, Weiwei; Xie, Xiaochen; Peng, Shiqiao; Wang, Zhaojun; Han, Cheng; Zhang, Xiaomei; Wang, Danyang; Fan, Chenling; Shan, Zhongyan; Teng, Weiping

    2017-02-01

    To describe the changes in thyroglobulin (Tg) based upon gestational and postpartum concentrations in healthy pregnant women from an iodine-sufficient region in China, and to evaluate the use of Tg as a biomarker for iodine-sufficient pregnant women. A longitudinal study of Tg change in normal pregnant women from an iodine-sufficient region. Blood and urine samples were obtained from 133 pregnant women. Urinary iodine concentration (UIC) was measured using an ammonium persulfate method. Serum iodine concentration was measured by inductively coupled plasma mass spectrometry (ICP-MS). Serum thyroid-stimulating hormone (TSH), free thyroxine (FT4), free triiodothyronine (FT3), total thyroxine (TT4), total triiodothyronine (TT3), antithyroid peroxidase antibody (TPOAb), antithyroglobulin antibody (TgAb) and Tg were measured using an electrochemiluminescence immunoassay. Thyroglobulin concentrations were higher in early pregnancy (pregnancy at 8 weeks vs nonpregnancy: 11·42 ng/ml vs 8·8 ng/ml, P < 0·01) and maintained a stable level, and then increased greatly at the 36th week. After delivery, Tg decreased to nonpregnant levels. During pregnancy, maternal Tg was not correlated with thyroid function, UIC or urine iodine-creatinine ratio (UI/Cr). Cord blood Tg was much higher compared to maternal Tg levels at the 36th week (57·34 vs 14·86 ng/ml, P < 0·001) and correlated positively with cord FT4 (r = 0·256, P < 0·05), cord TT4 (r = 0·263, P < 0·05) and maternal UI/Cr at the 36th week (r = -0·214, P < 0·05). Our work demonstrates that Tg is elevated during pregnancy, and the effect of pregnancy should be taken into consideration when Tg is used as a biomarker for the iodine status. Cord blood Tg is much higher than maternal Tg levels at the 36th week and is correlated with maternal iodine status. © 2016 John Wiley & Sons Ltd.

  12. The Development of University Students' Self-Sufficiency Based on Interactive Technologies by Their Immersion in the Professional Activity

    ERIC Educational Resources Information Center

    Ljubimova, Elena Mikhaelovna; Galimullina, Elvira Zufarovna; Ibatullin, Rinat Rivkatovich

    2015-01-01

    The article discusses the problems of using web technologies in the development of self-sufficiency of University students. We hypothesize that real professional situations in which he/she is obliged to work independently on the basis of web technologies contribute to the development of students' self-sufficiency. It is shown that the activity…

  13. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  14. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
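
    A simplified version of the workflow described above, superposition of an ensemble followed by principal components analysis of the positional covariance, is sketched below. For brevity it uses an ordinary least-squares (Kabsch) superposition and a plain sample covariance rather than the paper's maximum likelihood estimators, so it should be read only as an outline of the idea.

```python
import numpy as np

def superpose(ref, mobile):
    """Ordinary least-squares (Kabsch) superposition of one conformer onto a reference.
    The paper uses maximum-likelihood weighting instead, which is not reproduced here."""
    ref_c, mob_c = ref - ref.mean(0), mobile - mobile.mean(0)
    U, _, Vt = np.linalg.svd(mob_c.T @ ref_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid improper rotations
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return mob_c @ R.T + ref.mean(0)

def structural_pca(ensemble):
    """PCA of positional correlations across an ensemble of (n_atoms, 3) structures."""
    ref = ensemble[0]
    aligned = np.array([superpose(ref, s) for s in ensemble])
    X = aligned.reshape(len(ensemble), -1)          # (n_models, 3 * n_atoms)
    X -= X.mean(0)
    cov = X.T @ X / (len(ensemble) - 1)             # sample covariance of coordinates
    evals, evecs = np.linalg.eigh(cov)
    return evals[::-1], evecs[:, ::-1]              # dominant modes first
```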

  15. 49 CFR 40.263 - What happens when an employee is unable to provide a sufficient amount of saliva for an alcohol...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... sufficient amount of saliva for an alcohol screening test? (a) As the STT, you must take the following steps if an employee is unable to provide sufficient saliva to complete a test on a saliva screening device (e.g., the employee does not provide sufficient saliva to activate the device). (1) You must conduct...

  16. Mobility-based correction for accurate determination of binding constants by capillary electrophoresis-frontal analysis.

    PubMed

    Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y

    2017-06-01

    Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing methods impose specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements. Therefore, the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of this method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constants obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
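
    Independent of the mobility-based correction itself, the final step of extracting a binding constant from frontal-analysis data is typically a fit of a 1:1 binding isotherm to free- and bound-ligand concentrations. The sketch below shows such a fit with hypothetical numbers; the data values, units, and fixed host concentration are assumptions for illustration, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound_ligand(free, K, host_total):
    """1:1 binding isotherm: bound = host_total * K * free / (1 + K * free)."""
    return host_total * K * free / (1.0 + K * free)

# Hypothetical CE-FA data: free-ligand concentrations (e.g. from corrected plateau
# heights) and the corresponding bound concentrations, both in mM.
free = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
bound = np.array([0.06, 0.11, 0.20, 0.38, 0.56, 0.72])
host_total = 1.0                                     # mM, held fixed in the fit

popt, pcov = curve_fit(lambda f, K: bound_ligand(f, K, host_total),
                       free, bound, p0=[1.0])
K_fit, K_err = popt[0], np.sqrt(pcov[0, 0])
print(f"binding constant K = {K_fit:.2f} +/- {K_err:.2f} mM^-1")
```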

  17. Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies

    PubMed Central

    Nguyen, Duc D.; Wang, Bao

    2017-01-01

    The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with solvent-excluded surfaces (SESs) for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at a grid spacing of 1.0 Å compared to ΔGel at 0.2 Å, averaged over 153 molecules, is less than 0.2%. Our results indicate that the use of a grid spacing of 0.6 Å ensures accuracy and reliability in ΔΔGel calculation. In fact, a grid spacing of 1.1 Å appears to deliver adequate accuracy for high-throughput screening. PMID:28211071

  18. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
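
    The mapping from ideal to detected pixel coordinates in such a calibration is commonly modelled as a bivariate polynomial fitted by least squares. The sketch below shows that generic step only; the polynomial order, basis, and the photodiode detection pipeline used in the paper are not specified here, so the details should be treated as assumptions.

```python
import numpy as np

def poly_terms(x, y, order=3):
    """Bivariate polynomial basis up to the given total order: 1, x, y, x^2, x*y, y^2, ..."""
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_distortion(ideal_xy, measured_xy, order=3):
    """Least-squares polynomial distortion model mapping ideal projector pixel
    coordinates to the coordinates actually detected (e.g. by the photodiodes)."""
    A = poly_terms(ideal_xy[:, 0], ideal_xy[:, 1], order)
    coeff_x, *_ = np.linalg.lstsq(A, measured_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, measured_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_distortion(xy, coeff_x, coeff_y, order=3):
    """Predict distorted coordinates for new pixels using the fitted coefficients."""
    A = poly_terms(xy[:, 0], xy[:, 1], order)
    return np.column_stack([A @ coeff_x, A @ coeff_y])
```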

  19. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new SIMULINK-based behavioral simulator for performing rapid and accurate simulations of all-digital CMOS time-domain smart temperature sensors (TDSTSs). Inverter-based TDSTSs, which offer the benefits of low cost and a simple structure for temperature-to-digital conversion, have been developed previously. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluation. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed that substantially accelerates the simulation and simplifies the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator agree favorably with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  20. Are patients referred to rehabilitation diagnosed accurately?

    PubMed

    Tederko, Piotr; Krasuski, Marek; Nyka, Izabella; Mycielski, Jerzy; Tarnacka, Beata

    2017-07-17

    An accurate diagnosis of the leading health condition and comorbidities is a prerequisite for safe and effective rehabilitation. The problem of diagnostic errors in physical and rehabilitation medicine (PRM) has not been addressed sufficiently. The responsibility of a referring physician is to determine indications and contraindications for rehabilitation. To assess the rate of and risk factors for inaccurate referral diagnoses (RD) in patients referred to a rehabilitation facility. We hypothesized that inaccurate RD would be more common in 1) patients referred by non-PRM physicians; 2) patients waiting longer for admission; 3) older patients. Retrospective observational study. 1000 randomly selected patients admitted between 2012 and 2016 to a day-rehabilitation center (DRC). University DRC specialized in musculoskeletal diseases. On admission all cases underwent clinical verification of the RD. Inaccuracies regarding primary diagnoses and comorbidities were noted. The influence of several factors on the probability of an inaccurate RD was analyzed with a multiple binary regression model applied to 6 categories of diseases. The rate of inaccurate RD was 25.2%. A higher frequency of inaccurate RD was noted among patients referred by non-PRM specialists (30.3% vs 17.3% in cases referred by PRM specialists). Application of logit regression showed a highly significant influence of the specialty of the referring physician on the odds of an inaccurate RD (joint Wald test chi2(6) = 38.98, p-value = 0.000), controlling for the influence of other variables. This may reflect suboptimal knowledge of the rehabilitation process and a tendency to neglect comorbidities by non-PRM specialists. The rate of inaccurate RD did not correlate with the time between referral and admission (joint Wald test of all odds ratios equal to 1, chi2(6) = 5.62, p-value = 0.467); however, mean and median waiting times were relatively short (35.7 and 25 days respectively). A high risk of overlooked multimorbidity was

  1. Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana

    2017-03-01

    Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.
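
    A schematic version of the fusion step, concatenating feature blocks from the different domains, feeding them to a small neural network, and scoring with the area under the ROC curve, is given below. The synthetic features and labels stand in for the study's CT-HN, TMA, and molecular data, and the network size and cross-validation scheme are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-ins for radiomic, histopathologic and molecular feature blocks
# of patients with known progression labels.
n = 120
radiomics = rng.normal(size=(n, 10))
histology = rng.normal(size=(n, 6))
molecular = rng.normal(size=(n, 4))
progressed = (radiomics[:, 0] + molecular[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

X = np.hstack([radiomics, histology, molecular])   # merged multi-domain feature vector
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
scores = cross_val_predict(clf, X, progressed, cv=5, method="predict_proba")[:, 1]
print("combined-model AUC:", round(roc_auc_score(progressed, scores), 2))
```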

  2. A Fiji multi-coral δ18O composite approach to obtaining a more accurate reconstruction of the last two-centuries of the ocean-climate variability in the South Pacific Convergence Zone region

    NASA Astrophysics Data System (ADS)

    Dassié, Emilie P.; Linsley, Braddock K.; Corrège, Thierry; Wu, Henry C.; Lemley, Gavin M.; Howe, Steve; Cabioch, Guy

    2014-12-01

    The limited availability of oceanographic data in the tropical Pacific Ocean prior to the satellite era makes coral-based climate reconstructions a key tool for extending the instrumental record back in time, thereby providing a much-needed test for climate models and projections. We have generated a unique regional network consisting of five Porites coral δ18O time series from different locations in the Fijian archipelago. Our results indicate that using a minimum of three Porites coral δ18O records from Fiji is statistically sufficient to obtain a reliable signal for climate reconstruction, and that application of an approach used in tree ring studies is a suitable tool to determine this number. The coral δ18O composite indicates that while sea surface temperature (SST) variability is the primary driver of seasonal δ18O variability in these Fiji corals, annual average coral δ18O is more closely correlated to sea surface salinity (SSS), as previously reported. Our results highlight the importance of water mass advection in controlling Fiji coral δ18O and salinity variability at interannual and decadal time scales despite being located in the heavy rainfall region of the South Pacific Convergence Zone (SPCZ). The Fiji δ18O composite presents a secular freshening and warming trend since the 1850s, coupled with changes in both interannual (IA) and decadal/interdecadal (D/I) variance. The changes in IA and D/I variance suggest a re-organization of climatic variability in the SPCZ region beginning in the late 1800s toward a period of more dominant interannual variability, which could correspond to a southeast expansion of the SPCZ.
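
    The "approach used in tree ring studies" for deciding how many records are needed is, presumably, a mean inter-series correlation / expressed population signal (EPS) calculation; that attribution is an assumption, not stated in the abstract. Under that assumption, a minimal sketch looks like this:

```python
import numpy as np

def expressed_population_signal(series):
    """EPS statistic from tree-ring chronology work: how well the mean of N records
    represents the hypothetical population signal.
    series: (N, T) array of overlapping, normalised coral d18O records."""
    N = series.shape[0]
    corr = np.corrcoef(series)
    rbar = corr[np.triu_indices(N, k=1)].mean()      # mean inter-series correlation
    return (N * rbar) / (N * rbar + (1.0 - rbar))

def min_records_needed(series, threshold=0.85):
    """Smallest number of records whose EPS exceeds the usual 0.85 acceptance level."""
    rbar = np.corrcoef(series)[np.triu_indices(series.shape[0], k=1)].mean()
    for n in range(1, series.shape[0] + 1):
        if (n * rbar) / (n * rbar + (1.0 - rbar)) >= threshold:
            return n
    return None
```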

  3. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work is contributed which substantially enhances existing performance-analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system model was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding with minimal computational complexity.

  4. Auditory processing deficits are sometimes necessary and sometimes sufficient for language difficulties in children: Evidence from mild to moderate sensorineural hearing loss.

    PubMed

    Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart

    2017-09-01

    There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, and were consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, and were consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, and were consistent with an association model. Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to

  5. ACCURATE: Greenhouse Gas Profiles Retrieval from Combined IR-Laser and Microwave Occultation Measurements

    NASA Astrophysics Data System (ADS)

    Proschek, Veronika; Kirchengast, Gottfried; Schweitzer, Susanne; Fritzer, Johannes

    2010-05-01

    absorption line and one being a close-by reference outside of any absorption lines. The reference signal is used to remove atmospheric "broadband" effects by this "differential absorption" approach. Refractivity and impact parameter of the LIO signals, needed for the retrieval, can be computed from the LMO-derived thermodynamic profiles. An Abel transform converts the differential LIO log-transmission profile to the absorption coefficient. Based on the absorption coefficient and the absorption cross section of the GHG under investigation, which can likewise be computed from the LMO-derived profiles, the number density profile or volume mixing ratio of the desired GHG can finally be derived. When using several LIO signals, each best sensitive to the same GHG at different heights, a joint optimal GHG profile can be constructed by combining the individual profiles in an inverse-variance-weighted manner (practically used for H2O, obtained from 3-4 signals, and for CO2, obtained from 2 isotope signals). The thermodynamic parameters (temperature, pressure and humidity) derived from LMO as basis for the LIO retrieval are found to be accurate to better than 0.5 K for temperature, 0.2% for pressure, and 10% for humidity. The accuracy of retrieved trace species profiles is found to be better than 1% to 4% for single profiles in the UTLS region (outside clouds, which block infrared) and the profiles are essentially unbiased (biases
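
    The retrieval step described above, converting a differential log-transmission profile sampled on impact parameter into an absorption-coefficient profile, is an inverse Abel transform. The sketch below gives a crude trapezoidal implementation of that single step; the grid, the handling of the integrable singularity, and the refraction terms of the full LMO/LIO retrieval are all simplified assumptions.

```python
import numpy as np

def abel_inversion(a, tau):
    """Numerical inverse Abel transform: recover an absorption-coefficient profile
    alpha(r) from optical depth tau(a) sampled on ascending impact parameters a,
        alpha(r) = -(1/pi) * integral_r^inf (dtau/da) / sqrt(a**2 - r**2) da.
    The singular point a = r is simply skipped, which is adequate for a sketch."""
    dtau_da = np.gradient(tau, a)
    alpha = np.zeros_like(a, dtype=float)
    for i, r in enumerate(a[:-1]):
        aa = a[i + 1:]                          # integrate above the tangent radius
        f = dtau_da[i + 1:] / np.sqrt(aa**2 - r**2)
        alpha[i] = -np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(aa)) / np.pi
    return alpha
```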

  6. Mating motives are neither necessary nor sufficient to create the beauty premium.

    PubMed

    Hafenbrädl, Sebastian; Dana, Jason

    2017-01-01

    Mating motives lead decision makers to favor attractive people, but this favoritism is not sufficient to create a beauty premium in competitive settings. Further, economic approaches to discrimination, when correctly characterized, could neatly accommodate the experimental and field evidence of a beauty premium. Connecting labor economics and evolutionary psychology is laudable, but mating motives do not explain the beauty premium.

  7. Prevalence and determinants of sufficient fruit and vegetable consumption among primary school children in Nakhon Pathom, Thailand

    PubMed Central

    Piaseu, Noppawan

    2017-01-01

    BACKGROUND/OBJECTIVES Low consumption of fruit and vegetable is frequently viewed as an important contributor to obesity risk. With increasing childhood obesity and relatively low fruit and vegetable consumption among Thai children, there is a need to identify the determinants of the intake to promote fruit and vegetable consumption effectively. SUBJECTS/METHODS This cross-sectional study was conducted at two conveniently selected primary schools in Nakhon Pathom. A total of 609 students (grade 4-6) completed questionnaires on personal and environmental factors. Adequate fruit and vegetable intakes were defined as a minimum of three servings of fruit or vegetable daily, and adequate total intake as at least six servings of fruit and vegetable daily. Data were analyzed using descriptive statistics, the chi-square test, and multiple logistic regression. RESULTS The proportion of children with sufficient fruit and/or vegetable intakes was low. Covariates of child's personal and environmental factors showed significant associations with sufficient intakes of fruit and/or vegetable (P < 0.05). Logistic regression analyses showed that the following factors were positively related to sufficient intake of vegetable: lower grade, a positive attitude toward vegetable, and fruit availability at home; and that greater maternal education, a positive child's attitude toward vegetable, and fruit availability at home were significantly associated with sufficient consumption of fruits and total fruit and vegetable intake. CONCLUSIONS The present study showed that personal factors like attitude toward vegetables and socio-environmental factors, such as greater availability of fruits, were significantly associated with sufficient fruit and vegetable consumption. The importance of environmental and personal factors to successful nutrition highlights the importance of involving parents and schools. PMID:28386386

  8. Prevalence and determinants of sufficient fruit and vegetable consumption among primary school children in Nakhon Pathom, Thailand.

    PubMed

    Hong, Seo Ah; Piaseu, Noppawan

    2017-04-01

    Low consumption of fruit and vegetables is frequently viewed as an important contributor to obesity risk. With increasing childhood obesity and relatively low fruit and vegetable consumption among Thai children, there is a need to identify the determinants of intake in order to promote fruit and vegetable consumption effectively. This cross-sectional study was conducted at two conveniently selected primary schools in Nakhon Pathom. A total of 609 students (grades 4-6) completed questionnaires on personal and environmental factors. Adequate fruit and vegetable intakes were defined as a minimum of three servings of fruit or vegetables daily, and adequate total intake as at least six servings of fruit and vegetables daily. Data were analyzed using descriptive statistics, the chi-square test, and multiple logistic regression. The proportion of children with sufficient fruit and/or vegetable intakes was low. Covariates among the children's personal and environmental factors showed significant associations with sufficient intakes of fruit and/or vegetables (P < 0.05). Logistic regression analyses showed that the following factors were positively related to sufficient vegetable intake: lower grade, a positive attitude toward vegetables, and fruit availability at home; and that greater maternal education, a positive attitude toward vegetables, and fruit availability at home were significantly associated with sufficient consumption of fruit and with total fruit and vegetable intake. The present study showed that personal factors, such as attitude toward vegetables, and socio-environmental factors, such as greater availability of fruit, were significantly associated with sufficient fruit and vegetable consumption. The relevance of both environmental and personal factors to successful nutrition highlights the importance of involving parents and schools.

  9. Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus.

    PubMed

    Hirschmann, J; Schoffelen, J M; Schnitzler, A; van Gerven, M A J

    2017-10-01

    To investigate the possibility of tremor detection based on deep brain activity, we re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs) which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features. Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High frequency oscillations (>200 Hz) were the best performing individual feature. LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used. LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
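    A minimal sketch of the multi-feature approach described above, assuming band-limited log-power features computed with SciPy and a two-state Gaussian HMM from the hmmlearn package. The sampling rate, the frequency bands (including a >200 Hz band) and the unsupervised two-state fit are illustrative assumptions and do not reproduce the authors' trained models.

    ```python
    import numpy as np
    from scipy.signal import welch
    from hmmlearn.hmm import GaussianHMM   # third-party package, assumed installed

    FS = 2000                                           # LFP sampling rate in Hz (assumption)
    BANDS = [(4, 12), (13, 30), (60, 90), (200, 350)]   # example bands, incl. a >200 Hz band

    def band_power_features(segments):
        """One row of log band powers per short LFP segment."""
        rows = []
        for seg in segments:
            f, pxx = welch(seg, fs=FS, nperseg=min(len(seg), 1024))
            rows.append([np.log(pxx[(f >= lo) & (f <= hi)].mean()) for lo, hi in BANDS])
        return np.array(rows)

    def label_segments(segments):
        """Fit a two-state Gaussian HMM to the feature sequence and decode the states."""
        X = band_power_features(segments)
        model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
        model.fit(X)
        return model.predict(X)   # 0/1 state labels; which state is "tremor" must be checked

    # toy usage with random noise standing in for 3-second LFP segments
    rng = np.random.default_rng(0)
    print(label_segments([rng.standard_normal(3 * FS) for _ in range(20)]))
    ```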

  10. The necessary and sufficient conditions of therapeutic personality change: Reactions to Rogers' 1957 article.

    PubMed

    Samstag, Lisa Wallner

    2007-09-01

    Carl Rogers' article (see record 2007-14639-002) on the necessary and sufficient conditions for personality change has had a significant impact on the field of psychotherapy and psychotherapy research. He emphasized the client as arbiter of his or her own subjective experience and tested his hypothesized therapist-offered conditions of change using recorded sessions. This aided in demystifying the therapeutic process and led to a radical shift in the listening stance of the therapist. I briefly outline my views regarding the influence of the ideas presented in this work, describe the intellectual and cultural context of the times, and discuss a number of ways in which the therapist-offered conditions for psychological transformation are neither necessary nor sufficient. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  11. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  12. Analysis of the electromagnetic scattering from an inlet geometry with lossy walls

    NASA Technical Reports Server (NTRS)

    Myung, N. H.; Pathak, P. H.; Chuang, C. D.

    1985-01-01

    One of the primary goals is to develop an approximate but sufficiently accurate analysis for the problem of electromagnetic (EM) plane wave scattering by an open ended, perfectly-conducting, semi-infinite hollow circular waveguide (or duct) with a thin, uniform layer of lossy or absorbing material on its inner wall, and with a simple termination inside. The less difficult but useful problem of the EM scattering by a two-dimensional (2-D), semi-infinite parallel plate waveguide with an impedance boundary condition on the inner walls was chosen initially for analysis. The impedance boundary condition in this problem serves to model a thin layer of lossy dielectric/ferrite coating on the otherwise perfectly-conducting interior waveguide walls. An approximate but efficient and accurate ray solution was obtained recently. That solution is presently being extended to the case of a moderately thick dielectric/ferrite coating on the walls so as to be valid for situations where the impedance boundary condition may not remain sufficiently accurate.

  13. Basolateral junctions are sufficient to suppress epithelial invasion during Drosophila oogenesis.

    PubMed

    Szafranski, Przemyslaw; Goode, Scott

    2007-02-01

    Epithelial junctions play crucial roles during metazoan evolution and development by facilitating tissue formation, maintenance, and function. Little is known about the role of distinct types of junctions in controlling epithelial transformations leading to invasion of neighboring tissues. Discovering the key junction complexes that control these processes and how they function may also provide mechanistic insight into carcinoma cell invasion. Here, using the Drosophila ovary as a model, we show that four proteins of the basolateral junction (BLJ), Fasciclin-2, Neuroglian, Discs-large, and Lethal-giant-larvae, but not proteins of other epithelial junctions, directly suppress epithelial tumorigenesis and invasion. Remarkably, the expression pattern of Fasciclin-2 predicts which cells will invade. We compared the apicobasal polarity of BLJ tumor cells to border cells (BCs), an epithelium-derived cluster that normally migrates during mid-oogenesis. Both tumor cells and BCs differentiate a lateralized membrane pattern that is necessary but not sufficient for invasion. Independent of lateralization, derepression of motility pathways is also necessary, as indicated by a strong linear correlation between faster BC migration and an increased incidence of tumor invasion. However, without membrane lateralization, derepression of motility pathways is also not sufficient for invasion. Our results demonstrate that spatiotemporal patterns of basolateral junction activity directly suppress epithelial invasion by organizing the cooperative activity of distinct polarity and motility pathways.

  14. Necessary and sufficient liveness condition of GS3PR Petri nets

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Barkaoui, Kamel

    2015-05-01

    Structural analysis is one of the most important and efficient methods to investigate the behaviour of Petri nets. Liveness is a significant behavioural property of Petri nets. Siphons, as structural objects of a Petri net, are closely related to its liveness. Many deadlock control policies for flexible manufacturing systems (FMS) modelled by Petri nets are implemented via siphon control. Most of the existing methods design liveness-enforcing supervisors by adding control places for siphons based on their controllability conditions. To compute a liveness-enforcing supervisor with as much permissive behaviour as possible, it is both theoretically and practically significant to find an exact controllability condition for siphons. However, the existing conditions, max-, max′-, and max″-controllability of siphons, are all overly restrictive and in general only sufficient. This paper develops a new condition called max*-controllability of the siphons in generalised systems of simple sequential processes with resources (GS3PR), which are a net subclass that can model many real-world automated manufacturing systems. We show that a GS3PR is live if all its strict minimal siphons (SMS) are max*-controlled. Compared with the existing conditions, i.e., max-, max′-, and max″-controllability of siphons, max*-controllability of the SMS is not only sufficient but also necessary. An example is used to illustrate the proposed method.

  15. An accurate and efficient computational protocol for obtaining the complete basis set limits of the binding energies of water clusters at the MP2 and CCSD(T) levels of theory: Application to (H₂O) m, m=2-6, 8, 11, 16 and 17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-06-21

    We report MP2 and CCSD(T) binding energies with basis sets up to pentuple-zeta quality for the m = 2-6, 8 clusters. Our best CCSD(T)/CBS estimates are -4.99 kcal/mol (dimer), -15.77 kcal/mol (trimer), -27.39 kcal/mol (tetramer), -35.9 ± 0.3 kcal/mol (pentamer), -46.2 ± 0.3 kcal/mol (prism hexamer), -45.9 ± 0.3 kcal/mol (cage hexamer), -45.4 ± 0.3 kcal/mol (book hexamer), -44.3 ± 0.3 kcal/mol (ring hexamer), -73.0 ± 0.5 kcal/mol (D2d octamer) and -72.9 ± 0.5 kcal/mol (S4 octamer). We have found that the percentage of both the uncorrected and the BSSE-corrected binding energies recovered with respect to the CBS limit falls into a narrow range for each basis set for all clusters, and in addition this range was found to decrease upon increasing the basis set. Relatively accurate estimates (within < 0.5%) of the CBS limits can be obtained when using the "2/3, 1/3" (for the AVDZ set) or the "1/2, 1/2" (for the AVTZ, AVQZ and AV5Z sets) mixing ratio between the uncorrected and counterpoise-corrected dimer binding energies. Based on those findings we propose an accurate and efficient computational protocol that can be used to estimate accurate binding energies of clusters at the MP2 (for up to 100 molecules) and CCSD(T) (for up to 30 molecules) levels of theory. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. This research also used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. AC02-05CH11231.
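    The quoted mixing ratios amount to a weighted average of the uncorrected and counterpoise-corrected binding energies; a minimal sketch, assuming that reading, is given below with invented water-dimer numbers.

    ```python
    # weighted average of uncorrected (e_unc) and counterpoise-corrected (e_cp) binding
    # energies, using the mixing ratios quoted in the abstract above
    MIX = {"AVDZ": 2.0 / 3.0, "AVTZ": 0.5, "AVQZ": 0.5, "AV5Z": 0.5}

    def cbs_estimate(e_unc, e_cp, basis):
        w = MIX[basis]                      # weight on the uncorrected value
        return w * e_unc + (1.0 - w) * e_cp

    # invented water-dimer numbers in kcal/mol, for illustration only
    print(cbs_estimate(-5.2, -4.7, "AVDZ"))
    print(cbs_estimate(-5.1, -4.9, "AVTZ"))
    ```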

  16. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    PubMed

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  17. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  18. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  19. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  20. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  1. Accurate, Rapid Taxonomic Classification of Fungal Large-Subunit rRNA Genes

    PubMed Central

    Liu, Kuan-Liang; Porras-Alfaro, Andrea; Eichorst, Stephanie A.

    2012-01-01

    Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project (http://rdp.cme.msu.edu/classifier/classifier.jsp). PMID:22194300
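    As a rough illustration of the k-mer-based naive Bayesian approach (the RDP classifier uses 8-mer features), the sketch below builds a character 8-gram multinomial naive Bayes model with scikit-learn. The training sequences and labels are invented toy data, and this pipeline is a stand-in, not the RDP implementation.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # invented toy LSU fragments (strings over ACGT) and genus labels
    train_seqs = ["ACGTACGTAGGCTTAACGGA" * 4, "TTGACCGTAGCATGCAATCC" * 4]
    train_labels = ["GenusA", "GenusB"]

    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(8, 8), lowercase=False),  # 8-mer counts
        MultinomialNB(alpha=0.5),
    )
    clf.fit(train_seqs, train_labels)
    print(clf.predict(["ACGTACGTAGGCTTAACGGA" * 2]))   # expected: ['GenusA']
    ```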

  2. Physiological and transcriptional responses of Catalpa bungei to drought stress under sufficient- and deficient-nitrogen conditions.

    PubMed

    Shi, Huili; Ma, Wenjun; Song, Junyu; Lu, Mei; Rahman, Siddiq Ur; Bui, Thi Tuyet Xuan; Vu, Dinh Duy; Zheng, Huifang; Wang, Junhui; Zhang, Yi

    2017-11-01

    Many semi-arid ecosystems are simultaneously limited by soil water and nitrogen (N). We conducted a greenhouse experiment to address how N availability impacts drought-resistant traits of Catalpa bungei C. A. Mey at the physiological and molecular level. A factorial design was used, consisting of sufficient-N and deficient-N combined with moderate drought and well-watered conditions. Seedling biomass and major root parameters were significantly suppressed by drought under the deficient-N condition, whereas N application mitigated the inhibiting effects of drought on root growth, particularly that of fine roots with a diameter <0.2 mm. Intrinsic water-use efficiency was promoted by N addition under both water conditions, whereas stable carbon isotope composition (δ13C) was promoted by N addition only under the well-watered condition. Nitrogen application positively impacted drought-adaptive responses, including osmotic adjustment and homeostasis of reactive oxygen species: the content of free proline and soluble sugar and the activity of superoxide dismutase all increased upon drought under sufficient-N conditions but not under deficient-N conditions. The extent of abscisic acid (ABA) inducement upon drought was elevated by N application. Furthermore, an N-dependent crosstalk between ABA, jasmonic acid and indole acetic acid at the biosynthesis level contributed to better drought acclimation. Moreover, the transcriptional levels of most genes responsible for the ABA signal transduction pathway, and of genes encoding the antioxidant enzymes and plasma membrane intrinsic proteins, were elevated upon drought only under sufficient-N addition. These observations confirmed at the molecular level that major adaptive responses to drought are dependent on sufficient N nutrition. Although N uptake was decreased under drought, N-use efficiency and transcription of most genes encoding N metabolism enzymes were elevated, demonstrating that active N metabolism positively contributed

  3. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.
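    For reference, a commonly quoted two-dimensional form of the equivalent domain integral (in the absence of body forces, crack-face tractions and thermal strains) is

    \[ J = \int_{A} \left( \sigma_{ij}\,\frac{\partial u_j}{\partial x_1} - W\,\delta_{1i} \right) \frac{\partial q}{\partial x_i}\,\mathrm{d}A , \]

    where W is the strain energy density, A is the annular domain of elements surrounding the crack tip, and q is a smooth weight function equal to 1 at the crack tip and 0 on the outer boundary of A. This standard form is given for orientation only; the exact expression and conventions used by the authors may differ in detail.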

  4. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
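    The redundancy argument above reduces to the availability of n identical units in parallel, 1 - (1 - A_unit)^n. A short sketch with an invented unit availability finds the smallest n that meets a 99.9999% target; it stands in for, but does not reproduce, the MATLAB program and reliability block diagram analysis described in the record.

    ```python
    def parallel_availability(a_unit, n):
        """Availability of n identical, independent units in parallel."""
        return 1.0 - (1.0 - a_unit) ** n

    def min_parallel_units(a_unit, target=0.999999, n_max=100):
        """Smallest number of parallel units meeting the availability target, if any."""
        for n in range(1, n_max + 1):
            if parallel_availability(a_unit, n) >= target:
                return n
        return None

    # e.g. a hypothetical 98%-available fuel cell system supporting a small server group
    print(min_parallel_units(0.98))    # -> 4 units reach "six nines"
    ```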

  5. Spatial Resolution Requirements for Accurate Identification of Drivers of Atrial Fibrillation

    PubMed Central

    Roney, Caroline H.; Cantwell, Chris D.; Bayer, Jason D.; Qureshi, Norman A.; Lim, Phang Boon; Tweedy, Jennifer H.; Kanagaratnam, Prapa; Vigmond, Edward J.; Ng, Fu Siong

    2017-01-01

    Background— Recent studies have demonstrated conflicting mechanisms underlying atrial fibrillation (AF), with the spatial resolution of data often cited as a potential reason for the disagreement. The purpose of this study was to investigate whether the variation in spatial resolution of mapping may lead to misinterpretation of the underlying mechanism in persistent AF. Methods and Results— Simulations of rotors and focal sources were performed to estimate the minimum number of recording points required to correctly identify the underlying AF mechanism. The effects of different data types (action potentials and unipolar or bipolar electrograms) and rotor stability on resolution requirements were investigated. We also determined the ability of clinically used endocardial catheters to identify AF mechanisms using clinically recorded and simulated data. The spatial resolution required for correct identification of rotors and focal sources is a linear function of spatial wavelength (the distance between wavefronts) of the arrhythmia. Rotor localization errors are larger for electrogram data than for action potential data. Stationary rotors are more reliably identified compared with meandering trajectories, for any given spatial resolution. All clinical high-resolution multipolar catheters are of sufficient resolution to accurately detect and track rotors when placed over the rotor core although the low-resolution basket catheter is prone to false detections and may incorrectly identify rotors that are not present. Conclusions— The spatial resolution of AF data can significantly affect the interpretation of the underlying AF mechanism. Therefore, the interpretation of human AF data must be taken in the context of the spatial resolution of the recordings. PMID:28500175

  6. Understanding Self-Sufficiency of Welfare Leavers in Illinois: Elaborating Models with Psychosocial Factors.

    ERIC Educational Resources Information Center

    Julnes, George; Fan, Xitao; Hayashi, Kentaro

    2001-01-01

    Used survey (for 1,001 adults) and administrative data (for 137,330 first-exit cases) in structural equation modeling to examine psychological and social factors as determinants of welfare dependency and self-sufficiency. Findings show well-being to be a predictor of low recidivism and high employment. (SLD)

  7. Pathway to Self-Sufficiency: Social and Economic Development Strategies of Native American Communities.

    ERIC Educational Resources Information Center

    Office of Human Development Services (DHHS), Washington, DC.

    In fiscal year (FY) 1984 the Administration for Native Americans awarded 227 grants for social and economic development strategies (SEDS) which would help Native American communities move toward self-sufficiency. More than half the grants were primarily for economic development; approximately one-third were for improving tribal governments, and…

  8. From Poverty to Self-Sufficiency: The Role of Postsecondary Education in Welfare Reform.

    ERIC Educational Resources Information Center

    Center for Women Policy Studies, Washington, DC.

    This report provides policymakers with information necessary to demonstrate that postsecondary education is an effective route from poverty to true self-sufficiency and prosperity for low-income women. It discusses the impact of the 1996 Temporary Assistance for Needy Families (TANF) statute and the TANF reauthorization bill, the Personal…

  9. On the role of budget sufficiency, cost efficiency, and uncertainty in species management

    USGS Publications Warehouse

    van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.

    2014-01-01

    Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.
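    The contrast between the two strategies can be sketched with a toy greedy allocation: one ranking sites by predicted benefit, the other by benefit per unit cost, both subject to a fixed budget. The site values, costs and budget below are invented, and the sketch does not reproduce the heuristic decision algorithms or the info-gap analysis used in the study.

    ```python
    # (site id, predicted benefit to owls, management cost) -- invented numbers
    sites = [("s1", 10.0, 6.0), ("s2", 6.0, 2.0), ("s3", 5.0, 2.0), ("s4", 4.0, 2.0)]

    def greedy_allocation(sites, budget, rank_key):
        """Pick sites in ranked order while they still fit within the budget."""
        chosen, spent, benefit = [], 0.0, 0.0
        for sid, b, c in sorted(sites, key=rank_key, reverse=True):
            if spent + c <= budget:
                chosen.append(sid)
                spent += c
                benefit += b
        return chosen, benefit

    budget = 6.0
    print(greedy_allocation(sites, budget, rank_key=lambda s: s[1]))          # maximize benefit
    print(greedy_allocation(sites, budget, rank_key=lambda s: s[1] / s[2]))   # cost-efficient
    ```

    With this toy budget the benefit-maximizing ranking spends everything on one expensive site (total benefit 10), while the cost-efficient ranking funds three cheaper sites (total benefit 15), mirroring the trade-off discussed above.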

  10. Expert Consensus Statement on achieving self-sufficiency in safe blood and blood products, based on voluntary non-remunerated blood donation (VNRBD).

    PubMed

    2012-11-01

    All countries face challenges in making sufficient supplies of blood and blood products available and sustainable, while also ensuring the quality and safety of these products in the face of known and emerging threats to public health. Since 1975, the World Health Assembly (WHA) has highlighted the global need for blood safety and availability. WHA resolutions 63·12, 58·13 and 28·72, The Melbourne Declaration on 100% Voluntary Non-Remunerated Donation of Blood and Blood Components and WHO Global Blood Safety Network recommendations have reaffirmed the achievement of 'Self-sufficiency in blood and blood products based on voluntary non-remunerated blood donation (VNRBD)' as the important national policy direction for ensuring a safe, secure and sufficient supply of blood and blood products, including labile blood components and plasma-derived medicinal products. Despite some successes, self-sufficiency is not yet a reality in many countries. A consultation of experts, convened by the World Health Organization (WHO) in September 2011 in Geneva, Switzerland, addressed the urgent need to establish strategies and mechanisms for achieving self-sufficiency. Information on the current situation, and country perspectives and experiences were shared. Factors influencing the global implementation of self-sufficiency, including safety, ethics, security and sustainability of supply, trade and its potential impact on public health, availability and access for patients, were analysed to define strategies and mechanisms and provide practical guidance on achieving self-sufficiency. Experts developed a consensus statement outlining the rationale and definition of self-sufficiency in safe blood and blood products based on VNRBD and made recommendations to national health authorities and WHO. © 2012 World Health Organization. Vox Sanguinis © 2012 International Society of Blood Transfusion.

  11. 41 CFR 102-84.20 - Where should I obtain the data required to be reported for the Annual Real Property Inventory?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reported for the Annual Real Property Inventory from the most accurate real property asset management and... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Where should I obtain... Public Contracts and Property Management Federal Property Management Regulations System (Continued...

  12. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    PubMed

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
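    Two of the summary metrics above are straightforward to compute from reconstructed images; the sketch below shows one plausible reading of them, with the myocardial ROI values, reference activities and measured activities all invented. The exact definitions used by the authors may differ in detail.

    ```python
    import numpy as np

    def nonuniformity_cov(roi_values):
        """Coefficient of variation (percent) of voxel values in a myocardial ROI."""
        roi = np.asarray(roi_values, dtype=float)
        return 100.0 * roi.std(ddof=1) / roi.mean()

    def dynamic_range_limit(wall_activity, measured, reference, max_bias=0.10):
        """Largest wall activity whose measured value deviates from the reference by <10%."""
        bias = np.abs(np.asarray(measured) - np.asarray(reference)) / np.asarray(reference)
        within = np.asarray(wall_activity)[bias < max_bias]
        return within.max() if within.size else None

    # invented example values (arbitrary units)
    print(nonuniformity_cov([95, 102, 99, 104, 98]))
    print(dynamic_range_limit([100, 300, 900, 2700], [98, 290, 840, 2100], [100, 300, 900, 2700]))
    ```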

  13. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with a moderate computation resource for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelised code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  14. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8

  15. Educational achievement and economic self-sufficiency in adults after childhood bacterial meningitis.

    PubMed

    Roed, Casper; Omland, Lars Haukali; Skinhoj, Peter; Rothman, Kenneth J; Sorensen, Henrik Toft; Obel, Niels

    2013-04-24

    To our knowledge, no previous study has examined functioning in adult life among persons who had bacterial meningitis in childhood. To study educational achievement and economic self-sufficiency in adults diagnosed as having bacterial meningitis in childhood. Nationwide population-based cohort study using national registries of Danish-born children diagnosed as having meningococcal, pneumococcal, or Haemophilus influenzae meningitis in the period 1977-2007 (n=2784 patients). Comparison cohorts from the same population individually matched on age and sex were identified, as were siblings of all study participants. End of study period was 2010. Cumulative incidences of completed vocational education, high school education, higher education, time to first full year of economic self-sufficiency, and receipt of disability pension and differences in these outcomes at age 35 years among meningitis patients, comparison cohorts, and siblings. By age 35 years, among persons who had a history of childhood meningococcal (n=1338), pneumococcal (n=455), and H. influenzae (n=991) meningitis, an estimated 11.0% (41.5% vs 52.5%; 95% CI, 7.3%-14.7%), 10.2% (42.6% vs 52.8%; 95% CI, 3.8%-16.6%), and 5.5% (47.7% vs 53.2%; 95% CI, 1.9%-9.1%) fewer persons, respectively, had completed high school and 7.9% (29.3% vs 37.2%; 95% CI, 1.6%-14.2%), 8.9% (28.1% vs 37.0%; 95% CI, 0.6%-17.2%), and 6.5% (33.5% vs 40.0%; 95% CI, 1.4%-11.6%) fewer had attained a higher education compared with individuals from the comparison cohort. Siblings of meningococcal meningitis patients also had lower educational achievements, while educational achievements of siblings of pneumococcal and H. influenzae meningitis patients did not differ substantially from those in the general population. At end of follow-up, 3.8% (90.3% vs 94.1%; 95% CI, 1.1%-6.5%), 10.6% (84.0% vs 94.6%; 95% CI, 5.1%-16.1%), and 4.3% (90.6% vs 94.9%; 95% CI, 2.0%-6.6%) fewer meningococcal, pneumococcal, and H. influenzae meningitis patients

  16. 41 CFR 102-5.95 - Is the comfort and/or convenience of an employee considered sufficient justification to authorize...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... convenience of an employee considered sufficient justification to authorize home-to-work transportation? 102-5...-to-Work Transportation § 102-5.95 Is the comfort and/or convenience of an employee considered sufficient justification to authorize home-to-work transportation? No, the comfort and/or convenience of an...

  17. Maximizing Health or Sufficient Capability in Economic Evaluation? A Methodological Experiment of Treatment for Drug Addiction.

    PubMed

    Goranitis, Ilias; Coast, Joanna; Day, Ed; Copello, Alex; Freemantle, Nick; Frew, Emma

    2017-07-01

    Conventional practice within the United Kingdom and beyond is to conduct economic evaluations with "health" as evaluative space and "health maximization" as the decision-making rule. However, there is increasing recognition that this evaluative framework may not always be appropriate, and this is particularly the case within public health and social care contexts. This article presents a methodological case study designed to explore the impact of changing the evaluative space within an economic evaluation from health to capability well-being and the decision-making rule from health maximization to the maximization of sufficient capability. Capability well-being is an evaluative space grounded on Amartya Sen's capability approach and assesses well-being based on individuals' ability to do and be the things they value in life. Sufficient capability is an egalitarian approach to decision making that aims to ensure everyone in society achieves a normatively sufficient level of capability well-being. The case study is treatment for drug addiction, and the cost-effectiveness of 2 psychological interventions relative to usual care is assessed using data from a pilot trial. Analyses are undertaken from a health care and a government perspective. For the purpose of the study, quality-adjusted life years (measured using the EQ-5D-5L) and years of full capability equivalent and years of sufficient capability equivalent (both measured using the ICECAP-A [ICEpop CAPability measure for Adults]) are estimated. The study concludes that different evaluative spaces and decision-making rules have the potential to offer opposing treatment recommendations. The implications for policy makers are discussed.

  18. A Modeling Approach to Enhance Animal-Obtained Oceanographic Data Geo- Position

    NASA Astrophysics Data System (ADS)

    Tremblay, Y.; Robinson, P.; Weise, M. J.; Costa, D. P.

    2006-12-01

    Diving animals are increasingly being used as platforms to collect oceanographic data such as CTD profiles. Animal borne sensors provide an amazing amount of data that have to be spatially referenced. Because of technical limitations geo-position of these data mostly comes from the interpolation of locations obtained through the ARGOS positioning system. This system lacks spatio-temporal resolution compared to the Global Positioning System (GPS) and therefore, the positions of these oceanographic data are not well defined. A consequence of this is that many data collected in coastal regions are discarded, because many casts' records fell on land. Using modeling techniques, we propose a method to deal with this problem. The method is rather intuitive, and instead of deleting unreasonable or low-quality locations, it uses them by taking into account their lack of precision as a source of information. In a similar way, coastlines are used as sources of information, because marine animals do not travel over land. The method was evaluated using simultaneously obtained tracks with the Argos and GPS system. The tracks obtained from this method are considerably enhanced and allow a more accurate geo-reference of oceanographic data. In addition, the method provides a way to evaluate spatial errors for each cast that is not otherwise possible with classical filtering methods.

  19. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
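    A minimal sketch of the time-domain reduction route named above (square-root balanced truncation) for a generic stable linear system, using NumPy and SciPy. The random 20-state system is a stand-in for a quasi-active cable model and has no biophysical meaning; the authors' neuron models and their frequency-domain H2 reduction are not reproduced here.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, svd

    def sqrt_factor(W):
        """Symmetric PSD square-root factor: W ≈ L @ L.T."""
        W = 0.5 * (W + W.T)
        U, s, _ = svd(W)
        return U @ np.diag(np.sqrt(np.clip(s, 0.0, None)))

    def balanced_truncation(A, B, C, r):
        """Square-root balanced truncation of a stable system (A, B, C) to order r."""
        Wc = solve_continuous_lyapunov(A, -B @ B.T)       # controllability Gramian
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)     # observability Gramian
        Lc, Lo = sqrt_factor(Wc), sqrt_factor(Wo)
        U, hsv, Vt = svd(Lo.T @ Lc)                       # Hankel singular values
        Sr = np.diag(1.0 / np.sqrt(hsv[:r]))
        T = Lc @ Vt[:r, :].T @ Sr                         # n x r projection
        Tinv = Sr @ U[:, :r].T @ Lo.T                     # r x n projection
        return Tinv @ A @ T, Tinv @ B, C @ T, hsv

    # toy 20-state stable system (random; no biophysical meaning)
    rng = np.random.default_rng(0)
    n = 20
    A = rng.standard_normal((n, n))
    A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)   # shift to ensure stability
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=5)
    print(hsv[:6])    # rapidly decaying values justify keeping only a few states
    ```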

  20. Multidimensional analysis of data obtained in experiments with X-ray emulsion chambers and extensive air showers

    NASA Technical Reports Server (NTRS)

    Chilingaryan, A. A.; Galfayan, S. K.; Zazyan, M. Z.; Dunaevsky, A. M.

    1985-01-01

    Nonparametric statistical methods are used to carry out a quantitative comparison of the model and the experimental data. The same methods enable one to select the events initiated by heavy nuclei and to calculate the fraction of such events. For this purpose it is necessary to have data on artificial events that describe the experiment sufficiently well. At present, the model with small scaling violation in the fragmentation region is the closest to the experimental data. Therefore, the treatment of gamma families obtained in the Pamir experiment is currently being carried out with these models.

  1. Reliability of the Colorado Family Support Assessment: A Self-Sufficiency Matrix for Families

    ERIC Educational Resources Information Center

    Richmond, Melissa K.; Pampel, Fred C.; Zarcula, Flavia; Howey, Virginia; McChesney, Brenda

    2017-01-01

    Purpose: Family support programs commonly use self-sufficiency matrices (SSMs) to measure family outcomes, however, validation research on SSMs is sparse. This study examined the reliability of the Colorado Family Support Assessment 2.0 (CFSA 2.0) to measure family self-reliance across 14 domains (e.g., employment). Methods: Ten written case…

  2. The criterion of subscale sufficiency and its application to the relationship between static capillary pressure, saturation and interfacial areas.

    PubMed

    Kurzeja, Patrick

    2016-05-01

    Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling by the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from micro to macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates usefulness and conditions for subscale sufficiency of saturation and interfacial areas.

  3. The criterion of subscale sufficiency and its application to the relationship between static capillary pressure, saturation and interfacial areas

    PubMed Central

    2016-01-01

    Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling by the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from micro to macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates usefulness and conditions for subscale sufficiency of saturation and interfacial areas. PMID:27279769

  4. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
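    A minimal sketch of the progressive-sampling ensemble idea described above: networks of growing size are trained on growing data chunks, their predictions are averaged, and growth stops once held-out error no longer improves. The chunk sizes, network widths and synthetic regression data are invented; this is not the MISR retrieval code.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # synthetic "stream": 20,000 samples with 8 attributes and a nonlinear target
    rng = np.random.default_rng(1)
    X = rng.standard_normal((20000, 8))
    y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(20000)
    X_val, y_val = X[-2000:], y[-2000:]                # held-out block for monitoring

    chunk_sizes = [250, 500, 1000, 2000, 4000, 8000]   # progressive sampling schedule
    widths = [4, 8, 16, 32, 64, 128]                   # growing network complexity

    ensemble, best_err = [], np.inf
    for size, width in zip(chunk_sizes, widths):
        net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=500, random_state=0)
        net.fit(X[:size], y[:size])
        ensemble.append(net)
        pred = np.mean([m.predict(X_val) for m in ensemble], axis=0)   # ensemble average
        err = float(np.mean((pred - y_val) ** 2))
        if err >= best_err:          # stop once held-out error no longer improves
            break
        best_err = err

    print(len(ensemble), best_err)
    ```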

  5. New optimization scheme to obtain interaction potentials for oxide glasses

    NASA Astrophysics Data System (ADS)

    Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter

    2018-05-01

    We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of states of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found good agreement with the corresponding experimental data.

  6. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV and a rotor row, was simulated using the RANS solver with the SST k-ω turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, this model was not only used to investigate the flow characteristics in the turbine stage but also the effects of using pressure side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. More detailed understanding of unsteady characteristics of a turbine flow field can be obtained using the second type of interface model, the time accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model. Furthermore, turbine stage efficiency and effects of tip clearance height on the turbine stage efficiency were also investigated. A comparison between the results from the circumferential average model and the time accurate flow model results is presented. It was found that the circumferential average model cannot accurately simulate flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give

  7. Accurate Ray-tracing of Realistic Neutron Star Atmospheres for Constraining Their Parameters

    NASA Astrophysics Data System (ADS)

    Vincent, Frederic H.; Bejger, Michał; Różańska, Agata; Straub, Odele; Paumard, Thibaut; Fortin, Morgane; Madej, Jerzy; Majczyna, Agnieszka; Gourgoulhon, Eric; Haensel, Paweł; Zdunik, Leszek; Beldycki, Bartosz

    2018-03-01

    Thermal-dominated X-ray spectra of neutron stars in quiescent, transient X-ray binaries and neutron stars that undergo thermonuclear bursts are sensitive to mass and radius. The mass–radius relation of neutron stars depends on the equation of state (EoS) that governs their interior. Constraining this relation accurately is therefore of fundamental importance to understand the nature of dense matter. In this context, we introduce a pipeline to calculate realistic model spectra of rotating neutron stars with hydrogen and helium atmospheres. An arbitrarily fast-rotating neutron star with a given EoS generates the spacetime in which the atmosphere emits radiation. We use the LORENE/NROTSTAR code to compute the spacetime numerically and the ATM24 code to solve the radiative transfer equations self-consistently. Emerging specific intensity spectra are then ray-traced through the neutron star’s spacetime from the atmosphere to a distant observer with the GYOTO code. Here, we present and test our fully relativistic numerical pipeline. To discuss and illustrate the importance of realistic atmosphere models, we compare our model spectra to simpler models like the commonly used isotropic color-corrected blackbody emission. We highlight the importance of considering realistic model-atmosphere spectra together with relativistic ray-tracing to obtain accurate predictions. We also insist upon the crucial impact of the star’s rotation on the observables. Finally, we close a controversy that has been ongoing in the literature in the recent years, regarding the validity of the ATM24 code.

  8. Variable-pulse switching circuit accurately controls solenoid-valve actuations

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1967-01-01

    Solid state circuit generating adjustable square wave pulses of sufficient power operates a 28 volt dc solenoid valve at precise time intervals. This circuit is used for precise time control of fluid flow in combustion experiments.

  9. Virtual water and water self-sufficiency in agricultural and livestock products in Brazil.

    PubMed

    da Silva, Vicente de Paulo R; de Oliveira, Sonaly D; Braga, Célia C; Brito, José Ivaldo B; de Sousa, Francisco de Assis S; de Holanda, Romildo M; Campos, João Hugo B C; de Souza, Enio P; Braga, Armando César R; Rodrigues Almeida, Rafaela S; de Araújo, Lincoln E

    2016-12-15

    Virtual water trade is often considered a solution for restricted water availability in many regions of the world. Brazil is the world leader in the production and export of various agricultural and livestock products. The country is either a strong net importer or a strong net exporter of these products. The objective of this study is to determine the volume of virtual water contained in agricultural and livestock products imported/exported by Brazil from 1997 to 2012, and to define the water self-sufficiency index of agricultural and livestock products in Brazil. The indexes of water scarcity (WSI), water dependency (WDI) and water self-sufficiency (WSSI) were calculated for each Brazilian state. These indexes and the virtual water balance were calculated following the methodology developed by Chapagain and Hoekstra (2008) and Hoekstra and Hung (2005). The total water exports and imports embedded in agricultural and livestock products were 5.28 × 10^10 and 1.22 × 10^10 Gm^3 yr^-1, respectively, which results in a positive virtual water balance of 4.05 × 10^10 Gm^3 yr^-1. Brazil is either a strong net importer or a strong net exporter of agricultural and livestock products among the Mercosur countries. Brazil has a positive virtual water balance of 1.85 × 10^10 Gm^3 yr^-1. The indexes used in this study reveal that Brazil is self-sufficient in food production, except for a few products such as wheat and rice. Horticultural products (tomato, onion, potato, cassava and garlic) make up a unique product group with a negative virtual water balance in Brazil. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  11. The calculation of accurate harmonic frequencies of large molecules: the polycyclic aromatic hydrocarbons, a case study

    NASA Astrophysics Data System (ADS)

    Bauschlicher, Charles W.; Langhoff, Stephen R.

    1997-07-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Møller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral polycyclic aromatic hydrocarbons (PAHs) where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.
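
    The two-factor scaling scheme described in the two records above is simple to apply in practice: multiply the computed C-H stretching frequencies by one factor and all remaining modes by another. The sketch below illustrates the mechanics only; the scale factors and frequencies are invented placeholders rather than the values fitted in the papers.

      # Two-factor scaling of harmonic frequencies: one factor for C-H stretches,
      # another for all other modes. All numbers are illustrative placeholders.

      def scale_frequencies(freqs_cm1, is_ch_stretch, f_ch=0.96, f_other=0.98):
          """Return scaled harmonic frequencies (cm^-1)."""
          return [f * (f_ch if ch else f_other)
                  for f, ch in zip(freqs_cm1, is_ch_stretch)]

      harmonic = [3180.0, 3165.0, 1625.0, 1020.0, 785.0]   # raw DFT harmonics
      ch_flags = [True,   True,   False,  False,  False]   # C-H stretches first
      print(scale_frequencies(harmonic, ch_flags))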

  12. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  13. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X.

    PubMed

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A; Sondhi, S L

    2017-12-13

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much larger than those accessible to direct many-body exact diagonalization in the spin variables. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which aids convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'. © 2017 The Author(s).

  14. Accurate ensemble molecular dynamics binding free energy ranking of multidrug-resistant HIV-1 proteases.

    PubMed

    Sadiq, S Kashif; Wright, David W; Kenway, Owain A; Coveney, Peter V

    2010-05-24

    Accurate calculation of important thermodynamic properties, such as macromolecular binding free energies, is one of the principal goals of molecular dynamics simulations. However, a single long simulation frequently produces incorrectly converged quantitative results due to inadequate sampling of conformational space in a feasible wall-clock time. Multiple short (ensemble) simulations have been shown to explore conformational space more effectively than single long simulations, but the two methods have not yet been thermodynamically compared. Here we show that, for end-state binding free energy determination methods, ensemble simulations exhibit significantly enhanced thermodynamic sampling over single long simulations and result in accurate and converged relative binding free energies that are reproducible to within 0.5 kcal/mol. Completely correct ranking is obtained for six HIV-1 protease variants bound to lopinavir with a correlation coefficient of 0.89 and a mean relative deviation from experiment of 0.9 kcal/mol. Multidrug resistance to lopinavir is enthalpically driven and increases through a decrease in the protein-ligand van der Waals interaction, principally due to the V82A/I84V mutation, and an increase in net electrostatic repulsion due to water-mediated disruption of protein-ligand interactions in the catalytic region. Furthermore, we correctly rank, to within 1 kcal/mol of experiment, the substantially increased chemical potency of lopinavir binding to the wild-type protease compared to saquinavir and show that lopinavir takes advantage of a decreased net electrostatic repulsion to confer enhanced binding. Our approach is dependent on the combined use of petascale computing resources and on an automated simulation workflow to attain the required level of sampling and turnaround time to obtain the results, which can be as little as three days. This level of performance promotes integration of such methodology with clinical decision support systems for
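
    The ranking step itself reduces to straightforward ensemble statistics: average the per-replica binding free energy estimates for each variant, attach a standard error, and sort. The sketch below shows that bookkeeping with invented replica values; it is not the end-state free energy calculation or the automated workflow used in the study.

      import statistics

      # Ensemble averaging of per-replica binding free energy estimates.
      # Variant names and numbers are invented placeholders (kcal/mol).
      replica_dG = {
          "wild_type": [-12.1, -11.8, -12.4, -12.0],
          "V82A_I84V": [-10.3, -10.9, -10.1, -10.6],
          "L90M":      [-11.5, -11.2, -11.7, -11.0],
      }

      def ensemble_estimate(values):
          mean = statistics.fmean(values)
          sem = statistics.stdev(values) / len(values) ** 0.5
          return mean, sem

      ranking = sorted(replica_dG, key=lambda v: ensemble_estimate(replica_dG[v])[0])
      for variant in ranking:
          mean, sem = ensemble_estimate(replica_dG[variant])
          print(f"{variant:10s}  dG = {mean:6.2f} +/- {sem:.2f} kcal/mol")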

  15. Geodetic results from ISAGEX data. [for obtaining center of mass coordinates for geodetic camera sites

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Walls, D. M.

    1974-01-01

    Laser and camera data taken during the International Satellite Geodesy Experiment (ISAGEX) were used in dynamical solutions to obtain center-of-mass coordinates for the Astro-Soviet camera sites at Helwan, Egypt, and Oulan Bator, Mongolia, as well as the East European camera sites at Potsdam, German Democratic Republic, and Ondrejov, Czechoslovakia. The results are accurate to about 20m in each coordinate. The orbit of PEOLE (i=15) was also determined from ISAGEX data. Mean Kepler elements suitable for geodynamic investigations are presented.

  16. Automatical and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the functional image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning manner. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and 3D visualized. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.

  17. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  18. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.
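
    The headline metric in both versions of this record is a cross-validated area under the ROC curve for classifying recent infection from a handful of antibody responses. The sketch below shows the generic pattern with scikit-learn on randomly generated placeholder data; the classifier choice and feature construction are assumptions, not the data-adaptive method used by the authors.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      # Antibody responses (features) -> "infected within the last 90 days" (label).
      # Random placeholder data, not the Ugandan cohort measurements.
      rng = np.random.default_rng(0)
      n_children, n_antigens = 186, 3
      X = rng.normal(size=(n_children, n_antigens))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_children) > 0).astype(int)

      clf = LogisticRegression(max_iter=1000)
      proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
      print("cross-validated AUC:", round(roc_auc_score(y, proba), 3))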

  19. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm for a fast and accurate computation of integrals involving one or two spherical Bessel functions (our code is available at https://github.com/hsgg/twoFAST). These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ′). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
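
    For orientation, the sketch below evaluates the ℓ = 0 configuration-space projection by brute-force quadrature on a dense k grid, which is exactly the slow, oscillation-plagued route that 2-FAST is designed to avoid; the 1/(2π²) normalization and the toy power spectrum are assumptions for illustration only.

      import numpy as np
      from scipy.special import spherical_jn

      # Brute-force baseline: xi_l(r) = 1/(2 pi^2) * Integral dk k^2 P(k) j_l(k r).
      def xi_ell_bruteforce(r, ell, k, Pk):
          integrand = k**2 * Pk * spherical_jn(ell, k * r)
          return np.trapz(integrand, k) / (2.0 * np.pi**2)

      k = np.linspace(1e-4, 10.0, 200_000)      # dense k grid [h/Mpc]
      Pk = k / (1.0 + (k / 0.02)**3)            # toy P(k) with a turnover

      for r in (10.0, 50.0, 100.0):             # separations [Mpc/h]
          print(f"xi_0(r={r:5.1f}) ~ {xi_ell_bruteforce(r, 0, k, Pk):.3e}")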

  20. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  1. 75 FR 35712 - National Pollutant Discharge Elimination System (NPDES): Use of Sufficiently Sensitive Test...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... Methods for Permit Applications and Reporting AGENCY: Environmental Protection Agency (EPA). ACTION... System (NPDES) program, only ``sufficiently sensitive'' analytical test methods can be used when... methods with respect to measurement of mercury and extend the approach outlined in that guidance to the...

  2. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  3. 40 CFR Appendix B to Part 61 - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...

  4. 40 CFR Appendix B to Part 61 - Test Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...

  5. 40 CFR Appendix B to Part 61 - Test Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...

  6. 40 CFR Appendix B to Part 61 - Test Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...

  7. 40 CFR Appendix B to Part 61 - Test Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...

  8. Preparation and Respirometric Assessment of Mitochondria Isolated from Skeletal Muscle Tissue Obtained by Percutaneous Needle Biopsy

    PubMed Central

    Bharadwaj, Manish S.; Tyrrell, Daniel J.; Lyles, Mary F.; Demons, Jamehl L.; Rogers, George W.; Molina, Anthony J. A.

    2015-01-01

    Respirometric profiling of isolated mitochondria is commonly used to investigate electron transport chain function. We describe a method for obtaining samples of human Vastus lateralis, isolating mitochondria from minimal amounts of skeletal muscle tissue, and plate based respirometric profiling using an extracellular flux (XF) analyzer. Comparison of respirometric profiles obtained using 1.0, 2.5 and 5.0 μg of mitochondria indicates that 1.0 μg is sufficient to measure respiration and that 5.0 μg provides the most consistent results based on comparison of standard errors. Western blot analysis of isolated mitochondria for mitochondrial marker COX IV and non-mitochondrial tissue marker GAPDH indicates that there is limited non-mitochondrial contamination using this protocol. The ability to study mitochondrial respirometry in as little as 20 mg of muscle tissue allows users to utilize individual biopsies for multiple study endpoints in clinical research projects. PMID:25741892

  9. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles that drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  10. Combining energy and Laplacian regularization to accurately retrieve the depth of brain activity of diffuse optical tomographic data

    NASA Astrophysics Data System (ADS)

    Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele

    2016-03-01

    Diffuse optical tomography (DOT) provides data about brain function using surface recordings. Despite recent advancements, an unbiased method for estimating the depth of absorption changes and for providing an accurate three-dimensional (3-D) reconstruction remains elusive. DOT involves solving an ill-posed inverse problem, requiring additional criteria for finding unique solutions. The most commonly used criterion is energy minimization (energy constraint). However, as measurements are taken from only one side of the medium (the scalp) and sensitivity is greater at shallow depths, the energy constraint leads to solutions that tend to be small and superficial. To correct for this bias, we combine the energy constraint with another criterion, minimization of spatial derivatives (Laplacian constraint, also used in low resolution electromagnetic tomography, LORETA). Used in isolation, the Laplacian constraint leads to solutions that tend to be large and deep. Using simulated, phantom, and actual brain activation data, we show that combining these two criteria results in accurate (error <2 mm) absorption depth estimates, while maintaining a two-point spatial resolution of <24 mm up to a depth of 30 mm. This indicates that accurate 3-D reconstruction of brain activity up to 30 mm from the scalp can be obtained with DOT.
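
    The combined criterion described above amounts to a linear inverse problem with two quadratic penalties, x_hat = argmin ||Ax - b||² + α||x||² + β||Lx||², which can be solved as one stacked least-squares system. The sketch below demonstrates this on a toy one-dimensional problem; the forward matrix, Laplacian, and weights are placeholders, not the DOT sensitivity model or the weights used in the paper.

      import numpy as np

      def regularized_solve(A, b, L, alpha, beta):
          """Minimize ||A x - b||^2 + alpha ||x||^2 + beta ||L x||^2 by stacking
          the energy and Laplacian penalties into one least-squares system."""
          n = A.shape[1]
          A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n), np.sqrt(beta) * L])
          b_aug = np.concatenate([b, np.zeros(n), np.zeros(L.shape[0])])
          x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
          return x

      rng = np.random.default_rng(1)
      n_meas, n_vox = 40, 100
      A = rng.normal(size=(n_meas, n_vox))                        # toy sensitivity matrix
      x_true = np.exp(-0.5 * ((np.arange(n_vox) - 60) / 5.0)**2)  # localized absorber
      b = A @ x_true + 0.01 * rng.normal(size=n_meas)

      L = -2 * np.eye(n_vox) + np.eye(n_vox, k=1) + np.eye(n_vox, k=-1)  # 1-D Laplacian
      x_hat = regularized_solve(A, b, L, alpha=0.1, beta=1.0)
      print("peak of reconstruction at voxel:", int(np.argmax(x_hat)))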

  11. Accurate determination of genetic identity for a single cacao bean, using molecular markers with a nanofluidic system, ensures cocoa authentication.

    PubMed

    Fang, Wanping; Meinhardt, Lyndel W; Mischke, Sue; Bellato, Cláudia M; Motilal, Lambert; Zhang, Dapeng

    2014-01-15

    Cacao (Theobroma cacao L.), the source of cocoa, is an economically important tropical crop. One problem with the premium cacao market is contamination with off-types adulterating raw premium material. Accurate determination of the genetic identity of single cacao beans is essential for ensuring cocoa authentication. Using nanofluidic single nucleotide polymorphism (SNP) genotyping with 48 SNP markers, we generated SNP fingerprints for small quantities of DNA extracted from the seed coat of single cacao beans. On the basis of the SNP profiles, we identified an assumed adulterant variety, which was unambiguously distinguished from the authentic beans by multilocus matching. Assignment tests based on both Bayesian clustering analysis and allele frequency clearly separated all 30 authentic samples from the non-authentic samples. Distance-based principal coordinate analysis further supported these results. The nanofluidic SNP protocol, together with forensic statistical tools, is sufficiently robust to establish authentication and to verify gourmet cacao varieties. This method shows significant potential for practical application.

  12. Accurate structure, thermodynamics and spectroscopy of medium-sized radicals by hybrid Coupled Cluster/Density Functional Theory approaches: the case of phenyl radical

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Egidi, Franco; Puzzarini, Cristina

    2015-01-01

    The CCSD(T) model coupled with extrapolation to the complete basis-set limit and additive approaches represents the “gold standard” for the structural and spectroscopic characterization of building blocks of biomolecules and nanosystems. However, when open-shell systems are considered, additional problems related to both specific computational difficulties and the need of obtaining spin-dependent properties appear. In this contribution, we present a comprehensive study of the molecular structure and spectroscopic (IR, Raman, EPR) properties of the phenyl radical with the aim of validating an accurate computational protocol able to deal with conjugated open-shell species. We succeeded in obtaining reliable and accurate results, thus confirming and, partly, extending the available experimental data. The main issue to be pointed out is the need of going beyond the CCSD(T) level by including a full treatment of triple excitations in order to fulfil the accuracy requirements. On the other hand, the reliability of density functional theory in properly treating open-shell systems has been further confirmed. PMID:23802956

  13. Anatomical brain images alone can accurately diagnose chronic neuropsychiatric illnesses.

    PubMed

    Bansal, Ravi; Staib, Lawrence H; Laine, Andrew F; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S

    2012-01-01

    Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these methods are likely to provide biomarkers

  14. ADC Mothers Reach Self-Sufficiency through Comprehensive Support and Family Development Services Program.

    ERIC Educational Resources Information Center

    Randolph, Gayle C., II; McCarthy, Karen V.

    Families whose primary or sole means of financial support is derived from the welfare system are attempting to meet immediate survival needs in the same manner as families outside of the system. Project Self-Sufficiency is a program which dedicates time to building trusting relationships based on mutual respect and the belief that, with support,…

  15. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept-what we think the probability now is-depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  16. Retrocausal Effects as a Consequence of Quantum Mechanics Refined to Accommodate the Principle of Sufficient Reason

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, Henry P.

    2011-05-10

    The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.

  17. Accurate single-shot quantitative phase imaging of biological specimens with telecentric digital holographic microscopy.

    PubMed

    Doblas, Ana; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Saavedra, Genaro; Garcia-Sucerquia, Jorge

    2014-04-01

    The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performances of nontelecentric DHM and telecentric DHM are evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of the Drosophila melanogaster fly and of red blood cells are obtained via single-shot DHM with no numerical postprocessing. With these maps we show that the use of telecentric DHM provides a larger field of view for a given magnification and permits more accurate QPI measurements with fewer computational operations.

  18. Accurate and facile determination of the index of refraction of organic thin films near the carbon 1s absorption edge.

    PubMed

    Yan, Hongping; Wang, Cheng; McCarn, Allison R; Ade, Harald

    2013-04-26

    A practical and accurate method to obtain the index of refraction, especially the decrement δ, across the carbon 1s absorption edge is demonstrated. The combination of absorption spectra scaled to the Henke atomic scattering factor database, the use of the doubly subtractive Kramers-Kronig relations, and high precision specular reflectivity measurements from thin films allow the notoriously difficult-to-measure δ to be determined with high accuracy. No independent knowledge of the film thickness or density is required. High confidence interpolation between relatively sparse measurements of δ across an absorption edge is achieved. Accurate optical constants determined by this method are expected to greatly improve the simulation and interpretation of resonant soft x-ray scattering and reflectivity data. The method is demonstrated using poly(methyl methacrylate) and should be extendable to all organic materials.

  19. Optimizing Methods of Obtaining Stellar Parameters for the H3 Survey

    NASA Astrophysics Data System (ADS)

    Ivory, KeShawn; Conroy, Charlie; Cargile, Phillip

    2018-01-01

    The Stellar Halo at High Resolution with Hectochelle Survey (H3) is in the process of observing and collecting stellar parameters for stars in the Milky Way's halo. With a goal of measuring radial velocities for fainter stars, it is crucial to have optimal methods of obtaining this and other parameters from the data for these stars. The method currently developed is The Payne, named after Cecilia Payne-Gaposchkin, a code that uses neural networks and Markov Chain Monte Carlo methods to utilize both spectra and photometry to obtain values for stellar parameters. This project investigated the benefit of fitting both spectra and spectral energy distributions (SEDs). Mock spectra using the parameters of the Sun were created and noise was inserted at various signal-to-noise values. The Payne then fit each mock spectrum with and without a mock SED also generated from solar parameters. The result was that at high signal-to-noise, the spectrum dominated and the effect of fitting the SED was minimal. At low signal-to-noise, however, the addition of the SED greatly decreased the standard deviation of the fitted parameters and resulted in more accurate values for temperature and metallicity.
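
    The benefit of adding photometry can be seen from the structure of a joint Gaussian likelihood: the total log-likelihood is simply the sum of a spectral chi-square term and an SED chi-square term, so well-measured photometry keeps constraining the fit when the spectral term is noisy. The sketch below illustrates that decomposition with placeholder model vectors; it is not The Payne's actual likelihood, spectral model, or MCMC machinery.

      import numpy as np

      def chi2(data, model, sigma):
          return np.sum(((data - model) / sigma) ** 2)

      def joint_log_likelihood(spec_obs, spec_model, spec_sigma,
                               sed_obs, sed_model, sed_sigma):
          # Gaussian errors: the joint lnL is the sum of the two chi-square terms.
          return -0.5 * (chi2(spec_obs, spec_model, spec_sigma)
                         + chi2(sed_obs, sed_model, sed_sigma))

      rng = np.random.default_rng(2)
      spec_model = np.ones(500)                   # toy normalized spectrum
      sed_model = np.array([1.0, 0.8, 0.6, 0.5])  # toy photometric fluxes
      spec_sigma, sed_sigma = 0.5, 0.02           # low spectral S/N, good photometry

      spec_obs = spec_model + spec_sigma * rng.normal(size=spec_model.size)
      sed_obs = sed_model + sed_sigma * rng.normal(size=sed_model.size)
      print("joint lnL:", joint_log_likelihood(spec_obs, spec_model, spec_sigma,
                                               sed_obs, sed_model, sed_sigma))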

  20. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features, for which the ICP registration method can become unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrated from different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to achieve rough alignment of the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the Generalized Gauss-Markoff model is used to estimate the optimal transformation parameters. The experiments show the measurement results for a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
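
    The record above estimates the sensor-to-sensor transformation with a Generalized Gauss-Markoff adjustment; as a simpler stand-in, the sketch below recovers a rigid transform from corresponding artifact points with the closed-form SVD (Kabsch) solution. The point sets, noise level, and assumed correspondences are all illustrative, not the paper's procedure.

      import numpy as np

      def rigid_transform(P, Q):
          """Least-squares rigid transform (R, t) mapping points P onto Q via the
          standard SVD (Kabsch) solution. P, Q are (N, 3) corresponding points,
          e.g. features on a calibration artifact seen by two sensors."""
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
          R = Vt.T @ D @ U.T
          t = cQ - R @ cP
          return R, t

      rng = np.random.default_rng(3)
      P = rng.uniform(-50, 50, size=(8, 3))
      R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
      Q = P @ R_true.T + np.array([10.0, -5.0, 2.0]) + 0.05 * rng.normal(size=P.shape)

      R_est, t_est = rigid_transform(P, Q)
      print("estimated translation:", np.round(t_est, 2))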

  1. Analyzing the impact of price subsidy on rice self-sufficiency level in Malaysia: A preliminary finding

    NASA Astrophysics Data System (ADS)

    Rahim, Farah Hanim Abdul; Abidin, Norhaslinda Zainal; Hawari, Nurul Nazihah

    2017-11-01

    The Malaysian government has targeted 100% rice self-sufficiency for the country's rice industry, whereas Malaysia's rice self-sufficiency level (SSL) currently stands at 65% to 75%. The government has therefore implemented several policies to increase rice production in Malaysia in order to meet the growing demand for rice. In this paper, the effect of price support on the rice production system in Malaysia is investigated. This study utilizes the system dynamics approach for the rice production system in Malaysia, in which the factors are interrelated and change dynamically through time. Scenario analysis was conducted using a system dynamics model by making changes to the price subsidy to see its effect on rice production and the rice SSL. The system dynamics model provides a framework for understanding the effect of price subsidy on the rice self-sufficiency level. The scenario analysis of the model shows that a 50% increase in the price subsidy leads to a substantial increase in demand as the rice price drops. Accordingly, local production increases by 15%. However, the SSL slightly decreases because local production is insufficient to meet the enlarged demand.

  2. Temperature dependent effective potential method for accurate free energy calculations of solids

    NASA Astrophysics Data System (ADS)

    Hellman, Olle; Steneteg, Peter; Abrikosov, I. A.; Simak, S. I.

    2013-03-01

    We have developed a thorough and accurate method of determining anharmonic free energies, the temperature dependent effective potential technique (TDEP). It is based on ab initio molecular dynamics followed by a mapping onto a model Hamiltonian that describes the lattice dynamics. The formalism and the numerical aspects of the technique are described in detail. A number of practical examples are given, and the results confirm the usefulness of TDEP within ab initio and classical molecular dynamics frameworks. In particular, we examine from first principles the behavior of force constants upon the dynamical stabilization of the body-centered phase of Zr, and show that they become more localized. We also calculate the phase diagram for 4He modeled with the Aziz potential and obtain results that are in favorable agreement with both experiment and established techniques.
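
    At its core, the mapping onto a model lattice Hamiltonian is a linear least-squares fit of effective force constants to the displacements and forces sampled during the molecular dynamics run. The sketch below shows that fitting step on synthetic data; the symmetry constraints, supercell bookkeeping, and units of a real TDEP implementation are omitted, and the force-constant matrix is an invented placeholder.

      import numpy as np

      # Fit an effective force-constant matrix Phi from sampled displacements u(t)
      # and forces f(t), assuming the harmonic form f = -Phi u (least squares).
      rng = np.random.default_rng(4)
      n_dof, n_steps = 6, 2000
      Phi_true = np.diag([2.0, 2.0, 2.0, 1.0, 1.0, 1.0]) + 0.1 * np.ones((n_dof, n_dof))
      Phi_true = 0.5 * (Phi_true + Phi_true.T)            # symmetric force constants

      U = rng.normal(scale=0.05, size=(n_steps, n_dof))   # sampled displacements
      F = -U @ Phi_true.T + 0.01 * rng.normal(size=(n_steps, n_dof))  # noisy forces

      # Solve F ~ -U Phi^T via linear least squares, then recover Phi.
      Phi_fit = -np.linalg.lstsq(U, F, rcond=None)[0].T
      print("max |Phi_fit - Phi_true| =", np.abs(Phi_fit - Phi_true).max())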

  3. Using stereophotogrammetric technology for obtaining intraoral digital impressions of implants.

    PubMed

    Pradíes, Guillermo; Ferreiroa, Alberto; Özcan, Mutlu; Giménez, Beatriz; Martínez-Rus, Francisco

    2014-04-01

    The procedure for making impressions of multiple implants continues to be a challenge, despite the various techniques proposed to date. The authors' objective in this case report is to describe a novel digital impression method for multiple implants involving the use of stereophotogrammetric technology. The authors present three cases of patients who had multiple implants in which the impressions were obtained with this technology. Initially, a stereo camera with an infrared flash detects the position of special flag abutments screwed into the implants. This process is based on registering the x, y and z coordinates of each implant and the distances between them. This information is converted into a stereolithographic (STL) file. To add the soft-tissue information, the user must obtain another STL file by using an intraoral or extraoral scanner. In the first case presented, this information was acquired from the plaster model with an extraoral scanner; in the second case, from a Digital Imaging and Communication in Medicine (DICOM) file of the plaster model obtained with cone-beam computed tomography; and in the third case, through an intraoral digital impression with a confocal scanner. In the three cases, the frameworks manufactured from this technique showed a correct clinical passive fit. At follow-up appointments held six, 12 and 24 months after insertion of the prosthesis, no complications were reported. Stereophotogrammetric technology is a viable, accurate and easy technique for making multiple implant impressions. Clinicians can use stereophotogrammetric technology to acquire reliable digital master models as a first step in producing frameworks with a correct passive fit.

  4. Faculty Sufficiency and AACSB Accreditation Compliance within a Global University: A Mathematical Modeling Approach

    ERIC Educational Resources Information Center

    Boronico, Jess; Murdy, Jim; Kong, Xinlu

    2014-01-01

    This manuscript proposes a mathematical model to address faculty sufficiency requirements towards assuring overall high quality management education at a global university. Constraining elements include full-time faculty coverage by discipline, location, and program, across multiple campus locations subject to stated service quality standards of…

  5. Towards the comprehensive, rapid, and accurate prediction of the favorable tautomeric states of drug-like molecules in aqueous solution

    NASA Astrophysics Data System (ADS)

    Greenwood, Jeremy R.; Calkins, David; Sullivan, Arron P.; Shelley, John C.

    2010-06-01

    Generating the appropriate protonation states of drug-like molecules in solution is important for success in both ligand- and structure-based virtual screening. Screening collections of millions of compounds requires a method for determining tautomers and their energies that is sufficiently rapid, accurate, and comprehensive. To maximise enrichment, the lowest energy tautomers must be determined from heterogeneous input, without over-enumerating unfavourable states. While computationally expensive, the density functional theory (DFT) method M06-2X/aug-cc-pVTZ(-f) [PB-SCRF] provides accurate energies for enumerated model tautomeric systems. The empirical Hammett-Taft methodology can very rapidly extrapolate substituent effects from model systems to drug-like molecules via the relationship between pKT and pKa. Combining the two complementary approaches transforms the tautomer problem from a scientific challenge to one of engineering scale-up, and avoids issues that arise due to the very limited number of measured pKT values, especially for the complicated heterocycles often favoured by medicinal chemists for their novelty and versatility. Several hundred pre-calculated tautomer energies and substituent pKa effects are tabulated in databases for use in structural adjustment by the program Epik, which treats tautomers as a subset of the larger problem of the protonation states in aqueous ensembles and their energy penalties. Accuracy and coverage are continually improved and expanded by parameterizing new systems of interest using DFT and experimental data. Recommendations are made for how to best incorporate tautomers in molecular design and virtual screening workflows.
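
    Once each protonation or tautomeric state carries an energy penalty, its population in the aqueous ensemble follows from a simple Boltzmann weighting. The sketch below shows that step with invented penalties; it is not Epik's internal scoring, and the state names and values are placeholders.

      import math

      RT = 0.593  # kcal/mol at ~298 K

      def tautomer_populations(penalties_kcal):
          """Boltzmann populations from per-state energy penalties (kcal/mol)."""
          weights = {name: math.exp(-dg / RT) for name, dg in penalties_kcal.items()}
          z = sum(weights.values())
          return {name: w / z for name, w in weights.items()}

      penalties = {"keto": 0.0, "enol": 1.8, "zwitterion": 3.5}  # invented values
      for name, pop in tautomer_populations(penalties).items():
          print(f"{name:11s} {pop:6.3f}")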

  6. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  7. Joint profiling of greenhouse gases, isotopes, thermodynamic variables, and wind from space by combined microwave and IR laser occultation: the ACCURATE concept

    NASA Astrophysics Data System (ADS)

    Kirchengast, G.; Schweitzer, S.

    2008-12-01

    The ACCURATE (Atmospheric Climate and Chemistry in the UTLS Region And climate Trends Explorer) mission was conceived at the Wegener Center in late 2004 and subsequently proposed in 2005 by an international team of more than 20 scientific partners from more than 12 countries to an ESA selection process for next Earth Explorer Missions. While the mission was not selected for formal pre-phase A study, it received very positive evaluation and was recommended for further development and demonstration. ACCURATE employs the occultation measurement principle, known for its unique combination of high vertical resolution, accuracy and long-term stability, in a novel way. It systematically combines use of highly stable signals in the MW 17-23/178-196 GHz bands (LEO-LEO MW crosslink occultation) with laser signals in the SWIR 2-2.5 μm band (LEO-LEO IR laser crosslink occultation) for exploring and monitoring climate and chemistry in the atmosphere with focus on the UTLS region (upper troposphere/lower stratosphere, 5-35 km). The MW occultation is an advanced and at the same time compact version of the LEO-LEO MW occultation concept, studied in 2002-2004 for the ACE+ mission project of ESA for frequencies including the 17-23 GHz band, complemented by U.S. study heritage for frequencies including the 178-196 GHz bands (R. Kursinski et al., Univ. of Arizona, Tucson). The core of ACCURATE is tight synergy of the IR laser crosslinks with the MW crosslinks. The observed parameters, obtained simultaneously and in a self-calibrated manner based on Doppler shift and differential log-transmission profiles, comprise the fundamental thermodynamic variables of the atmosphere (temperature, pressure/geopotential height, humidity) retrieved from the MW bands, complemented by line-of-sight wind, six greenhouse gases (GHGs) and key species of UTLS chemistry (H2O, CO2, CH4, N2O, O3, CO) and four CO2 and H2O isotopes (HDO, H218O, 13CO2, C18OO) from the SWIR band. Furthermore, profiles of

  8. Wringing the last drop of optically stimulated luminescence response for accurate dating of glacial sediments

    NASA Astrophysics Data System (ADS)

    Medialdea, Alicia; Bateman, Mark D.; Evans, David J.; Roberts, David H.; Chiverrell, Richard C.; Clark, Chris D.

    2017-04-01

    BRITICE-CHRONO is a NERC-funded consortium project of more than 40 researchers aiming to establish the retreat patterns of the last British and Irish Ice Sheet. For this purpose, optically stimulated luminescence (OSL) dating, among other dating techniques, has been used in order to establish an accurate chronology. More than 150 samples from glacial environments have been dated and provide key information for modelling of the ice retreat. Nevertheless, luminescence dating of glacial sediments has proven to be challenging: first, glacial sediments were often affected by incomplete bleaching and, secondly, quartz grains within the sediments sampled were often characterized by complex luminescence behaviour, with dim signals and low reproducibility. Specific statistical approaches have been used to overcome the former, so that the estimated ages are based on the grain populations most likely to have been well bleached. This latest work presents how issues surrounding complex luminescence behaviour were overcome in order to obtain accurate OSL ages. The study has been performed on two samples of bedded sand that originated on an ice-walled lake plain in Lincolnshire, UK. Quartz extracts from each sample were artificially bleached and irradiated to known doses. Dose recovery tests have been carried out under different conditions to study the effects of preheat temperature, thermal quenching, the contribution of slow components, a hot bleach after measurement cycles, and IR stimulation. Measurements have been performed on different luminescence readers to study the possible contribution of instrument reproducibility. These have shown that great variability can be observed not only among the studied samples but also within a specific site and even a specific sample. In order to determine an accurate chronology and assign realistic uncertainties to the estimated ages, this variability must be taken into account. Tight acceptance criteria to measured doses from natural, not

  9. In-vitro evaluation of the accuracy of conventional and digital methods of obtaining full-arch dental impressions.

    PubMed

    Ender, Andreas; Mehl, Albert

    2015-01-01

    To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model whose morphology was known from a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications when the correct scanning technique is applied.
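
    The trueness and precision figures quoted above come from a point-to-point surface comparison followed by a (90th - 10th percentile)/2 summary of the deviations. The sketch below reproduces that summary on synthetic point clouds using an unsigned nearest-neighbour distance; the sign convention (inside versus outside the reference surface) and the real mesh handling of the study are omitted.

      import numpy as np
      from scipy.spatial import cKDTree

      def deviation_statistic(test_pts, ref_pts):
          """(90th - 10th percentile) / 2 of nearest-neighbour deviations."""
          dists, _ = cKDTree(ref_pts).query(test_pts)
          p10, p90 = np.percentile(dists, [10, 90])
          return (p90 - p10) / 2.0

      rng = np.random.default_rng(5)
      reference = rng.uniform(0, 50, size=(20_000, 3))                     # "reference model"
      test_scan = reference + rng.normal(scale=0.02, size=reference.shape)  # noisy copy
      print("deviation statistic (mm):",
            round(deviation_statistic(test_scan, reference), 4))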

  10. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    Owing to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulations under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  11. Accurate formula for dissipative interaction in frequency modulation atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Kazuhiro; Matsushige, Kazumi; Yamada, Hirofumi

    2014-12-08

    Much interest has recently focused on the viscosity of nano-confined liquids. Frequency modulation atomic force microscopy (FM-AFM) is a powerful technique that can detect variations in the conservative and dissipative forces between a nanometer-scale tip and a sample surface. We now present an accurate formula to convert the dissipation power of the cantilever measured during the experiment to damping of the tip-sample system. We demonstrated the conversion of the dissipation power versus tip-sample separation curve measured using a colloidal probe cantilever on a mica surface in water to the damping curve, which showed good agreement with the theoretical curve. Moreover, we obtained the damping curve from the dissipation power curve measured on the hydration layers on the mica surface using a nanometer-scale tip, demonstrating that the formula allows us to quantitatively measure the viscosity of a nano-confined liquid using FM-AFM.

  12. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized, simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time-varying structure of the SO2 concentration signals at each instrument, the plume velocity can be accurately determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
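
    A minimal sketch of the lag-based idea: cross-correlate the two time-synchronized SO2 signals and convert the best-fit lag into a plume speed from the known instrument separation. The synthetic signals, sampling interval, and separation below are assumptions, not FLYSPEC campaign data.

    ```python
    # Hedged sketch: estimate plume speed from the cross-correlation lag between
    # two time-synchronized column-abundance series a known distance apart.
    import numpy as np

    def plume_speed(sig_upwind, sig_downwind, dt_s, separation_m):
        """Return plume speed (m/s) from the best-fit cross-correlation lag."""
        a = sig_upwind - np.mean(sig_upwind)
        b = sig_downwind - np.mean(sig_downwind)
        corr = np.correlate(b, a, mode="full")
        lag = np.argmax(corr) - (len(a) - 1)        # samples by which downwind trails upwind
        if lag <= 0:
            raise ValueError("no positive lag found; check instrument ordering")
        return separation_m / (lag * dt_s)

    # Synthetic test: a concentration pulse advected 30 m in 10 s (expected ~3 m/s)
    dt, sep = 0.5, 30.0
    t = np.arange(0.0, 300.0, dt)
    upwind = np.exp(-((t - 100.0) / 15.0) ** 2) + 0.02 * np.random.default_rng(1).normal(size=t.size)
    downwind = np.exp(-((t - 110.0) / 15.0) ** 2) + 0.02 * np.random.default_rng(2).normal(size=t.size)
    print(plume_speed(upwind, downwind, dt, sep))   # approximately 3 m/s
    ```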

  13. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
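
    The beam derivations of the DEB method are not reproduced here; the toy sketch below only illustrates the underlying idea on a single-degree-of-freedom example. Treating the frequency sensitivity dω/dm = −ω/(2m) as a differential equation in the design variable and integrating it gives a closed-form approximation, which tracks the exact answer far better than a one-term Taylor series for large design changes. All values are illustrative.

    ```python
    # Hedged illustration of the DEB idea (not the paper's beam model): integrate
    # the sensitivity equation d(omega)/dm = -omega/(2m) for a lumped spring-mass
    # system in closed form, and compare with the linear Taylor series.
    import math

    k = 1.0e4           # N/m, spring stiffness (held fixed)
    m0 = 2.0            # kg, baseline design mass
    omega0 = math.sqrt(k / m0)

    def deb_approx(m):
        return omega0 * math.sqrt(m0 / m)                    # closed-form solution of the sensitivity ODE

    def taylor_approx(m):
        return omega0 * (1.0 - (m - m0) / (2.0 * m0))        # one-term linear Taylor series

    def exact(m):
        return math.sqrt(k / m)

    for m in (2.2, 3.0, 4.0):                                # 10%, 50%, 100% mass perturbations
        print(f"m={m:.1f}  exact={exact(m):7.2f}  DEB={deb_approx(m):7.2f}  Taylor={taylor_approx(m):7.2f}")
    ```

    In this lumped-mass toy the integrated sensitivity equation happens to be exact; for the beam quantities treated in the paper the closed-form solutions are only approximations, but the comparison against the linear Taylor series conveys the intent.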

  14. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  15. Accurate Measurements of Aircraft Engine Soot Emissions Using a CAPS PMssa Monitor

    NASA Astrophysics Data System (ADS)

    Onasch, Timothy; Thompson, Kevin; Renbaum-Wolff, Lindsay; Smallwood, Greg; Miake-Lye, Richard; Freedman, Andrew

    2016-04-01

    We present results of aircraft engine soot emissions measurements during the VARIAnT2 campaign using CAPS PMssa monitors. VARIAnT2, an aircraft engine non-volatile particulate matter (nvPM) emissions field campaign, was focused on understanding the variability in nvPM mass measurements using different measurement techniques and accounting for possible nvPM sampling system losses. The CAPS PMssa monitor accurately measures both the optical extinction and scattering (and thus single scattering albedo and absorption) of an extracted sample using the same sample volume for both measurements, with a time resolution of 1 second and sensitivity of better than 1 Mm-1. Absorption is obtained by subtracting the scattering signal from the total extinction. Given that the single scattering albedo of the particulates emitted from the aircraft engine measured at both 630 and 660 nm was on the order of 0.1, any inaccuracy in the scattering measurement has little impact on the accuracy of the determined absorption coefficient. The absorption is converted into nvPM mass using a documented Mass Absorption Coefficient (MAC). Results of soot emission indices (mass of soot emitted per mass of fuel consumed) for a turbojet engine as a function of engine power will be presented and compared to results obtained using an EC/OC monitor.
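
    The arithmetic described above (absorption as extinction minus scattering, then mass via a mass absorption coefficient) can be sketched in a few lines. The MAC value and optical readings below are assumed placeholders, not the campaign's documented values.

    ```python
    # Hedged sketch of the subtraction-plus-MAC conversion; all numbers are
    # illustrative assumptions, not campaign values.
    def nvpm_mass_conc(extinction_Mm, scattering_Mm, mac_m2_per_g):
        """Return (single scattering albedo, nvPM mass in ug/m^3).

        extinction_Mm, scattering_Mm : coefficients in inverse megametres (Mm^-1)
        mac_m2_per_g                 : mass absorption coefficient at the same wavelength
        """
        absorption_Mm = extinction_Mm - scattering_Mm
        ssa = scattering_Mm / extinction_Mm
        absorption_m = absorption_Mm * 1e-6          # Mm^-1 -> m^-1
        mass_g_m3 = absorption_m / mac_m2_per_g
        return ssa, mass_g_m3 * 1e6                  # g/m^3 -> ug/m^3

    # Example: 100 Mm^-1 extinction with SSA ~ 0.1 and an assumed MAC of 8 m^2/g
    print(nvpm_mass_conc(100.0, 10.0, 8.0))          # (0.1, ~11.3 ug/m^3)
    ```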

  16. Early nutritional support and physiotherapy improved long-term self-sufficiency in acutely ill older patients.

    PubMed

    Hegerová, Petra; Dědková, Zuzana; Sobotka, Luboš

    2015-01-01

    An acute disease is regularly associated with inflammation, decreased food intake, and low physical activity; the consequence is loss of muscle mass. However, the restoration of muscle tissue is problematic, especially in older patients. Loss of muscle mass leads to a further decrease in physical activity, which, together with recurring disease, leads to progressive muscle mass loss accompanied by loss of self-sufficiency. Early nutrition support and physical activity could reverse this situation. Therefore, the aim of this study was to determine whether an active approach based on early nutritional therapy and exercise would influence the development of sarcopenia and impaired self-sufficiency during acute illness. Two hundred patients aged >78 y who were admitted to a hospital internal medicine department participated in a prospective, randomized controlled study. The patients were randomized to a control group receiving standard treatment (n = 100) or to an intervention group (n = 100). The intervention consisted of nutritional supplements (600 kcal, 20 g/d protein) added to a standard diet and a simultaneous intensive rehabilitation program. The tolerance of supplements and their influence on spontaneous food intake, self-sufficiency, muscle strength, and body composition were evaluated during the study period. The patients were then regularly monitored for 1 y post-discharge. The provision of nutritional supplements together with early rehabilitation led to increased total energy and protein intake while the intake of standard hospital food was not reduced. The loss of lean body mass and a decrease in self-sufficiency were apparent at discharge from the hospital and 3 mo thereafter in the control group. Nutritional supplementation and the rehabilitation program in the study group prevented these alterations. A positive effect of nutritional intervention and exercise during the hospital stay was apparent at 6 mo post-discharge. The early nutritional intervention

  17. Accurate isotopic fission yields of electromagnetically induced fission of 238U measured in inverse kinematics at relativistic energies

    NASA Astrophysics Data System (ADS)

    Pellereau, E.; Taïeb, J.; Chatillon, A.; Alvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Benlliure, J.; Boutoux, G.; Caamaño, M.; Casarejos, E.; Cortina-Gil, D.; Ebran, A.; Farget, F.; Fernández-Domínguez, B.; Gorbinet, T.; Grente, L.; Heinz, A.; Johansson, H.; Jurado, B.; Kelić-Heil, A.; Kurz, N.; Laurent, B.; Martin, J.-F.; Nociforo, C.; Paradela, C.; Pietri, S.; Rodríguez-Sánchez, J. L.; Schmidt, K.-H.; Simon, H.; Tassan-Got, L.; Vargas, J.; Voss, B.; Weick, H.

    2017-05-01

    SOFIA (Studies On Fission with Aladin) is a novel experimental program, dedicated to accurate measurements of fission-fragment isotopic yields. The setup allows us to fully identify, in nuclear charge and mass, both fission fragments in coincidence for the whole fission-fragment range. It was installed at the GSI facility (Darmstadt), to benefit from the relativistic heavy-ion beams available there, and thus to use inverse kinematics. This paper reports on fission yields obtained in electromagnetically induced fission of 238U.

  18. Obtaining a Dry Extract from the Mikania laevigata Leaves with Potential for Antiulcer Activity

    PubMed Central

    Pinto, Mariana Viana; Oliveira, Ezequiane Machado; Martins, Jose Luiz Rodrigues; de Paula, Jose Realino; Costa, Elson Alves; da Conceição, Edemilson Cardoso; Bara, Maria Teresa Freitas

    2017-01-01

    Background: Mikania laevigata leaves are commonly used in Brazil as a medicinal plant. Objective: To obtain hydroalcoholic dried extract by nebulization and evaluate its antiulcerogenic potential. Materials and Methods: Plant material and hydroalcoholic extract were processed and analyzed for their physicochemical characteristics. An HPLC method was validated to quantify coumarin and o-coumaric acid. The hydroalcoholic extract was spray dried and the powder obtained was characterized in terms of its physicochemical parameters and potential for antiulcerogenic activity. Results: The analytical method proved to be selective, linear, precise, accurate, sensitive, and robust. M. laevigata spray dried extract was obtained using colloidal silicon dioxide as adjuvant and was shown to contain 1.83 ± 0.004% coumarin and 0.80 ± 0.012% o-coumaric acid. It showed significant antiulcer activity in an indomethacin-induced gastric lesion model in mice and also produced a gastroprotective effect. Conclusion: This dried extract from M. laevigata could be a promising intermediate phytopharmaceutical product. SUMMARY A standardized dried extract of Mikania laevigata leaves was developed by spray drying, and the production process was monitored through its chemical profile, physicochemical properties, and potential for antiulcerogenic activity. Abbreviations used: DE: M. laevigata spray dried extract, HE: hydroalcoholic extract. PMID:28216886

  19. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static and mobile acquisitions. In this way, various configurations have been tested using multiple synchronized cameras, and these configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses were estimated. Reference measurements were also made with a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.
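
    As a hedged sketch of the fisheye intrinsic-calibration step mentioned above (not the authors' processing chain), the code below estimates the internal parameters of a fisheye lens from checkerboard views using OpenCV's cv2.fisheye module; the image paths, board geometry, and flag choices are assumptions for illustration.

    ```python
    # Hedged sketch (an assumption, not the authors' workflow): estimating fisheye
    # internal parameters from checkerboard images with OpenCV's fisheye model.
    import glob
    import cv2
    import numpy as np

    BOARD = (9, 6)                                   # assumed inner-corner count of the checkerboard
    objp = np.zeros((1, BOARD[0] * BOARD[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

    objpoints, imgpoints, image_size = [], [], None
    for path in glob.glob("gopro_calibration/*.jpg"):    # hypothetical calibration images
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (3, 3), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
            objpoints.append(objp)
            imgpoints.append(corners)

    K, D = np.zeros((3, 3)), np.zeros((4, 1))
    rvecs = [np.zeros((1, 1, 3)) for _ in objpoints]
    tvecs = [np.zeros((1, 1, 3)) for _ in objpoints]
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        objpoints, imgpoints, image_size, K, D, rvecs, tvecs, flags,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
    print("RMS reprojection error:", rms)
    print("K =\n", K, "\nD =", D.ravel())
    ```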

  20. 49 CFR 40.275 - What is the effect of procedural problems that are not sufficient to cancel an alcohol test?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... not sufficient to cancel an alcohol test? 40.275 Section 40.275 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.275 What is the effect of procedural problems that are not sufficient to...