Science.gov

Sample records for accurate detailed information

  1. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
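    The threshold rule described above can be sketched in a few lines; `choose_route`, the cost arguments, and the uniform tie-break are illustrative assumptions, not the paper's implementation:

```python
import random

def choose_route(cost_a, cost_b, br_threshold):
    """Pick route 'A' or 'B' from feedback costs under bounded rationality.

    If the reported cost difference is within the boundedly rational
    threshold BR, the traveler treats the routes as equivalent and
    chooses uniformly at random; otherwise the cheaper route is taken.
    """
    if abs(cost_a - cost_b) < br_threshold:
        return random.choice(["A", "B"])
    return "A" if cost_a < cost_b else "B"
```

    With BR = 0, this reduces to always chasing the reportedly best route, which is the behavior the paper identifies as destabilizing under delayed feedback.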

  2. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  3. 17 CFR 240.17Ad-10 - Prompt posting of certificate detail to master securityholder files, maintenance of accurate...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 4 2014-04-01 2014-04-01 false Prompt posting of certificate detail to master securityholder files, maintenance of accurate securityholder files, communications between co-transfer agents and recordkeeping transfer agents, maintenance of current control book, retention of certificate detail and "buy-in"...

  4. 17 CFR 240.17Ad-10 - Prompt posting of certificate detail to master securityholder files, maintenance of accurate...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 3 2012-04-01 2012-04-01 false Prompt posting of certificate detail to master securityholder files, maintenance of accurate securityholder files, communications between co-transfer agents and recordkeeping transfer agents, maintenance of current control book, retention of certificate detail and "buy-in"...

  5. DETAIL OF PLAQUE WITH ADDITIONAL DESIGN AND CONSTRUCTION INFORMATION, SOUTHEAST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL OF PLAQUE WITH ADDITIONAL DESIGN AND CONSTRUCTION INFORMATION, SOUTHEAST ABUTMENT - Connecticut Avenue Bridge, Spans Rock Creek & Potomac Parkway at Connecticut Avenue, Washington, District of Columbia, DC

  6. 17 CFR 240.17Ad-10 - Prompt posting of certificate detail to master securityholder files, maintenance of accurate...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... between co-transfer agents and recordkeeping transfer agents, maintenance of current control book... securityholder files, maintenance of accurate securityholder files, communications between co-transfer agents and... certificate detail from transfer journals received by the recordkeeping transfer agent from a...

  7. 17 CFR 240.17Ad-10 - Prompt posting of certificate detail to master securityholder files, maintenance of accurate...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... between co-transfer agents and recordkeeping transfer agents, maintenance of current control book... securityholder files, maintenance of accurate securityholder files, communications between co-transfer agents and... certificate detail from transfer journals received by the recordkeeping transfer agent from a...

  8. 17 CFR 240.17Ad-10 - Prompt posting of certificate detail to master securityholder files, maintenance of accurate...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... between co-transfer agents and recordkeeping transfer agents, maintenance of current control book... securityholder files, maintenance of accurate securityholder files, communications between co-transfer agents and... certificate detail from transfer journals received by the recordkeeping transfer agent from a...

  9. A review of the kinetic detail required for accurate predictions of normal shock waves

    NASA Technical Reports Server (NTRS)

    Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.

    1991-01-01

    Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.

  10. Detailed and Highly Accurate 3d Models of High Mountain Areas by the Macs-Himalaya Aerial Camera Platform

    NASA Astrophysics Data System (ADS)

    Brauchle, J.; Hein, D.; Berger, R.

    2015-04-01

    Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes result in reduced ground pixel resolution and degraded quality in the DEM. Exceptionally high brightness differences can, in part, no longer be captured by the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again, glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not achieved. It is for this reason that the DLR Institute of Optical Sensor Systems has developed a new aerial camera, the MACS-Himalaya. The measuring unit comprises four camera modules with an overall aperture angle of 116° perpendicular to the direction of flight. A High Dynamic Range (HDR) mode was introduced so that, within a scene, bright areas such as sun-flooded snow and dark areas such as shaded stone can be imaged. In 2014, a measuring survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at heights up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced. Special challenges and gaps in the investigation of high mountain areas, approaches to resolving these problems, the camera system and the state of evaluation are presented with examples.

  11. 20 CFR 422.604 - Request for detailed information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 422.604 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ORGANIZATION AND PROCEDURES Administrative Review Process Under the Coal Industry Retiree Health Benefit Act of 1992 § 422.604 Request for... whom you have premium responsibility, you may request detailed information as to the work histories...

  12. 19. DETAIL OF AIR FORCE WEATHER INFORMATION TERMINAL AND CHART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. DETAIL OF AIR FORCE WEATHER INFORMATION TERMINAL AND CHART RECORDER LOCATED IMMEDIATELY NORTH OF CONSOLE IN PHOTOS A-15 THROUGH A-18. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  13. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to suppress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm, specifically to address the challenge of accuracy and diversity in CF. Numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms state-of-the-art CF algorithms. Compared with the random-walk-based CF algorithm proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, diversity, precision and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms yields accurate and diverse recommendations. This work suggests that user similarity direction is an important factor in improving personalized recommendation performance.
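    The degree asymmetry described above can be illustrated with a minimal sketch in which the co-rated overlap is normalized by the source user's own degree, so similarity from a small-degree user to a large-degree user exceeds the reverse. The function name and the exact normalization are assumptions for illustration, not the paper's HDCF formula:

```python
def directed_similarity(items_u, items_v):
    """Asymmetric similarity from user u to user v.

    Both arguments are sets of item IDs the user has rated.
    Normalizing the overlap by u's own degree makes
    s(u -> v) != s(v -> u) whenever the degrees differ.
    """
    if not items_u:
        return 0.0
    overlap = len(items_u & items_v)
    return overlap / len(items_u)
```

    For a user with 2 items fully contained in a 4-item user's history, similarity is 1.0 in one direction but only 0.5 in the other, which is the imbalance a direction-aware algorithm can exploit or correct.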

  14. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
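    The deformation-error metric defined above (magnitude of the vector difference between known and predicted displacement, averaged over the field) can be sketched as follows; representing the fields as lists of 3-D displacement vectors is a simplifying assumption:

```python
import math

def deformation_error(known, predicted):
    """Mean magnitude of the vector difference between a known and a
    predicted deformation field, given as parallel lists of
    (dx, dy, dz) displacement vectors, one per sample point."""
    errors = [
        math.dist(k, p)  # Euclidean norm of (known - predicted)
        for k, p in zip(known, predicted)
    ]
    return sum(errors) / len(errors)
```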

  15. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827

  16. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings, but they have also brought about explosive growth in the number of pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  17. The Good, the Strong, and the Accurate: Preschoolers' Evaluations of Informant Attributes

    ERIC Educational Resources Information Center

    Fusaro, Maria; Corriveau, Kathleen H.; Harris, Paul L.

    2011-01-01

    Much recent evidence shows that preschoolers are sensitive to the accuracy of an informant. Faced with two informants, one of whom names familiar objects accurately and the other inaccurately, preschoolers subsequently prefer to learn the names and functions of unfamiliar objects from the more accurate informant. This study examined the inference…

  18. 10 CFR 9.19 - Segregation of exempt information and deletion of identifying details.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Segregation of exempt information and deletion of identifying details. 9.19 Section 9.19 Energy NUCLEAR REGULATORY COMMISSION PUBLIC RECORDS Freedom of Information Act Regulations § 9.19 Segregation of exempt information and deletion of identifying details....

  19. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Reasonable Steps Commission Will Take To Assure Information It Discloses Is Accurate, and That Disclosure Is..., injuries or safety-related incidents involving such a product. Such persons would include, for example, a... set forth below are the steps the Commission will take to analyze the accuracy of information which...

  20. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Reasonable Steps Commission Will Take To Assure Information It Discloses Is Accurate, and That Disclosure Is..., injuries or safety-related incidents involving such a product. Such persons would include, for example, a... set forth below are the steps the Commission will take to analyze the accuracy of information which...

  1. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Reasonable Steps Commission Will Take To Assure Information It Discloses Is Accurate, and That Disclosure Is..., injuries or safety-related incidents involving such a product. Such persons would include, for example, a... set forth below are the steps the Commission will take to analyze the accuracy of information which...

  2. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Reasonable Steps Commission Will Take To Assure Information It Discloses Is Accurate, and That Disclosure Is..., injuries or safety-related incidents involving such a product. Such persons would include, for example, a... set forth below are the steps the Commission will take to analyze the accuracy of information which...

  3. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Reasonable steps to assure information is accurate. 1101.32 Section 1101.32 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS INFORMATION DISCLOSURE UNDER SECTION 6(b) OF THE CONSUMER PRODUCT SAFETY ACT Reasonable Steps Commission Will Take...

  4. 78 FR 29439 - Proposed Information Collection (Request for Details of Expenses) Activity: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-20

    ... From the Federal Register Online via the Government Publishing Office ] DEPARTMENT OF VETERANS AFFAIRS Proposed Information Collection (Request for Details of Expenses) Activity: Comment Request AGENCY...: Request for Details of Expenses, VA Form 21-8049. OMB Control Number: 2900-0138. Type of Review:...

  5. 75 FR 45205 - Proposed Information Collection (Request for Details of Expenses) Activity: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-02

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF VETERANS AFFAIRS Proposed Information Collection (Request for Details of Expenses) Activity: Comment Request AGENCY.... Title: Request for Details of Expenses, VA Form 21-8049. OMB Control Number: 2900-0138. Type of...

  6. How Iron-Containing Proteins Control Dioxygen Chemistry: A Detailed Atomic Level Description Via Accurate Quantum Chemical and Mixed Quantum Mechanics/Molecular Mechanics Calculations.

    SciTech Connect

    Friesner, Richard A.; Baik, Mu-Hyun; Gherman, Benjamin F.; Guallar, Victor; Wirstam, Maria E.; Murphy, Robert B.; Lippard, Stephen J.

    2003-03-01

    Over the past several years, rapid advances in computational hardware, quantum chemical methods, and mixed quantum mechanics/molecular mechanics (QM/MM) techniques have made it possible to model accurately the interaction of ligands with metal-containing proteins at an atomic level of detail. In this paper, we describe the application of our computational methodology, based on density functional theory (DFT) quantum chemical methods, to two diiron-containing proteins that interact with dioxygen: methane monooxygenase (MMO) and hemerythrin (Hr). Although the active sites are structurally related, the biological functions differ substantially. MMO is an enzyme found in methanotrophic bacteria that hydroxylates aliphatic C-H bonds, whereas Hr is a carrier protein for dioxygen used by a number of marine invertebrates. Quantitative descriptions of the structures and energetics of key intermediates and transition states involved in the reaction with dioxygen are provided, allowing their mechanisms to be compared and contrasted in detail. An in-depth understanding of how the chemical identity of the first ligand coordination shell, together with structural features and the electrostatic and van der Waals interactions of more distant shells, controls ligand binding and reactive chemistry is provided, affording a systematic analysis of how iron-containing proteins process dioxygen. Extensive contact with experiment is made for both systems, and a remarkable degree of accuracy and robustness of the calculations is obtained from both a qualitative and quantitative perspective.

  7. Restoring Detailed Geomagnetic and Environmental Information from Continuous Sediment Paleomagnetic Measurement through Optimised Deconvolution

    NASA Astrophysics Data System (ADS)

    Xuan, C.; Oda, H.

    2013-12-01

    The development of pass-through cryogenic magnetometers has greatly improved our efficiency in collecting paleomagnetic and rock magnetic data from continuous samples such as sediment half-core sections and u-channels. During a pass-through measurement, the magnetometer sensor response inevitably convolves with the remanence of the continuous sample. The convolution process results in smoothed measurements and can seriously distort the paleomagnetic signal due to differences in sensor response along different measurement axes. Previous studies have demonstrated that deconvolution can effectively overcome the convolution effect of sensor response and improve the resolution of continuous paleomagnetic data. However, the lack of an easy-to-use deconvolution tool and the difficulty of accurately measuring the magnetometer sensor response have greatly hindered the application of deconvolution. Here, we acquire a reliable estimate of the sensor response of a pass-through cryogenic magnetometer at Oregon State University by integrating repeated measurements of a magnetic point source. The point source is fixed in the center of a well-shaped polycarbonate cube with 5 mm edge length, and measured at every 1 mm position along a 40-cm interval while placing the polycarbonate cube at each of the 5 × 5 grid positions over a 2 × 2 cm² area on the cross section. The acquired sensor response reveals that cross terms (i.e. the response of the pick-up coil for one axis to magnetic signal along other axes) that were often omitted in previous deconvolution practices are clearly not negligible. Utilizing the detailed estimate of magnetometer sensor response, we present UDECON, a graphical tool for convenient application of optimised deconvolution based on Akaike's Bayesian Information Criterion (ABIC) minimization (Oda and Shibuya, 1996). UDECON directly reads a paleomagnetic measurement file, and allows the user to view, compare, and save data before and after deconvolution. Optimised deconvolution

  8. Psychophysiological and behavioral measures for detecting concealed information: the role of memory for crime details.

    PubMed

    Nahari, Galit; Ben-Shakhar, Gershon

    2011-06-01

    This study examined the role of memory for crime details in detecting concealed information using the electrodermal measure, Symptom Validity Test, and Number Guessing Test. Participants were randomly assigned to three groups: guilty, who committed a mock theft; informed-innocents, who were exposed to crime-relevant items; and uninformed-innocents, who had no crime-relevant information. Participants were tested immediately or 1 week later. Results showed (a) all tests detected the guilty in the immediate condition, and combining the tests improved detection efficiency; (b) tests' efficiency declined in the delayed condition, mainly for peripheral details; (c) no distinction between guilty and informed innocents was possible in the immediate, yet some distinction emerged in the delayed condition. These findings suggest that, while time delay may somewhat reduce the ability to detect the guilty, it also diminishes the danger of accusing informed-innocents. PMID:20958308

  9. PC-based Multiple Information System Interface (PC/MISI) detailed design and implementation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1985-01-01

    The design plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the implementation of the system. Each component is described in the detail necessary to allow programmers to implement the system. A description of the system data flow and system file structures is given.

  10. Cas9-chromatin binding information enables more accurate CRISPR off-target prediction

    PubMed Central

    Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar

    2015-01-01

    The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remains a significant goal. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that solely use DNA sequence information; CROP-IT integrates whole genome level biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding as well as cleavage sites. With a user-friendly web-interface, CROP-IT outputs scored and ranked list of potential off-targets that enables improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770

  11. Accurate protein structure modeling using sparse NMR data and homologous structure information

    PubMed Central

    Thompson, James M.; Sgourakis, Nikolaos G.; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L.; Szyperski, Thomas; Montelione, Gaetano T.; Baker, David

    2012-01-01

    While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining 1H, 13C, and 15N backbone and 13Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2–1.9 Å relative to the conventionally determined NMR ensembles and of 0.9–1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments. PMID:22665781
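    The two acceptance criteria above can be sketched as a simple check; `accept_models` and its inputs (per-model all-atom energies and pairwise backbone rmsd values assumed already restricted to the well-defined 75% of residues) are illustrative assumptions, not the authors' pipeline:

```python
def accept_models(restrained_energies, unrestrained_energies, pairwise_rmsd):
    """Apply the two criteria from the abstract:
    (i)  the best restrained model must have a lower all-atom energy
         than the best unrestrained model, and
    (ii) the low-energy structures must converge to within 2.0 A
         backbone rmsd of one another.
    """
    lower_energy = min(restrained_energies) < min(unrestrained_energies)
    converged = max(pairwise_rmsd) <= 2.0
    return lower_energy and converged
```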

  12. The Basingstoke Orthopaedic Database: a high quality accurate information system for audit.

    PubMed

    Barlow, I W; Flynn, N A; Britton, J M

    1994-11-01

    The accuracy of a computerised audit system custom produced for the Orthopaedic Department has been validated by comparison with operating theatre records and patients' case notes. The study revealed only 2.5 per cent missed entries; of the recorded entries, information regarding the nature of the operation was found to be 92.5 per cent complete and 98 per cent accurate. The high percentage accuracy reflects the high degree of medical input in the operation of the system. The Basingstoke Orthopaedic Database is flexible, cheap and easy to maintain. Data are stored in a form that is readily applicable to standard software packages. PMID:7598401

  13. The utility of accurate mass and LC elution time information in the analysis of complex proteomes

    SciTech Connect

    Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.

    2005-08-01

    Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for the specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity, due to the lack of specific modifications, and overestimates the expected complexity, since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time represent a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
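    The combined mass-and-elution-time fingerprint idea can be sketched as below; the function name and default tolerances are illustrative, with the ppm mass window converted to an absolute tolerance around the target mass:

```python
def is_unique_fingerprint(peptides, target, ppm_tol=1.0, net_tol=0.01):
    """Return True if the target (mass, NET) fingerprint matches only
    itself among the candidate peptides, within +/- ppm_tol parts per
    million on mass and +/- net_tol on the 0-1 NET scale.

    `peptides` is a list of (mass_in_Da, net) tuples that includes the
    target itself, so a unique peptide has exactly one match.
    """
    mass_t, net_t = target
    mass_window = mass_t * ppm_tol * 1e-6  # ppm -> absolute Daltons
    matches = sum(
        1
        for mass, net in peptides
        if abs(mass - mass_t) <= mass_window and abs(net - net_t) <= net_tol
    )
    return matches == 1
```

    Tightening `ppm_tol` or `net_tol` shrinks the match window, which is why more complex proteomes demand more accurate measurements.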

  14. Providing community-based health practitioners with timely and accurate discharge medicines information

    PubMed Central

    2012-01-01

    Background Accurate and timely medication information at the point of discharge is essential for continuity of care. There are scarce data on the clinical significance when poor-quality medicines information is passed to the next episode of care. This study aimed to compare the number and clinical significance of medication errors and omissions in discharge medicines information, and the timeliness of delivery of this information to community-based health practitioners, between the existing Hospital Discharge Summary (HDS) and a pharmacist-prepared Medicines Information Transfer Fax (MITF). Method The study used a sample of 80 hospital patients who were at high risk of medication misadventure and who had a MITF completed in the study period June – October 2009 at a tertiary referral hospital. The medicines information in participating patients' MITFs was validated against their Discharge Prescriptions (DP). Medicines information in each patient's HDS was then compared with their validated MITF. An expert clinical panel reviewed identified medication errors and omissions to determine their clinical significance. The time between patient discharge and the dispatching of the MITF and the HDS to each patient's community-based practitioners was calculated from hospital records. Results DPs for 77 of the 80 patients were available for comparison with their MITFs. Medicines information in 71 (92%) of the MITFs matched that of the DP. Comparison of the HDS against the MITF revealed that no HDS was prepared for 16 (21%) patients. Of the remaining 61 patients, 33 (54%) had required medications omitted and 38 (62%) had medication errors in their HDS. The Clinical Panel rated the significance of errors or omissions for 70 patients (16 with no HDS prepared and 54 whose HDS was inconsistent with the validated MITF). In 17 patients the error or omission was rated as insignificant to minor; 23 minor to moderate; 24 moderate to major; and 6 major to catastrophic. 28 (35

  15. Quantitatively Mapping Cellular Viscosity with Detailed Organelle Information via a Designed PET Fluorescent Probe

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Liu, Xiaogang; Spring, David R.; Qian, Xuhong; Cui, Jingnan; Xu, Zhaochao

    2014-06-01

Viscosity is a fundamental physical parameter that influences diffusion in biological processes. The distribution of intracellular viscosity is highly heterogeneous, and it is challenging to obtain a full map of cellular viscosity with detailed organelle information. In this work, we report 1 as the first fluorescent viscosity probe able to quantitatively map cellular viscosity with detailed organelle information based on the PET mechanism. This probe exhibited a significant ratiometric fluorescence intensity enhancement as solvent viscosity increased. The emission intensity increase was attributed to the combined effects of the inhibition of PET due to restricted conformational access (favorable for FRET, but not for PET) and the decreased PET efficiency caused by viscosity-dependent twisted intramolecular charge transfer (TICT). A full map of subcellular viscosity was successfully constructed via fluorescent ratiometric detection and fluorescence lifetime imaging; it was found that lysosomal regions in a cell possess the highest viscosity, followed by mitochondrial regions.

  16. Quantitatively Mapping Cellular Viscosity with Detailed Organelle Information via a Designed PET Fluorescent Probe

    PubMed Central

    Liu, Tianyu; Liu, Xiaogang; Spring, David R.; Qian, Xuhong; Cui, Jingnan; Xu, Zhaochao

    2014-01-01

Viscosity is a fundamental physical parameter that influences diffusion in biological processes. The distribution of intracellular viscosity is highly heterogeneous, and it is challenging to obtain a full map of cellular viscosity with detailed organelle information. In this work, we report 1 as the first fluorescent viscosity probe able to quantitatively map cellular viscosity with detailed organelle information based on the PET mechanism. This probe exhibited a significant ratiometric fluorescence intensity enhancement as solvent viscosity increased. The emission intensity increase was attributed to the combined effects of the inhibition of PET due to restricted conformational access (favorable for FRET, but not for PET) and the decreased PET efficiency caused by viscosity-dependent twisted intramolecular charge transfer (TICT). A full map of subcellular viscosity was successfully constructed via fluorescent ratiometric detection and fluorescence lifetime imaging; it was found that lysosomal regions in a cell possess the highest viscosity, followed by mitochondrial regions. PMID:24957323

  17. Deciding what information is necessary: do patients with advanced cancer want to know all the details?

    PubMed Central

    Russell, Bethany J; Ward, Alicia M

    2011-01-01

    Communicating effectively with patients who have advanced cancer is one of the greatest challenges facing physicians today. Whilst guiding the patient through complex diagnostic and staging techniques, treatment regimens and trials, the physician must translate often imprecise or conflicting data into meaningful personalized information that empowers the patient to make decisions about their life and body. This requires understanding, compassion, patience, and skill. This narrative literature review explores current communication practices, information preferences of oncology patients and their families, and communication strategies that may assist in these delicate interactions. Overwhelmingly, the literature suggests that whilst the majority of patients with advanced cancer do want to know their diagnosis and receive detailed prognostic information, this varies not only between individuals but also for a given individual over time. Barriers to the delivery and understanding of information exist on both sides of the physician–patient relationship, and family dynamics are also influential. Despite identifiable trends, the information preferences of a particular patient cannot be reliably predicted by demographic, cultural, or cancer-specific factors. Therefore, our primary recommendation is that the physician regularly asks the patient what information they would like to know, who else should be given the information and be involved in decision making, and how that information should be presented. PMID:21792328

  18. Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks.

    PubMed

    Zhang, Xiujun; Zhao, Juan; Hao, Jin-Kao; Zhao, Xing-Ming; Chen, Luonan

    2015-03-11

    Mutual information (MI), a quantity describing the nonlinear dependence between two random variables, has been widely used to construct gene regulatory networks (GRNs). Despite its good performance, MI cannot separate the direct regulations from indirect ones among genes. Although the conditional mutual information (CMI) is able to identify the direct regulations, it generally underestimates the regulation strength, i.e. it may result in false negatives when inferring gene regulations. In this work, to overcome the problems, we propose a novel concept, namely conditional mutual inclusive information (CMI2), to describe the regulations between genes. Furthermore, with CMI2, we develop a new approach, namely CMI2NI (CMI2-based network inference), for reverse-engineering GRNs. In CMI2NI, CMI2 is used to quantify the mutual information between two genes given a third one through calculating the Kullback-Leibler divergence between the postulated distributions of including and excluding the edge between the two genes. The benchmark results on the GRNs from DREAM challenge as well as the SOS DNA repair network in Escherichia coli demonstrate the superior performance of CMI2NI. Specifically, even for gene expression data with small sample size, CMI2NI can not only infer the correct topology of the regulation networks but also accurately quantify the regulation strength between genes. As a case study, CMI2NI was also used to reconstruct cancer-specific GRNs using gene expression data from The Cancer Genome Atlas (TCGA). CMI2NI is freely accessible at http://www.comp-sysbio.org/cmi2ni. PMID:25539927
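    The distinction the abstract draws between marginal MI and conditional MI can be illustrated with a small sketch. This is not the CMI2 estimator described in the paper, only a plain Gaussian-assumption MI/CMI computed on synthetic data (all variable names and coefficients are hypothetical): for a regulatory chain x → z → y, marginal MI reports a strong x–y association, while conditioning on z removes it.

    ```python
    import numpy as np

    def gaussian_mi(x, y):
        """MI(x; y) under a joint-Gaussian assumption:
        0.5 * log(|Cov(x)| * |Cov(y)| / |Cov(x, y)|)."""
        c = np.cov(np.vstack([x, y]))
        return 0.5 * np.log(c[0, 0] * c[1, 1] / np.linalg.det(c))

    def gaussian_cmi(x, y, z):
        """CMI(x; y | z) under a joint-Gaussian assumption, via
        determinants of the relevant sample covariance matrices."""
        c_xz = np.linalg.det(np.cov(np.vstack([x, z])))
        c_yz = np.linalg.det(np.cov(np.vstack([y, z])))
        c_z = np.var(z, ddof=1)
        c_xyz = np.linalg.det(np.cov(np.vstack([x, y, z])))
        return 0.5 * np.log(c_xz * c_yz / (c_z * c_xyz))

    rng = np.random.default_rng(0)
    n = 5000
    # Chain x -> z -> y: x and y are associated, but only indirectly.
    x = rng.normal(size=n)
    z = x + 0.5 * rng.normal(size=n)
    y = z + 0.5 * rng.normal(size=n)
    mi = gaussian_mi(x, y)       # large: marginal MI sees the indirect link
    cmi = gaussian_cmi(x, y, z)  # near zero: conditioning on z removes it
    ```

    The false-negative problem the paper addresses is the opposite failure mode: for a genuinely direct edge, sample CMI tends to underestimate the strength, which is what the KL-divergence-based CMI2 is designed to correct.
    
    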

  19. Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks

    PubMed Central

    Zhang, Xiujun; Zhao, Juan; Hao, Jin-Kao; Zhao, Xing-Ming; Chen, Luonan

    2015-01-01

    Mutual information (MI), a quantity describing the nonlinear dependence between two random variables, has been widely used to construct gene regulatory networks (GRNs). Despite its good performance, MI cannot separate the direct regulations from indirect ones among genes. Although the conditional mutual information (CMI) is able to identify the direct regulations, it generally underestimates the regulation strength, i.e. it may result in false negatives when inferring gene regulations. In this work, to overcome the problems, we propose a novel concept, namely conditional mutual inclusive information (CMI2), to describe the regulations between genes. Furthermore, with CMI2, we develop a new approach, namely CMI2NI (CMI2-based network inference), for reverse-engineering GRNs. In CMI2NI, CMI2 is used to quantify the mutual information between two genes given a third one through calculating the Kullback–Leibler divergence between the postulated distributions of including and excluding the edge between the two genes. The benchmark results on the GRNs from DREAM challenge as well as the SOS DNA repair network in Escherichia coli demonstrate the superior performance of CMI2NI. Specifically, even for gene expression data with small sample size, CMI2NI can not only infer the correct topology of the regulation networks but also accurately quantify the regulation strength between genes. As a case study, CMI2NI was also used to reconstruct cancer-specific GRNs using gene expression data from The Cancer Genome Atlas (TCGA). CMI2NI is freely accessible at http://www.comp-sysbio.org/cmi2ni. PMID:25539927

  20. Fracture Network Characteristics Informed by Detailed Studies of Chlorinated Solvent Plumes in Sedimentary Rock Aquifers

    NASA Astrophysics Data System (ADS)

    Parker, B. L.; Chapman, S.

    2015-12-01

    Various numerical approaches have been used to simulate contaminant plumes in fractured porous rock, but the one that allows field and laboratory measurements to be most directly used as inputs to these models is the Discrete Fracture Network (DFN) Approach. To effectively account for fracture-matrix interactions, emphasis must be placed on identifying and parameterizing all of the fractures that participate substantially in groundwater flow and contaminant transport. High-resolution plume studies at four primary research sites, where chlorinated solvent plumes serve as long-term (several decades) tracer tests, provide insight into the density of the fracture network that is unattainable by conventional methods. Datasets include contaminant profiles from detailed VOC subsampling informed by continuous core logs, hydraulic head and transmissivity profiles, packer testing and sensitive temperature logging methods in FLUTe™ lined holes. These show the presence of many more transmissive fractures, in contrast to observations of only a few flow zones per borehole obtained from conventional hydraulic tests, including flow metering in open boreholes. Incorporating many more fractures with a wider range of transmissivities is key to predicting contaminant migration. This new understanding of dense fracture networks, combined with matrix property measurements, has informed 2-D DFN flow and transport modelling using Fractran and HydroGeosphere to simulate plume characteristics ground-truthed by detailed field site plume characterization. These process-based simulations corroborate field findings that plumes in sedimentary rock after decades of transport show limited plume front distances and strong internal plume attenuation by diffusion, transverse dispersion and slow degradation. This successful application of DFN modeling informed by field-derived parameters demonstrates how the DFN Approach can be applied to other sites to inform plume migration rates and remedial efficacy.

  1. Alignment of capillary electrophoresis-mass spectrometry datasets using accurate mass information.

    PubMed

    Nevedomskaya, Ekaterina; Derks, Rico; Deelder, André M; Mayboroda, Oleg A; Palmblad, Magnus

    2009-12-01

    Capillary electrophoresis-mass spectrometry (CE-MS) is a powerful technique for the analysis of small soluble compounds in biological fluids. A major drawback of CE is the poor migration time reproducibility, which makes it difficult to combine data from different experiments and correctly assign compounds. A number of alignment algorithms have been developed but not all of them can cope with large and irregular time shifts between CE-MS runs. Here we present a genetic algorithm designed for alignment of CE-MS data using accurate mass information. The utility of the algorithm was demonstrated on real data, and the results were compared with one of the existing packages. The new algorithm showed a significant reduction of elution time variation in the aligned datasets. The importance of mass accuracy for the performance of the algorithm was also demonstrated by comparing alignments of datasets from a standard time-of-flight (TOF) instrument with those from the new ultrahigh resolution TOF maXis (Bruker Daltonics). PMID:19826795
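    The published method is a genetic algorithm; as a rough illustration of why accurate mass matters for alignment, the sketch below (hypothetical feature lists and tolerances, not the authors' code) pairs features across two runs by m/z within a ppm window, then fits a simple linear migration-time correction from the matched pairs:

    ```python
    import numpy as np

    def match_by_mass(run_a, run_b, ppm=10.0):
        """Pair features from two runs whose m/z agree within a ppm
        tolerance. Each run is an array of (mz, migration_time) rows."""
        pairs = []
        for mz_a, t_a in run_a:
            d = np.abs(run_b[:, 0] - mz_a)
            j = int(np.argmin(d))
            if d[j] / mz_a * 1e6 <= ppm:
                pairs.append((t_a, run_b[j, 1]))
        return np.array(pairs)

    def fit_time_warp(pairs):
        """Least-squares linear map t_b ~ a * t_a + b from matched pairs."""
        a, b = np.polyfit(pairs[:, 0], pairs[:, 1], 1)
        return a, b

    # Synthetic example: run B is run A with a stretch and an offset.
    run_a = np.array([[150.0584, 5.0], [203.0526, 7.5], [348.0709, 9.2],
                      [429.0889, 12.1], [514.1107, 14.8]])
    run_b = run_a.copy()
    run_b[:, 1] = 1.1 * run_a[:, 1] + 0.4   # simulated migration-time drift
    a, b = fit_time_warp(match_by_mass(run_a, run_b))  # recovers the drift
    ```

    With low mass accuracy the ppm window must widen and mismatched pairs corrupt the fit, which is consistent with the paper's observation that the ultrahigh-resolution TOF data aligned better. Real CE time shifts are also often nonlinear, which is where a genetic algorithm over a more flexible warp becomes useful.
    
    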

  2. Learning the Structure of High-Dimensional Manifolds with Self-Organizing Maps for Accurate Information Extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Lili

    This work aims to improve the capability of accurate information extraction from high-dimensional data, with a specific neural learning paradigm, the Self-Organizing Map (SOM). The SOM is an unsupervised learning algorithm that can faithfully sense the manifold structure and support supervised learning of relevant information from the data. Yet open problems regarding SOM learning exist. We focus on the following two issues. (1) Evaluation of topology preservation. Topology preservation is essential for SOMs in faithful representation of manifold structure. However, in reality, topology violations are not unusual, especially when the data have complicated structure. Measures capable of accurately quantifying and informatively expressing topology violations are lacking. One contribution of this work is a new measure, the Weighted Differential Topographic Function (WDTF), which differentiates an existing measure, the Topographic Function (TF), and incorporates detailed data distribution as an importance weighting of violations to distinguish severe violations from insignificant ones. Another contribution is an interactive visual tool, TopoView, which facilitates the visual inspection of violations on the SOM lattice. We show the effectiveness of the combined use of the WDTF and TopoView through a simple two-dimensional data set and two hyperspectral images. (2) Learning multiple latent variables from high-dimensional data. We use an existing two-layer SOM-hybrid supervised architecture, which captures the manifold structure in its SOM hidden layer, and then, uses its output layer to perform the supervised learning of latent variables. In the customary way, the output layer only uses the strongest output of the SOM neurons. This severely limits the learning capability. We allow multiple, k, strongest responses of the SOM neurons for the supervised learning. Moreover, the fact that different latent variables can be best learned with different values of k motivates a
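    The WDTF proposed in this work differentiates and weights the Topographic Function; as a baseline illustration of the kind of topology-preservation measure being discussed, the sketch below trains a minimal SOM and computes the standard topographic error (the fraction of samples whose two best-matching units are not lattice neighbors). All parameters are illustrative, and this is not the WDTF itself.

    ```python
    import numpy as np

    def train_som(data, rows, cols, iters=2000, seed=0):
        """Minimal 2-D SOM: rectangular lattice, Gaussian neighborhood,
        linearly decaying learning rate and neighborhood radius."""
        rng = np.random.default_rng(seed)
        grid = np.array([[r, c] for r in range(rows) for c in range(cols)])
        w = rng.normal(size=(rows * cols, data.shape[1]))
        for t in range(iters):
            x = data[rng.integers(len(data))]
            bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            lr = 0.5 * (1 - t / iters)
            sigma = max(0.5, (max(rows, cols) / 2) * (1 - t / iters))
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1)
                       / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
        return w, grid

    def topographic_error(data, w, grid):
        """Fraction of samples whose two best-matching units are not
        adjacent on the lattice -- a basic topology-violation measure."""
        errs = 0
        for x in data:
            d = ((w - x) ** 2).sum(axis=1)
            b1, b2 = np.argsort(d)[:2]
            if np.abs(grid[b1] - grid[b2]).sum() > 1:
                errs += 1
        return errs / len(data)

    rng = np.random.default_rng(1)
    data = rng.uniform(size=(500, 2))    # simple 2-D manifold
    w, grid = train_som(data, 6, 6)
    te = topographic_error(data, w, grid)  # low for a well-ordered map
    ```

    The topographic error only counts violations; the WDTF described above goes further by weighting each violation by the local data distribution, so severe violations in dense regions are distinguished from insignificant ones.
    
    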

  3. 28 CFR 5.210 - Amount of detail required in information relating to registrant's activities and expenditures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Amount of detail required in information relating to registrant's activities and expenditures. 5.210 Section 5.210 Judicial Administration... § 5.210 Amount of detail required in information relating to registrant's activities and...

  4. Detailed Clinical Models: Representing Knowledge, Data and Semantics in Healthcare Information Technology

    PubMed Central

    2014-01-01

    Objectives This paper will present an overview of the developmental effort in harmonizing clinical knowledge modeling using Detailed Clinical Models (DCMs), and will explain how it can contribute to the preservation of Electronic Health Record (EHR) data. Methods Clinical knowledge modeling is vital for the management and preservation of EHRs and their data. Such modeling provides common data elements and terminology bindings with the intention of capturing and managing clinical information over time and location, independently of technology. Any EHR data exchange without agreed clinical knowledge modeling will potentially result in loss of information. Results Many past attempts exist to model clinical knowledge for the benefit of semantic interoperability using standardized data representation and common terminologies. The objective of each project is similar with respect to consistent representation of clinical data, use of standardized terminologies, and an overall logical approach. However, the conceptual, logical, and technical expressions are quite different in one clinical knowledge modeling approach versus another. There are currently synergies under the Clinical Information Modeling Initiative (CIMI) to create a harmonized reference model for clinical knowledge models. Conclusions The goal of the CIMI is to create a reference model and formalisms based on, for instance, the DCM (ISO/TS 13972), among other work. A global repository of DCMs may potentially be established in the future. PMID:25152829

  5. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. This document contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements and analytical models.

  6. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the underlying molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI network, and those approaches hold inherent disadvantages, such as being time-consuming, expensive, and having a high false-positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixed physicochemical and evolutionary feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly consist of introducing an effective feature extraction method that can capture discriminative features from evolutionary-based information and physicochemical characteristics, and then employing a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that a DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to traditional experimental methods in future proteomics research. PMID:27571061

  7. Mitigation of Bias in Inversion of Complex Earthquake without Prior Information of Detailed Fault Geometry

    NASA Astrophysics Data System (ADS)

    Kasahara, A.; Yagi, Y.

    2014-12-01

    The rupture process of an earthquake derived from geophysical observations is important information for understanding the nature of earthquakes and assessing seismic hazard. Finite fault inversion is a commonly applied method for constructing a seismic source model. In conventional inversion, the fault is approximated by a simple fault surface even though the rupture of a real earthquake may propagate along a non-planar, complex fault. In the conventional inversion, complex rupture kinematics is approximated by limited model parameters that only represent slip on a simple fault surface. This oversimplification may cause a biased and hence misleading solution. The MW 7.7 left-lateral strike-slip earthquake that occurred in southwestern Pakistan on 2013-09-24 may be an exemplary event for demonstrating this bias. For this earthquake, northeastward rupture propagation was suggested by a finite fault inversion of teleseismic body and long-period surface waves with a single planar fault (USGS). However, the surface displacement field measured from cross-correlation of optical satellite images and back-projection imaging revealed that the rupture propagated unilaterally toward the southwest on a non-planar fault (Avouac et al., 2014). To mitigate the bias, a more flexible source parameterization should be employed. We extended the multi-time-window finite fault method to represent rupture kinematics on a complex fault. Each spatio-temporal knot has five degrees of freedom and is able to represent arbitrary strike, dip, rake, moment release rate and CLVD component. Detailed fault geometry for a source fault is not required in our method. The method considers a data covariance matrix with uncertainty of the Green's function (Yagi and Fukahata, 2011) to obtain a stable solution. Preliminary results show southwestward rupture propagation and a focal mechanism change that is consistent with the fault trace. The result suggests the usefulness of flexible source parameterization for the inversion of complex events.

  8. Towards a first detailed reconstruction of sunspot information over the last 150 years

    NASA Astrophysics Data System (ADS)

    Lefevre, Laure; Clette, Frédéric

    2013-04-01

    With four centuries of solar evolution, the International Sunspot Number (SSN) forms the longest solar time series currently available. It provides an essential reference for understanding and quantifying how the solar output has varied over decades and centuries, and thus for assessing the variations of the main natural forcing on the Earth's climate. For such a quantitative use, this unique time series must be closely monitored for any possible biases and drifts. This is the main objective of the Sunspot Workshops organized jointly by the National Solar Observatory (NSO) and the Royal Observatory of Belgium (ROB) since 2010. Here, we will report on some recent outcomes of past workshops, such as diagnostics of scaling errors and their proposed corrections, or the recent disagreement between the sunspot number and other solar indices like the 10.7 cm radio flux. Our most recent analyses indicate that while part of this divergence may be due to a calibration drift in the SSN, it also results from an intrinsic change in the global magnetic parameters of sunspots and solar active regions, suggesting a possible transition to a new activity regime. Going beyond the SSN series, in the framework of the SOTERIA, TOSCA and SOLID projects, we produced a survey of all existing catalogs providing detailed sunspot information, and we also located different primary solar images and drawing collections that can be exploited to complement the existing catalogs (COMESEP project). These are first steps towards the construction of a multi-parametric time series of multiple sunspot group properties over at least the last 150 years, allowing us to reconstruct and extend the current 1-D SSN series. By bringing new spatial, morphological and evolutionary information, such a data set should bring major advances for the modeling of the solar dynamo and solar irradiance. We will present here the current status of this work. The catalog now extends over the last 3 cycles (Lefevre & Clette 2011

  9. The Sunspot Number and beyond : reconstructing detailed solar information over centuries

    NASA Astrophysics Data System (ADS)

    Lefevre, L.

    2014-12-01

    With four centuries of solar evolution, the International Sunspot Number (SSN) forms the longest solar time series currently available. It provides an essential reference for understanding and quantifying how the solar output has varied over decades and centuries, and thus for assessing the variations of the main natural forcing on the Earth's climate. Because of its importance, this unique time series must be closely monitored for any possible biases and drifts. Here, we report on recent disagreements between solar indices, for example the sunspot number and the 10.7 cm radio flux. Recent analyses indicate that while part of this divergence may be due to a calibration drift in the SSN, it also results from an intrinsic change in the global magnetic parameters of sunspots and solar active regions, suggesting a possible transition to a new activity regime. Going beyond the SSN series, in the framework of the TOSCA (www.cost-tosca.eu/) and SOLID (projects.pmodwrc.ch/solid/) projects, we produced a survey of all existing catalogs providing detailed sunspot information (Lefevre & Clette, 2014:10.1007/s11207-012-0184-5) and we also located different primary solar images and drawing collections that can be exploited to complement the existing catalogs. These are first steps towards the construction of a multi-parametric time series of multiple sunspot and sunspot-group properties over more than a century, allowing us to reconstruct and extend the current 1-D SSN series. By bringing new spatial, morphological and evolutionary information, such a data set should bring major advances for the modeling of the solar dynamo and solar irradiance. We will present here the current status of this work. The preliminary version of the catalog now extends over the last 150 years. It makes use of data from DPD (http://fenyi.solarobs.unideb.hu/DPD/index.html), from the Uccle Solar Equatorial Table (USET: http://sidc.oma.be/uset/) operated by the Royal Observatory of Belgium, the Greenwich

  10. Accurate refinement of docked protein complexes using evolutionary information and deep learning.

    PubMed

    Akbal-Delibas, Bahar; Farhoodi, Roshanak; Pomplun, Marc; Haspel, Nurit

    2016-06-01

    One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate, and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein-protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex with lower root-mean-square deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with a 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35,000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes, also produced by RosettaDock, for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose refinement candidates with lower RMSD values compared to the coarsely docked input structures. PMID:26846813

  11. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    ERIC Educational Resources Information Center

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  12. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including simple sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequencies of variable-length alleles. PMID:26517180

  13. Exploratory Movement Generates Higher-Order Information That Is Sufficient for Accurate Perception of Scaled Egocentric Distance

    PubMed Central

    Mantel, Bruno; Stoffregen, Thomas A.; Campbell, Alain; Bardy, Benoît G.

    2015-01-01

    Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate. PMID:25856410

  14. Information Systems Security and Computer Crime in the IS Curriculum: A Detailed Examination

    ERIC Educational Resources Information Center

    Foltz, C. Bryan; Renwick, Janet S.

    2011-01-01

    The authors examined the extent to which information systems (IS) security and computer crime are covered in information systems programs. Results suggest that IS faculty believe security coverage should be increased in required, elective, and non-IS courses. However, respondent faculty members are concerned that existing curricula leave little…

  15. QuShape: Rapid, accurate, and best-practices quantification of nucleic acid probing information, resolved by capillary electrophoresis

    PubMed Central

    Karabiber, Fethullah; McGinnis, Jennifer L.; Favorov, Oleg V.; Weeks, Kevin M.

    2013-01-01

    Chemical probing of RNA and DNA structure is a widely used and highly informative approach for examining nucleic acid structure and for evaluating interactions with protein and small-molecule ligands. Use of capillary electrophoresis to analyze chemical probing experiments yields hundreds of nucleotides of information per experiment and can be performed on automated instruments. Extraction of the information from capillary electrophoresis electropherograms is a computationally intensive multistep analytical process, and no current software provides rapid, automated, and accurate data analysis. To overcome this bottleneck, we developed a platform-independent, user-friendly software package, QuShape, that yields quantitatively accurate nucleotide reactivity information with minimal user supervision. QuShape incorporates newly developed algorithms for signal decay correction, alignment of time-varying signals within and across capillaries and relative to the RNA nucleotide sequence, and signal scaling across channels or experiments. An analysis-by-reference option enables multiple, related experiments to be fully analyzed in minutes. We illustrate the usefulness and robustness of QuShape by analysis of RNA SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) experiments. PMID:23188808

  16. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction

    PubMed Central

    Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F.

    2015-01-01

    Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs. PMID:26713437

  17. 75 FR 62186 - Agency Information Collection (Request for Details of Expenses) Activity Under OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... submission describes the nature of the information collection and its expected cost and burden; it includes... life insurance received in order to calculate ] the current rate of pension. Pension is an...

  18. 78 FR 53507 - Agency Information Collection (Request for Details of Expenses) Activity Under OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... submission describes the nature of the information collection and its expected cost and burden; it includes... expenses paid by the claimant and/or commercial life insurance received in order to calculate the...

  19. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment

    PubMed Central

    Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David

    2015-01-01

    Three-dimensional interactive virtual environments (VEs) are a powerful tool for brain-imaging based cognitive neuroscience that is presently under-utilized. This paper presents machine-learning based methods for identifying brain states induced by realistic VEs with improved accuracy as well as the capability for mapping their spatial topography on the neocortex. VEs provide the ability to study the brain under conditions closer to the environment in which humans evolved, and thus to probe deeper into the complexities of human cognition. As a test case, we designed a stimulus to reflect a military combat situation in the Middle East, motivated by the potential of using real-time functional magnetic resonance imaging (fMRI) in the treatment of post-traumatic stress disorder. Each subject experienced moving through the virtual town where they encountered 1–6 animated combatants at different locations, while fMRI data were collected. To analyze the data from what are, compared to most studies, more complex and less controlled stimuli, we employed statistical machine learning in the form of Multi-Voxel Pattern Analysis (MVPA) with special attention given to artificial Neural Networks (NN). Extensions to NN that exploit the block structure of the stimulus were developed to improve the accuracy of the classification, achieving performances from 58 to 93% (chance was 16.7%) with six subjects. This demonstrates that MVPA can decode a complex cognitive state, viewing a number of characters, in a dynamic virtual environment. To better understand the source of this information in the brain, a novel form of sensitivity analysis was developed to use NN to quantify the degree to which each voxel contributed to classification. Compared with maps produced by general linear models and the searchlight approach, these sensitivity maps revealed a more diverse pattern of information relevant to the classification of cognitive state. PMID:26106315
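    The block-structure extension mentioned in this record can be illustrated with a minimal sketch (array shapes and labels below are assumed for illustration, not taken from the study): averaging the volumes acquired within one stimulus block yields a single higher-SNR training example per block rather than one per acquired volume.

```python
import numpy as np

def block_average(volumes, block_labels):
    """Average fMRI volumes within each stimulus block, exploiting the
    block structure of the design: one training example per block
    instead of one per acquired volume."""
    volumes = np.asarray(volumes)
    block_labels = np.asarray(block_labels)
    blocks = sorted(set(block_labels.tolist()))
    return np.array([volumes[block_labels == b].mean(axis=0) for b in blocks])

# Toy example: 4 volumes of 3 voxels, two blocks of two volumes each
vols = np.array([[1.0, 2.0, 3.0],
                 [3.0, 2.0, 1.0],
                 [0.0, 0.0, 0.0],
                 [2.0, 2.0, 2.0]])
labels = np.array([0, 0, 1, 1])
print(block_average(vols, labels))  # [[2. 2. 2.], [1. 1. 1.]]
```

    The block-averaged examples would then feed any classifier; the study's NN extensions are more elaborate than this averaging step.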

  20. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often, function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512
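    A Gaussian naïve Bayes classifier of the kind this record describes can be sketched in a few lines. The features and data below are synthetic stand-ins (in the paper the inputs are evolutionary and structural descriptors of residues, not these invented "conservation/burial/packing" scores):

```python
import numpy as np

def fit_gnb(X, y):
    """Fit class-conditional Gaussian means/variances and class priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    """Return the class maximising the log joint likelihood per sample."""
    classes = list(params)
    scores = np.stack([
        -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        + np.log(prior)
        for mu, var, prior in (params[c] for c in classes)
    ])
    return np.array(classes)[np.argmax(scores, axis=0)]

# Synthetic per-residue features (conservation, burial, packing -- assumed):
rng = np.random.default_rng(0)
X_int = rng.normal([0.8, 0.2, 0.7], 0.05, size=(200, 3))  # interface residues
X_non = rng.normal([0.4, 0.6, 0.3], 0.05, size=(200, 3))  # non-interface
X = np.vstack([X_int, X_non])
y = np.array([1] * 200 + [0] * 200)

params = fit_gnb(X, y)
acc = (predict_gnb(params, X) == y).mean()
print(acc)  # well-separated synthetic classes give accuracy near 1.0
```

    The naive-independence assumption (diagonal covariance per class) is what keeps training a simple per-feature mean/variance estimate.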

  1. Detailed requirements document for common software of shuttle program information management system

    NASA Technical Reports Server (NTRS)

    Everette, J. M.; Bradfield, L. D.; Horton, C. L.

    1975-01-01

    Common software was investigated as a method for minimizing development and maintenance cost of the shuttle program information management system (SPIMS) applications while reducing the time-frame of their development. Those requirements satisfying these criteria are presented along with the stand-alone modules which may be used directly by applications. The SPIMS applications, operating on the CYBER 74 computer, are specialized information management systems which use System 2000 as a data base manager. Common software provides the features to support user interactions on a CRT terminal using form input and command response capabilities. These features are available as subroutines to the applications.

  2. 78 FR 42796 - 30-Day Notice of Proposed Information Collection: HUD Standard Grant Application Forms: Detailed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-17

    ...-424), and the Third-Party Documentation Facsimile Transmittal Form (HUD- 96011) AGENCY: Office of the...- Party Documentation Facsimile Transmittal Form (HUD-96011). OMB Approval Number: 2501-0017. Type of... Documentation Facsimile Transmittal Form allows the Department to collect the same information electronically...

  3. A Tale of Two Course Guides: Providing Students with Detailed Course Information

    ERIC Educational Resources Information Center

    Hanson, Karen; Williamson, Kasi

    2010-01-01

    Where do students find out about courses they might take? Potentially, from just about anywhere: friends, bulletin boards, department Web sites, advisors, e-mails, or flyers posted in the halls. Of course, some of these sources are more trustworthy than others. Where should students go to get reliable information that can help them make wise…

  4. Detailed design specification for the ALT Shuttle Information Extraction Subsystem (SIES)

    NASA Technical Reports Server (NTRS)

    Clouette, G. L.; Fitzpatrick, W. N.

    1976-01-01

    The approach and landing test (ALT) shuttle information extraction system (SIES) is described in terms of general requirements and system characteristics output products and processing options, output products and data sources, and system data flow. The ALT SIES is a data reduction system designed to satisfy certain data processing requirements for the ALT phase of the space shuttle program. The specific ALT SIES data processing requirements are stated in the data reduction complex approach and landing test data processing requirements. In general, ALT SIES must produce time correlated data products as a result of standardized data reduction or special purpose analytical processes. The main characteristics of ALT SIES are: (1) the system operates in a batch (non-interactive) mode; (2) the processing is table driven; (3) it is data base oriented; (4) it has simple operating procedures; and (5) it requires a minimum of run time information.

  5. Informed Consent for Interventional Radiology Procedures: A Survey Detailing Current European Practice

    SciTech Connect

    O'Dwyer, H.M.; Lyon, S.M.; Fotheringham, T.; Lee, M.J.

    2003-09-15

    Purpose: Official recommendations for obtaining informed consent for interventional radiology procedures are that the patient gives their consent to the operator more than 24 hr prior to the procedure. This has significant implications for interventional radiology practice. The purpose of this study was to identify the proportion of European interventional radiologists who conform to these guidelines. Methods: A questionnaire was designed consisting of 12 questions on current working practice and opinions regarding informed consent. These questions related to where, when and by whom consent was obtained from the patient. Questions also related to the use of formal consent forms and written patient information leaflets. Respondents were asked whether they felt patients received adequate explanation regarding indications for intervention,the procedure, alternative treatment options and complications. The questionnaire was distributed to 786 European interventional radiologists who were members of interventional societies. The anonymous replies were then entered into a database and analyzed. Results: Two hundred and fifty-four (32.3%) questionnaires were returned. Institutions were classified as academic (56.7%),non-academic (40.5%) or private (2.8%). Depending on the procedure,in a significant proportion of patients consent was obtained in the outpatient department (22%), on the ward (65%) and in the radiology day case ward (25%), but in over half (56%) of patients consent or re-consent was obtained in the interventional suite. Fifty percent of respondents indicated that they obtain consent more than 24 hr before some procedures, in 42.9% consent is obtained on the morning of the procedure and 48.8% indicated that in some patients consent is obtained immediately before the procedure. We found that junior medical staff obtained consent in 58% of cases. Eighty-two percent of respondents do not use specific consent forms and 61% have patient information leaflets. The

  6. Can Detailed Mapping of Subglacial Pillow Lavas Inform our Understanding of Pillow-dominated Submarine Eruptions?

    NASA Astrophysics Data System (ADS)

    Pollock, M.; Edwards, B. R.; Bowman, L.; Was, E.; Alcorn, R.; Hauksdottir, S.

    2012-12-01

    Submarine pillows comprise a significant portion of the upper oceanic crust, yet the emplacement dynamics of pillow lavas are not well understood. For example, recent studies suggest that submarine pillow-dominated eruptions may be accompanied by explosive episodes, even at very deep (>3 km) depths (e.g., Sohn et al., 2008). Because these deep-water deposits are not easily examined in detail, we are studying the eruptive processes that control submarine pillow lavas by investigating their subglacial counterparts. Although the two environments have some physical differences, the main difficulty in comparing subglacial and submarine pillows primarily derives from the limited number of detailed studies on glaciovolcanic pillows. Here, we describe observations from Undirhlíthar, a quarry on the Reykjanes Peninsula in southwest Iceland that exposes an almost complete cross-section of a subglacial pillow ridge. Undirhlíthar's glaciovolcanic deposits reflect a complex sequence of multiple eruptive and intrusive events. The south and east walls are dominated by interbedded pillow and pillow-breccia units (Lp2 and Lp3) that consist of pillow basalts ranging in size from 0.5 m to 3 m in diameter. Lp3 is overlain by two pillow units (Lp4 and Lp5); Lp4 is clearly fed by a dike (D-3) and is separated from Lp3 by a lens of vitric tuff-breccia (LT3). The lowermost pillow unit (Lp1) occurs on the western end of the south wall, where it is separated from Lp2 by a layer of stratified vitric tuff-breccia. Two olivine phyric dikes (D-1 and D-2) cut units Lp1 and Lp2. The west wall is dominated by a distinct pillow unit (Lpw) that bears large (up to 1 cm) olivine phenocrysts. Compositionally, the Undirhlíthar units define two trace element populations: (1) incompatible element enriched (LaN/SmN ~1.6; Nb/Zr ~0.15) rocks comprising the lower (older) pillow units (Lp1-Lp3), and (2) less enriched (LaN/SmN ~1.3; Nb/Zr ~0.125) units including dikes (D-1 to D-3), west wall pillows (Lpw

  7. SnowyOwl: accurate prediction of fungal genes by using RNA-Seq and homology information to select among ab initio models

    PubMed Central

    2014-01-01

    Background Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) is freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894
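    As a reminder of how the figures quoted for N. crassa are defined (in gene-prediction benchmarks, "specificity" conventionally means TP/(TP+FP), i.e. the fraction of predictions that are correct, rather than the TN-based clinical definition), a minimal sketch with hypothetical counts chosen to reproduce the quoted 83%/65%:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of real genes recovered: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tp: int, fp: int) -> float:
    """Gene-prediction 'specificity' (precision): TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical counts, chosen only to reproduce the quoted percentages:
print(sensitivity(830, 170))  # 0.83
print(specificity(650, 350))  # 0.65
```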

  8. Sewerage Mapping and Information System of the Metropolis of Tokyo (SEMIS) : Details of the Development and Outline of the System

    NASA Astrophysics Data System (ADS)

    Kawakami, Kouichi; Sekita, Mitsunobu

    It is essential to manage sewerage ledgers as information when maintaining and controlling sewerage, one of the basic infrastructures of cities. The Bureau of Sewerage developed the full-scale Sewerage Mapping and Information System (SEMIS), the first such system built by a local government in Japan, and has operated it since 1985. Before development, questionnaires were conducted to survey how staff engaged in sewage works used sewerage ledgers, and means of improving the preparation of sewerage plans were considered on that basis. Employing these means, the Bureau built a database of the plans and descriptions that comprise sewerage ledgers, and then constructed a computer system that manages it comprehensively. The details of the development and an outline of the system are described.

  9. FAST TRACK COMMUNICATION Far-field x-ray phase contrast imaging has no detailed information on the object

    NASA Astrophysics Data System (ADS)

    Kohn, V. G.; Argunova, T. S.; Je, J. H.

    2010-11-01

    We show that x-ray phase contrast images of some objects with a small cross-section diameter d satisfy the condition for the far-field approximation, d ≪ r1, where r1 = (λz)^(1/2), λ is the x-ray wavelength, and z is the distance from the object to the detector. In this case the size of the image does not match the size of the object, contrary to the edge detection technique. Moreover, the structure of the central fringes of the image is universal, i.e. it is independent of the object cross-section structure. Therefore, these images carry no detailed information on the object.
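    The far-field criterion in this record is easy to evaluate numerically. The wavelength, distance, and object diameter below are assumed values chosen only for illustration:

```python
import math

def fresnel_radius(wavelength_m: float, z_m: float) -> float:
    """r1 = sqrt(lambda * z): the length scale that sets the far-field
    condition d << r1 for an object of cross-section diameter d."""
    return math.sqrt(wavelength_m * z_m)

wavelength = 0.1e-9  # 0.1 nm hard x-rays (assumed)
z = 5.0              # 5 m object-to-detector distance (assumed)
d = 1.0e-6           # 1 micrometre object diameter (assumed)

r1 = fresnel_radius(wavelength, z)
print(r1)      # ~2.24e-05 m, i.e. about 22 micrometres
print(d / r1)  # ~0.045, so d << r1 and the far-field regime applies
```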

  10. Academic detailing.

    PubMed

    Shankar, P R; Jha, N; Piryani, R M; Bajracharya, O; Shrestha, R; Thapa, H S

    2010-01-01

    There are a number of sources available to prescribers to stay up to date about medicines. Prescribers in rural areas of developing countries, however, may not be able to access some of them. Interventions to improve prescribing can be educational, managerial, and regulatory or use a mix of strategies. Detailing by the pharmaceutical industry is widespread. Academic detailing (AD) has been classically seen as a form of continuing medical education in which a trained health professional such as a physician or pharmacist visits physicians in their offices to provide evidence-based information. Face-to-face sessions, preferably on an individual basis, clear educational and behavioural objectives, establishing credibility with respect to objectivity, stimulating physician interaction, use of concise graphic educational materials, highlighting key messages, and when possible, providing positive reinforcement of improved practices in follow-up visits can increase the success of AD initiatives. AD is common in developed countries and certain examples have been cited in this review. In developing countries the authors have come across reports of AD in Pakistan, Sudan, Argentina and Uruguay, Bihar state in India, Zambia, Cuba, Indonesia and Mexico. AD had a consistent, small but potentially significant impact on prescribing practices. AD has far fewer resources at its command compared to the efforts of the industry. Steps have to be taken to formally start AD in Nepal and there may be specific hindering factors similar to those in other developing nations. PMID:21209521

  11. Center for Information Services, Phase II: Detailed System Design and Programming, Part 7 - Text Processing, Phase IIA Final Report.

    ERIC Educational Resources Information Center

    Silva, Georgette M.

    Libraries, as well as larger information networks, are necessarily based upon the storage of information files consisting in many cases of written materials and texts such as books, serials, abstracts, manuscripts and archives. At the present stage of the "information explosion" no librarian can afford to ignore the contribution of modern…

  12. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework for simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable from a single scan of HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel by using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  13. The role of cognitive switching in head-up displays. [to determine pilot ability to accurately extract information from either of two sources

    NASA Technical Reports Server (NTRS)

    Fischer, E.

    1979-01-01

    The pilot's ability to accurately extract information from either one or both of two superimposed sources of information was determined. Static, aerial, color 35 mm slides of external runway environments and slides of corresponding static head-up display (HUD) symbology were used as the sources. A three channel tachistoscope was utilized to show either the HUD alone, the scene alone, or the two slides superimposed. Cognitive performance of the pilots was assessed by determining the percentage of correct answers given to two HUD related questions, two scene related questions, or one HUD and one scene related question.

  14. Transient Auditory Storage of Acoustic Details Is Associated with Release of Speech from Informational Masking in Reverberant Conditions

    ERIC Educational Resources Information Center

    Huang, Ying; Huang, Qiang; Chen, Xun; Wu, Xihong; Li, Liang

    2009-01-01

    Perceptual integration of the sound directly emanating from the source with reflections needs both temporal storage and correlation computation of acoustic details. We examined whether the temporal storage is frequency dependent and associated with speech unmasking. In Experiment 1, a break in correlation (BIC) between interaurally correlated…

  15. Dynamism & Detail

    ERIC Educational Resources Information Center

    Flannery, Maura C.

    2004-01-01

    New material discovered in the study of cell research is presented for the benefit of biology teachers. Huge amounts of data are being generated in fields like cellular dynamics, and it is felt that people's understanding of the cell is becoming much more complex and detailed.

  16. Accurate prediction of protein secondary structure and solvent accessibility by consensus combiners of sequence and structure information

    PubMed Central

    Pollastri, Gianluca; Martin, Alberto JM; Mooney, Catherine; Vullo, Alessandro

    2007-01-01

    Background Structural properties of proteins such as secondary structure and solvent accessibility contribute to three-dimensional structure prediction, not only in the ab initio case but also when homology information to known structures is available. Structural properties are also routinely used in protein analysis even when homology is available, largely because homology modelling is lower throughput than, say, secondary structure prediction. Nonetheless, predictors of secondary structure and solvent accessibility are virtually always ab initio. Results Here we develop high-throughput machine learning systems for the prediction of protein secondary structure and solvent accessibility that exploit homology to proteins of known structure, where available, in the form of simple structural frequency profiles extracted from sets of PDB templates. We compare these systems to their state-of-the-art ab initio counterparts, and to a number of baselines in which secondary structures and solvent accessibilities are extracted directly from the templates. We show that structural information from templates greatly improves secondary structure and solvent accessibility prediction quality, and that, on average, the systems significantly enrich the information contained in the templates. For sequence similarity exceeding 30%, secondary structure prediction quality is approximately 90%, close to its theoretical maximum, and 2-class solvent accessibility roughly 85%. Gains are robust with respect to template selection noise, and significant for marginal sequence similarity and for short alignments, supporting the claim that these improved predictions may prove beneficial beyond the case in which clear homology is available. Conclusion The predictive systems are publicly available. PMID:17570843
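    The "structural frequency profiles" this record describes can be sketched as per-position state frequencies over aligned template annotations. A three-state H/E/C secondary-structure alphabet and equal-length aligned strings are assumed here for illustration:

```python
from collections import Counter

def ss_frequency_profile(templates):
    """Per-position frequencies of secondary-structure states (H, E, C)
    across aligned, equal-length template annotation strings."""
    length = len(templates[0])
    profile = []
    for i in range(length):
        counts = Counter(t[i] for t in templates)
        total = sum(counts.values())
        profile.append({s: counts.get(s, 0) / total for s in "HEC"})
    return profile

# Two toy aligned templates of length 4:
prof = ss_frequency_profile(["HHEC", "HEEC"])
print(prof[0])  # {'H': 1.0, 'E': 0.0, 'C': 0.0}
print(prof[1])  # {'H': 0.5, 'E': 0.5, 'C': 0.0}
```

    Profiles of this kind can be concatenated with sequence profiles as extra input features to a predictor, which is the general idea behind template-augmented prediction.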

  17. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269

  18. Robust fundamental frequency estimation in sustained vowels: detailed algorithmic comparisons and information fusion with adaptive Kalman filtering.

    PubMed

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A; Fox, Cynthia; Ramig, Lorraine O; Clifford, Gari D

    2014-05-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F(0)) of speech signals. This study examines ten F(0) estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F(0) in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F(0) estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F(0) estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F(0) estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F(0) estimation is required. PMID:24815269
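    The adaptive-weighting idea in these two records can be sketched as a scalar Kalman filter that treats each algorithm's F0 reading as a measurement with its own variance. The variances below are assumed constants; in the actual method the weights are derived adaptively from quality and performance measures:

```python
import numpy as np

def fuse_f0(estimates, meas_vars, q=1.0):
    """Fuse per-frame F0 estimates from several algorithms with a scalar
    Kalman filter. estimates: (n_frames, n_algs) readings in Hz;
    meas_vars: (n_algs,) assumed measurement variances; q: process-noise
    variance for a random-walk F0 model."""
    estimates = np.asarray(estimates, dtype=float)
    meas_vars = np.asarray(meas_vars, dtype=float)
    x = float(np.average(estimates[0], weights=1.0 / meas_vars))  # init state
    p = float(meas_vars.min())                                    # init variance
    fused = [x]
    for frame in estimates[1:]:
        p += q                              # predict: F0 drifts as a random walk
        for z, r in zip(frame, meas_vars):  # sequential scalar measurement updates
            k = p / (p + r)                 # Kalman gain
            x += k * (z - x)
            p *= 1.0 - k
        fused.append(x)
    return np.array(fused)

# Three noiseless algorithms agreeing on 120 Hz -> fused track stays at 120
print(fuse_f0(np.full((4, 3), 120.0), [1.0, 2.0, 4.0]))
```

    Low-variance (high-quality) algorithms receive large gains and dominate the fused track, which is the mechanism behind the reported ~16% accuracy improvement over the best single estimator.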

  19. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  20. The Complex Trial Protocol (CTP): a new, countermeasure-resistant, accurate, P300-based method for detection of concealed information.

    PubMed

    Rosenfeld, J Peter; Labkovsky, Elena; Winograd, Michael; Lui, Ming A; Vandenboom, Catherine; Chedid, Erica

    2008-11-01

    A new P300-based concealed information test is described. A rare probe or frequent irrelevant stimulus appears in the same trial in which a target or nontarget later appears. One response follows the first stimulus and uses the same button press regardless of stimulus type. A later second stimulus then appears: target or nontarget. The subject presses one button for a target, another for a nontarget. A P300 to the first stimulus indicates probe recognition. One group was tested in 3 weeks for denied recognition of familiar information. Weeks 1 and 3 were guilty conditions; Week 2 was a countermeasure (CM) condition. The probe-irrelevant differences were significant in all weeks, and percent hits were >90%. Attempted CM use was detectable via elevated reaction time to the first stimulus. In a replication, results were similar. False positive rates for both studies varied from 0 to .08, yielding J. B. Grier (1971) A' values from .9 to 1.0. PMID:18823418
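    A common decision rule in this literature compares probe and irrelevant P300 amplitudes by bootstrap resampling. The sketch below (single-trial amplitudes and the decision threshold are assumed, not taken from the study) counts how often the resampled probe mean exceeds the resampled irrelevant mean:

```python
import numpy as np

def bootstrap_probe_test(probe_amps, irrel_amps, n_boot=2000, seed=0):
    """Return the fraction of bootstrap resamples in which the mean probe
    P300 amplitude exceeds the mean irrelevant amplitude; values near 1.0
    suggest the probe item was recognised."""
    rng = np.random.default_rng(seed)
    probe_amps = np.asarray(probe_amps, dtype=float)
    irrel_amps = np.asarray(irrel_amps, dtype=float)
    hits = 0
    for _ in range(n_boot):
        p = rng.choice(probe_amps, probe_amps.size, replace=True).mean()
        i = rng.choice(irrel_amps, irrel_amps.size, replace=True).mean()
        hits += p > i
    return hits / n_boot

# Synthetic single-trial amplitudes in microvolts (assumed values):
rng = np.random.default_rng(1)
probe = rng.normal(10.0, 2.0, 30)  # larger P300 to the recognised probe
irrel = rng.normal(5.0, 2.0, 30)
print(bootstrap_probe_test(probe, irrel))  # near 1.0 for well-separated means
```

    A guilty/knowledgeable classification would then require this fraction to exceed some criterion (e.g. 0.9); the criterion trades hit rate against the false-positive rates the study reports.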

  1. Crowdsourcing detailed flood data

    NASA Astrophysics Data System (ADS)

    Walliman, Nicholas; Ogden, Ray; Amouzad, Shahrzhad

    2015-04-01

    Over the last decade the average annual loss across the European Union due to flooding has been 4.5bn Euros, but increasingly intense rainfall, as well as population growth, urbanisation and the rising costs of asset replacements, may see this rise to 23bn Euros a year by 2050. Equally disturbing are the profound social costs to individuals, families and communities, which in addition to loss of lives include: loss of livelihoods, decreased purchasing and production power, relocation and migration, adverse psychosocial effects, and hindrance of economic growth and development. Flood prediction, management and defence strategies rely on the availability of accurate information and flood modelling. Whilst automated data gathering (by measurement and satellite) of the extent of flooding is already advanced, it is least reliable in urban and physically complex geographies, where the need for precise estimation is often most acute. Crowdsourced data on actual flood events is a potentially critical component of this, allowing improved accuracy in such situations and identifying the effects of local landscape and topography, where the height of a simple kerb or a discontinuity in a boundary wall can be of profound importance. Mobile 'App'-based data acquisition using crowdsourcing in critical areas can combine camera records with GPS positional data and time, as well as descriptive data relating to the event. This will automatically produce a dataset, managed in ArcView GIS, with the potential for follow-up calls to get more information through structured scripts for each strand. Through this, local residents can provide highly detailed information that can be reflected in sophisticated flood protection models, be core to framing urban resilience strategies, and optimise the effectiveness of investment. This paper will describe this pioneering approach, which will develop flood event data in support of systems that will advance existing approaches such as those developed in the UK.

  2. The Lunar Laser Ranging Experiment: Accurate ranges have given a large improvement in the lunar orbit and new selenophysical information.

    PubMed

    Bender, P L; Currie, D G; Poultney, S K; Alley, C O; Dicke, R H; Wilkinson, D T; Eckhardt, D H; Faller, J E; Kaula, W M; Mulholland, J D; Plotkin, H H; Silverberg, E C; Williams, J G

    1973-10-19

    The lunar ranging measurements now being made at the McDonald Observatory have an accuracy of 1 nsec in round-trip travel time. This corresponds to 15 cm in the one-way distance. The use of lasers with pulse-lengths of less than 1 nsec is expected to give an accuracy of 2 to 3 cm in the next few years. A new station is under construction in Hawaii, and additional stations in other countries are either in operation or under development. It is hoped that these stations will form the basis for a worldwide network to determine polar motion and earth rotation on a regular basis, and will assist in providing information about movement of the tectonic plates making up the earth's surface. Several mobile lunar ranging stations with telescopes having diameters of 1.0 m or less could, in the future, greatly extend the information obtainable about motions within and between the tectonic plates. The data obtained so far by the McDonald Observatory have been used to generate a new lunar ephemeris based on direct numerical integration of the equations of motion for the moon and planets. With this ephemeris, the range to the three Apollo retro-reflectors can be fit to an accuracy of 5 m by adjusting the differences in moments of inertia of the moon about its principal axes, the selenocentric coordinates of the reflectors, and the McDonald longitude. The accuracy of fitting the results is limited currently by errors of the order of an arc second in the angular orientation of the moon, as derived from the best available theory of how the moon rotates in response to the torques acting on it. Both a new calculation of the moon's orientation as a function of time based on direct numerical integration of the torque equations and a new analytic theory of the moon's orientation are expected to be available soon, and to improve considerably the accuracy of fitting the data. The accuracy already achieved routinely in lunar laser ranging represents a hundredfold improvement over any
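The timing-to-range conversion quoted above (1 nsec round trip corresponding to 15 cm one-way) is simply distance = c·t/2, since the laser pulse covers the Earth-Moon distance twice. A one-line sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_range_uncertainty(round_trip_timing_uncertainty_s):
    """Convert a round-trip timing uncertainty to a one-way range uncertainty."""
    return C * round_trip_timing_uncertainty_s / 2.0

print(one_way_range_uncertainty(1e-9))  # 1 nsec round trip -> ~0.15 m (15 cm)
```

The anticipated 2 to 3 cm accuracy thus corresponds to resolving round-trip travel times to roughly 0.13-0.2 nsec.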

  3. American Academy of Orthopaedic Surgeons Disclosure Policy Fails to Accurately Inform Its Members of Potential Conflicts of Interest.

    PubMed

    Tanzer, Dylan; Smith, Karen; Tanzer, Michael

    2015-07-01

    The American Academy of Orthopaedic Surgeons (AAOS) disclosure policy is designed to ensure that members involved in education or policy development remain free of outside influence. Although mandatory for these members, it is voluntary for the rest of the AAOS membership. To determine surgeon compliance with disclosure policy, we conducted a study in which we compared surgeon-consultants' disclosures as posted on 6 major orthopedic companies' websites in 2011 with those surgeons' disclosures as listed in AAOS disclosure program records. We found that 549 AAOS members were identified by at least 1 company as having received consulting payments. Overall, 44% of AAOS members did not comply with disclosure policy, or their information was not available on the AAOS website (range, 37%-61%). This study demonstrated that AAOS's policy of mandatory disclosure for select members and voluntary disclosure for all other members is ineffective. The AAOS disclosure program and the potential consequences of noncompliance need to be reevaluated by the organization if it wants its program to succeed. PMID:26161764

  4. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information.

    PubMed

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W; Jin, Ke; Du, Yingge; Neeway, James J; Ryan, Joseph V; Hu, Dehong; Zhang, Kelvin H L; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest. PMID:25953490

  5. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information

    NASA Astrophysics Data System (ADS)

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W.; Jin, Ke; Du, Yingge; Neeway, James J.; Ryan, Joseph V.; Hu, Dehong; Zhang, Kelvin H. L.; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest.

  6. General Information about Testicular Cancer

    MedlinePlus

    ... professional versions have detailed information written in technical language. The patient versions are written in easy-to-understand, nontechnical language. Both versions have cancer information that is accurate ...

  7. General Information about Prostate Cancer

    MedlinePlus

    ... professional versions have detailed information written in technical language. The patient versions are written in easy-to-understand, nontechnical language. Both versions have cancer information that is accurate ...

  8. General Information about Urethral Cancer

    MedlinePlus

    ... professional versions have detailed information written in technical language. The patient versions are written in easy-to-understand, nontechnical language. Both versions have cancer information that is accurate ...

  9. Details of assessing information content of the Tropospheric Infrared Mapping Spectrometers (TIMS) GEO-CAPE instrument concept when applied for several infrared ozone bands

    NASA Astrophysics Data System (ADS)

    Rairden, R. L.; Kumer, J. B.; Roche, A. E.; Desouza-Machado, S. G.; Chatfield, R. B.; Blatherwick, R.

    2009-12-01

    With support of the NASA ESTO Instrument Incubator Program (IIP), Tropospheric Infrared Mapping Spectrometers (TIMS) have been demonstrated for multi-layer retrieval of atmospheric CO. Two TIMS units operating in spectral regions centered at 2.33 and 4.68 µm were developed for this demonstration. Here we present the details of scaling the characteristics of the demonstration measurements, including spectral range, sample spacing and resolution, and noise per sample, to the scenario of the GEO-CAPE mission and to several additional wavelength regions. This includes the detail of expanding to more than two spectral regions, and an example of scaling the demonstrated noise to the space case and to other spectral regions. As in our oral presentation, methods based on these scaled instrument characteristics for estimating vertical information content are reviewed. The methods are applied, and the estimated vertical information content of measurements in ozone bands near 9.4, 4.7, 3.6 and 3.3 µm, and in various combinations of these bands, is presented. A simple simultaneous retrieval of humidity and ozone from atmospheric spectral absorption data in the 3.3 and 3.6 µm regions, obtained by a solar-viewing FTS, is briefly presented. This is partially analogous to the retrieval of ozone from the earth's surface diffuse reflection of sunlight as viewed from space. It supports the premise that these space-borne measurements can contribute to the quality of the GEO-CAPE ozone measurements.

  10. Short communication: Selecting the most informative mid-infrared spectra wavenumbers to improve the accuracy of prediction models for detailed milk protein content.

    PubMed

    Niero, G; Penasa, M; Gottardo, P; Cassandro, M; De Marchi, M

    2016-03-01

    The objective of this study was to investigate the ability of mid-infrared spectroscopy (MIRS) to predict protein fraction contents of bovine milk samples by applying uninformative variable elimination (UVE) procedure to select the most informative wavenumber variables before partial least squares (PLS) analysis. Reference values (n=114) of protein fractions were measured using reversed-phase HPLC and spectra were acquired through MilkoScan FT6000 (Foss Electric A/S, Hillerød, Denmark). Prediction models were built using the full data set and tested with a leave-one-out cross-validation. Compared with MIRS models developed using standard PLS, the UVE procedure reduced the number of wavenumber variables to be analyzed through PLS regression and improved the accuracy of prediction by 6.0 to 66.7%. Good predictions were obtained for total protein, total casein (CN), and α-CN, which included αS1- and αS2-CN; moderately accurate predictions were observed for κ-CN and total whey protein; and unsatisfactory results were obtained for β-CN, α-lactalbumin, and β-lactoglobulin. Results indicated that UVE combined with PLS is a valid approach to enhance the accuracy of MIRS prediction models for milk protein fractions. PMID:26774721

  11. Establishing an Appropriate Level of Detail (LoD) for a Building Information Model (BIM) - West Block, Parliament Hill, Ottawa, Canada

    NASA Astrophysics Data System (ADS)

    Fai, S.; Rafeiro, J.

    2014-05-01

    In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as a maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we will discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.

  12. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  13. General Information about Adult Brain Tumors

    MedlinePlus

    ... professional versions have detailed information written in technical language. The patient versions are written in easy-to-understand, nontechnical language. Both versions have cancer information that is accurate ...

  14. Lost in translation: preclinical studies on 3,4-methylenedioxymethamphetamine provide information on mechanisms of action, but do not allow accurate prediction of adverse events in humans

    PubMed Central

    Green, AR; King, MV; Shortall, SE; Fone, KCF

    2012-01-01

    3,4-Methylenedioxymethamphetamine (MDMA) induces both acute adverse effects and long-term neurotoxic loss of brain 5-HT neurones in laboratory animals. However, when choosing doses, most preclinical studies have paid little attention to the pharmacokinetics of the drug in humans or animals. The recreational use of MDMA and current clinical investigations of the drug for therapeutic purposes demand better translational pharmacology to allow accurate risk assessment of its ability to induce adverse events. Recent pharmacokinetic studies on MDMA in animals and humans are reviewed and indicate that the risks following MDMA ingestion should be re-evaluated. Acute behavioural and body temperature changes result from rapid MDMA-induced monoamine release, whereas long-term neurotoxicity is primarily caused by metabolites of the drug. Therefore acute physiological changes in humans are fairly accurately mimicked in animals by appropriate dosing, although allometric dosing calculations have little value. Long-term changes require MDMA to be metabolized in a similar manner in experimental animals and humans. However, the rate of metabolism of MDMA and its major metabolites is slower in humans than rats or monkeys, potentially allowing endogenous neuroprotective mechanisms to function in a species-specific manner. Furthermore acute hyperthermia in humans probably limits the chance of recreational users ingesting sufficient MDMA to produce neurotoxicity, unlike in the rat. MDMA also inhibits the major enzyme responsible for its metabolism in humans, thereby also assisting in preventing neurotoxicity. These observations question whether MDMA alone produces long-term 5-HT neurotoxicity in human brain, although when taken in combination with other recreational drugs it may induce neurotoxicity. LINKED ARTICLES This article is commented on by Parrott, pp. 1518–1520 of this issue. To view this commentary visit http://dx.doi.org/10.1111/j.1476-5381.2012.01941.x

  15. Student Information Systems Demystified: The Increasing Demand for Accurate, Timely Data Means Schools and Districts Are Relying Heavily on SIS Technologies

    ERIC Educational Resources Information Center

    McIntire, Todd

    2004-01-01

    Student information systems, one of the first applications of computer technology in education, are undergoing a significant transition yet again. The first major shift in SIS technologies occurred about 15 years ago when they evolved from mainframe programs to client-server solutions. Now, vendors across the board are offering centralized…

  16. Detailed cross sections of the Eocene Green River Formation along the north and east margins of the Piceance Basin, western Colorado, using measured sections and drill hole information

    USGS Publications Warehouse

    Johnson, Ronald C.

    2014-01-01

    This report presents two detailed cross sections of the Eocene Green River Formation in the Piceance Basin, northwestern Colorado, constructed from eight detailed measured sections, fourteen core holes, and two rotary holes. The Eocene Green River Formation in the Piceance Basin contains the world’s largest known oil shale deposit with more than 1.5 trillion barrels of oil in place. It was deposited in Lake Uinta, a long-lived saline lake that once covered much of the Piceance Basin and the Uinta Basin to the west. The cross sections extend across the northern and eastern margins of the Piceance Basin and are intended to aid in correlating between surface sections and the subsurface in the basin.

  17. Detailed design package for design of a video system providing optimal visual information for controlling payload and experiment operations with television

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A detailed description of a video system for controlling space shuttle payloads and experiments is presented in the preliminary design review and critical design review, first and second engineering design reports respectively, and in the final report submitted jointly with the design package. The material contained in the four subsequent sections of the package contains system descriptions, design data, and specifications for the recommended 2-view system. Section 2 contains diagrams relating to the simulation test configuration of the 2-view system. Section 3 contains descriptions and drawings of the deliverable breadboard equipment. A description of the recommended system is contained in Section 4 with equipment specifications in Section 5.

  18. Effect of detailed information in the minority game: optimality of 2-day memory and enhanced efficiency due to random exogenous data

    NASA Astrophysics Data System (ADS)

    Sasidevan, V.

    2016-07-01

    In the minority game (MG), an odd number of heterogeneous and adaptive agents choose between two alternatives and those who end up on the minority side win. When the information available to the agents to make their choice is the identity of the minority side for the past m days, it is well known that the emergent coordination among the agents is maximum when m ~ log2(N). The optimal memory length thus increases with the system size. In this work we show that, in MG, when the information available to the agents to make their choice is the strength of the minority side for the past m days, the optimal memory length for the agents is always two (m = 2) for large enough system sizes. The system is inefficient for m = 1 and converges to random-choice behaviour for m > 2 for large N. Surprisingly, providing the agents with uniformly and randomly sampled m = 1 exogenous information results in an increase in coordination between them compared to the case of endogenous information with any value of m. This is in stark contrast to the conventional MG, where the agents' coordination is invariant or gets worse with respect to such random exogenous information.
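The conventional MG with endogenous identity information, the baseline the abstract contrasts against, can be sketched in a few lines. The parameter values below (agent count, memory, strategies per agent) are illustrative and not taken from the paper:

```python
import random

def minority_game(n_agents=101, m=2, n_strategies=2, rounds=500, seed=1):
    """Conventional minority game: each agent holds a few fixed random
    strategies (lookup tables from the last m winning sides to a choice)
    and on each round plays the strategy with the best virtual score."""
    rng = random.Random(seed)
    n_hist = 2 ** m
    strategies = [[tuple(rng.randrange(2) for _ in range(n_hist))
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)        # last m outcomes, bit-encoded
    minority_sizes = []
    for _ in range(rounds):
        choices = [strategies[a][max(range(n_strategies),
                                     key=lambda s: scores[a][s])][history]
                   for a in range(n_agents)]
        ones = sum(choices)
        minority_side = 1 if ones < n_agents - ones else 0
        minority_sizes.append(min(ones, n_agents - ones))
        for a in range(n_agents):          # reward strategies that would have won
            for s in range(n_strategies):
                if strategies[a][s][history] == minority_side:
                    scores[a][s] += 1
        history = ((history << 1) | minority_side) % n_hist
    return minority_sizes

sizes = minority_game()
# Coordination metric: mean minority size (closer to (N - 1)/2 = 50 is better).
print(sum(sizes) / len(sizes))
```

With N odd there is always a strict minority, and coordination can be read off as how close the mean minority size gets to its ceiling of (N - 1)/2.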

  19. Detailed requirements document for Stowage List and Hardware Tracking System (SLAHTS). [computer based information management system in support of space shuttle orbiter stowage configuration

    NASA Technical Reports Server (NTRS)

    Keltner, D. J.

    1975-01-01

    The Stowage List and Hardware Tracking System, a computer-based information management system used in support of the space shuttle orbiter stowage configuration and Johnson Space Center hardware tracking, is described. The input, processing, and output requirements that serve as a baseline for system development are defined.

  20. Accurate prediction of protein structural classes by incorporating predicted secondary structure information into the general form of Chou's pseudo amino acid composition.

    PubMed

    Kong, Liang; Zhang, Lichao; Lv, Jinfeng

    2014-03-01

    Extracting good representation from protein sequence is fundamental for protein structural classes prediction tasks. In this paper, we propose a novel and powerful method to predict protein structural classes based on the predicted secondary structure information. At the feature extraction stage, a 13-dimensional feature vector is extracted to characterize general contents and spatial arrangements of the secondary structural elements of a given protein sequence. Specifically, four segment-level features are designed to elevate discriminative ability for proteins from the α/β and α+β classes. After the features are extracted, a multi-class non-linear support vector machine classifier is used to implement protein structural classes prediction. We report extensive experiments comparing the proposed method to the state-of-the-art in protein structural classes prediction on three widely used low-similarity benchmark datasets: FC699, 1189 and 640. Our method achieves competitive performance on prediction accuracies, especially for the overall prediction accuracies, which have exceeded the best reported results on all of the three datasets. PMID:24316044

  1. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  2. Detailed mechanism of benzene oxidation

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1987-01-01

    A detailed quantitative mechanism for the oxidation of benzene in both argon- and nitrogen-diluted systems is presented. Computed ignition delay times for argon-diluted mixtures are in satisfactory agreement with experimental results for a wide range of initial conditions. An experimental temperature-versus-time profile for a nitrogen-diluted oxidation was accurately matched, and several concentration profiles were matched qualitatively. Application of sensitivity analysis has given approximate rate constant expressions for the two dominant heat release reactions, the oxidation of C6H5 and C5H5 radicals by molecular oxygen.

  3. Patient and general practitioner attitudes to taking medication to prevent cardiovascular disease after receiving detailed information on risks and benefits of treatment: a qualitative study

    PubMed Central

    2011-01-01

    Background There are now effective drugs to prevent cardiovascular disease and guidelines recommend their use. Patients do not always choose to accept preventive medication at levels of risk reduction recommended in guidelines. The purpose of the study was to identify and explore the attitudes of patients and general practitioners towards preventative medication for cardiovascular disease (CVD) after they have received information about it, and to identify implications for practice and prescribing. Methods Qualitative interviews with GPs and patients following presentation of in-depth information about CVD risks and the absolute effects of medication. Setting: GP practices in Birmingham, United Kingdom. Results In both populations: wide variation in attitudes to preventative medication; concerns about unnecessary drug taking and side effects; preferring to consider lifestyle changes first. In the patient population: whatever their attitudes to medication were, the vast majority explained that they would ultimately do what their GP recommended; there was some misunderstanding of the distinction between curative and preventative medication. A common theme was the degree of trust in their doctors' judgement and recommendations, which contrasted with scepticism about the role of pharmaceutical companies and academics. Scepticism about guidelines was also common among doctors, although many nevertheless recommended treatment for their patients. Conclusions A guideline approach to prescribing preventative medication could be against the interests and preferences of the patient. GPs must take extra care to explain what preventative medication is and why it is recommended, attempt to discern preferences, and make recommendations balancing these potentially conflicting concerns. PMID:21703010

  4. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
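The paper's median-based algorithms are not reproduced here, but the classic Fritsch-Carlson construction it improves upon (monotone everywhere, yet only second-order near strict extrema because of the derivative limiting) is easy to sketch, assuming standard cubic Hermite evaluation:

```python
from bisect import bisect_right

def monotone_cubic(x, y):
    """Fritsch-Carlson monotone piecewise-cubic Hermite interpolant.

    Node derivatives are limited so each cubic piece preserves the
    monotonicity of the data. Returns a callable on [x[0], x[-1]].
    """
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):          # zero slope where the data turn around
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else 0.5 * (d[i - 1] + d[i])
    for i in range(n - 1):             # keep (m_i/d_i, m_{i+1}/d_i) in the radius-3 disc
        if d[i] == 0.0:
            m[i] = m[i + 1] = 0.0
            continue
        a, b = m[i] / d[i], m[i + 1] / d[i]
        r2 = a * a + b * b
        if r2 > 9.0:
            t = 3.0 / r2 ** 0.5
            m[i], m[i + 1] = t * a * d[i], t * b * d[i]

    def f(t):
        i = min(max(bisect_right(x, t) - 1, 0), n - 2)
        h = x[i + 1] - x[i]
        s = (t - x[i]) / h
        return ((1 + 2 * s) * (1 - s) ** 2 * y[i]
                + s * (1 - s) ** 2 * h * m[i]
                + s * s * (3 - 2 * s) * y[i + 1]
                + s * s * (s - 1) * h * m[i + 1])

    return f
```

Sampling the interpolant over monotone data never produces an overshoot, which is exactly the constraint whose relaxation (via the median function) yields the higher-order accuracy described above.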

  5. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
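As a generic illustration of the order-of-accuracy bookkeeping involved (not one of the paper's algorithms), a fourth-order central difference shows the expected error scaling under grid refinement:

```python
import math

def d1_central4(f, x, h):
    """Fourth-order central difference approximation of f'(x)."""
    return (f(x - 2 * h) - 8 * f(x - h) + 8 * f(x + h) - f(x + 2 * h)) / (12 * h)

# Halving h should cut the error by about 2**4 = 16 for a smooth f,
# so the observed order of accuracy should come out close to 4.
e1 = abs(d1_central4(math.sin, 1.0, 1e-2) - math.cos(1.0))
e2 = abs(d1_central4(math.sin, 1.0, 5e-3) - math.cos(1.0))
print(math.log2(e1 / e2))
```

The same refinement study, applied to a propagated wave rather than a pointwise derivative, is how the orders quoted in the abstract are verified in practice.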

  6. We Built This House; It's Time to Move in: Leveraging Existing DICOM Structure to More Completely Utilize Readily Available Detailed Contrast Administration Information.

    PubMed

    Hirsch, Jeffrey D; Siegel, Eliot L; Balasubramanian, Sridhar; Wang, Kenneth C

    2015-08-01

    The Digital Imaging and Communications in Medicine (DICOM) standard is the universal format for interoperability in medical imaging. In addition to imaging data, DICOM has evolved to support a wide range of imaging metadata including contrast administration data that is readily available from many modern contrast injectors. Contrast agent, route of administration, start and stop time, volume, flow rate, and duration can be recorded using DICOM attributes [1]. While this information is sparsely and inconsistently recorded in routine clinical practice, it could potentially be of significant diagnostic value. This work will describe parameters recorded by automatic contrast injectors, summarize the DICOM mechanisms available for tracking contrast injection data, and discuss the role of such data in clinical radiology. PMID:25700615
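To make the metadata concrete, the sketch below lists the Contrast/Bolus attributes the DICOM standard defines (tag numbers from the PS3.6 data dictionary) together with a small extraction helper; the `summarize_contrast` function and its `{(group, element): value}` input are illustrative simplifications, not a real DICOM library API.

```python
# Contrast/Bolus module attributes (group 0x0018) per the DICOM PS3.6
# data dictionary, covering the parameters named in the abstract.
CONTRAST_BOLUS_TAGS = {
    (0x0018, 0x0010): "ContrastBolusAgent",
    (0x0018, 0x1040): "ContrastBolusRoute",
    (0x0018, 0x1041): "ContrastBolusVolume",      # mL
    (0x0018, 0x1042): "ContrastBolusStartTime",
    (0x0018, 0x1043): "ContrastBolusStopTime",
    (0x0018, 0x1046): "ContrastFlowRate",         # mL/s
    (0x0018, 0x1047): "ContrastFlowDuration",     # s
}

def summarize_contrast(header):
    """Collect whatever contrast-administration attributes a parsed
    {(group, element): value} header happens to carry."""
    return {name: header[tag]
            for tag, name in CONTRAST_BOLUS_TAGS.items()
            if tag in header}

example = {(0x0018, 0x0010): "IOHEXOL", (0x0018, 0x1041): "100"}
print(summarize_contrast(example))
```

In routine practice many of these elements arrive empty, which is the sparse, inconsistent recording the authors argue should be remedied.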

  7. A Qualitative and Quantitative Comparison of Sedimentary Palynomorphs, Lipid Biomarkers and Fossil DNA: Which Tool Provides the Most Detailed Paleoecological and Paleoenvironmental Information?

    NASA Astrophysics Data System (ADS)

    Boere, A. C.; Abbas, B.; Rijpstra, W. I.; Volkman, J. K.; Sinninghe Damsté, J. S.; Coolen, M. J.

    2007-12-01

    In recent years, it was shown that Holocene planktonic taxa could be identified at the species-level based on their preserved fossil genetic signatures (fossil DNA) in either cold and/or sulfidic lacustrine and marine settings. Many of those species are not known to leave morphologically recognizable remains and thus most likely would have escaped microscopic determination and enumeration. In addition, fossil DNA analysis also revealed past planktonic taxa for which no specific lipid biomarkers are known. However, the best, and yet unexplored, approach to validate fossil DNA as paleoenvironmental tool would be based on a direct qualitative and quantitative comparison of each of the above described proxies. In an up to 2700-year-old record of undisturbed sulfidic sediments from the Small Meromictic Basin in Ellis Fjord, Antarctica, we compared the quantitative and qualitative distribution of fossil ribosomal DNA of phototrophic algae like diatoms, dinoflagellates and past chemocline bacteria (green sulfur bacteria) with the distribution of their fossil lipid biomarkers: highly branched isoprenoids, dinosterol and carotenoids. For dinoflagellates, we performed a comparative microscopic (palynological) analysis of fossil dinocysts whereas comparative diatom microfossil data was available from the literature. We will discuss important new insights about the cell-specific fate of fossil DNA and the additional paleoenvironmental information which was revealed from the fossil DNA analysis.

  8. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

The paper discusses current methods for the accurate measurement of time by conventional atomic clocks, with particular attention to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. Applications of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as the navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  9. Accurate and Accidental Empathy.

    ERIC Educational Resources Information Center

    Chandler, Michael

    The author offers two controversial criticisms of what are rapidly becoming standard assessment procedures for the measurement of empathic skill. First, he asserts that assessment procedures which attend exclusively to the accuracy with which subjects are able to characterize other people's feelings provide little or no useful information about…

  10. Detail One Half of Wood Truss, Detail One Quarter Plan ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail One Half of Wood Truss, Detail One Quarter Plan of Floor Beams & Bottom Truss Cord, Detail at A Plan, Detail at B Plan - Covered Bridge, Spanning Darby Creek, North Lewisburg, Champaign County, OH

  11. Chord Splicing & Joining Detail; Chord & CrossBracing Joint Details; ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Chord Splicing & Joining Detail; Chord & Cross-Bracing Joint Details; Cross Bracing Center Joint Detail; Chord & Diagonal Joint Detail - Vermont Covered Bridge, Highland Park, spanning Kokomo Creek at West end of Deffenbaugh Street (moved to), Kokomo, Howard County, IN

  12. Arch & Chord Joint Detail; Crossbracing Center Joint Detail; Chord, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Arch & Chord Joint Detail; Crossbracing Center Joint Detail; Chord, Panel Post, Tie & Diagonal Brace Joint Detail; Chord, Panel Post, Tie & Crossbracing Joint Detail - Dunlapsville Covered Bridge, Spanning East Fork Whitewater River, Dunlapsville, Union County, IN

  13. Detailed sensory memory, sloppy working memory.

    PubMed

    Sligte, Ilja G; Vandenbroucke, Annelinde R E; Scholte, H Steven; Lamme, Victor A F

    2010-01-01

Visual short-term memory (VSTM) enables us to actively maintain information in mind for a brief period after stimulus disappearance. According to recent studies, VSTM consists of three stages - iconic memory, fragile VSTM, and visual working memory - with increasingly strict capacity limits and progressively longer lifetimes. Still, the resolution (or amount of visual detail) of each VSTM stage has remained unexplored, and we test it in the present study. We presented people with a change detection task that measures the capacity of all three forms of VSTM, and we added an identification display after each change trial that required people to identify the "pre-change" object. Accurate change detection plus pre-change identification requires subjects to have a high-resolution representation of the "pre-change" object, whereas change detection or identification alone can be based on a hunch that something has changed, without knowing exactly what was presented before. We observed that people maintained 6.1 objects in iconic memory, 4.6 objects in fragile VSTM, and 2.1 objects in visual working memory. Moreover, when people detected the change, they could also identify the pre-change object on 88% of the iconic memory trials, on 71% of the fragile VSTM trials, and on merely 53% of the visual working memory trials. This suggests that people maintain many high-resolution representations in iconic memory and fragile VSTM, but only one high-resolution object representation in visual working memory. PMID:21897823
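
Capacity estimates of the kind reported above (e.g., 6.1 or 2.1 objects) are conventionally derived from change-detection performance with Cowan's K formula, K = N x (hit rate - false-alarm rate). A minimal sketch with hypothetical rates, not the authors' exact analysis:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - FA), the number of items effectively held in memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical illustration: a set size of 8 items with an 80% hit rate
# and a 10% false-alarm rate yields an estimated capacity of 5.6 objects.
k = cowans_k(8, 0.80, 0.10)
```

The correction by the false-alarm rate is what separates genuine memory capacity from a liberal bias toward reporting "change".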

  14. LF460 detail design

    NASA Technical Reports Server (NTRS)

    1971-01-01

This is the final technical report documenting the detail design of the LF460, an advanced turbotip lift fan intended for application with the YJ97-GE-100 turbojet gas generator to a V/STOL transport research aircraft. The primary objective of the design was to achieve a low noise level while maintaining the high thrust/weight ratio capability of a high-pressure-ratio lift fan. The report covers design requirements and summarizes activities and final results in the areas of aerodynamic and mechanical design, component and system performance, acoustic features and final noise predictions.

  15. Details of meiosis

    SciTech Connect

    1993-12-31

Chapter 18 discusses the details of meiosis, beginning with the structure and number of chiasmata, the cytological term for the points of crossing-over at which two homologous chromosomes forming a bivalent, having begun to repel each other, remain held together. The synaptonemal complex, which consists of two lateral elements containing protein and RNA, is also discussed. The chapter concludes with a description of meiosis in polyploids, human meiosis, and the behavior of the X and Y chromosomes. 28 refs., 8 figs.

  16. Detailed Debunking of Denial

    NASA Astrophysics Data System (ADS)

    Enting, I. G.; Abraham, J. P.

    2012-12-01

The disinformation campaign against climate science has been compared to a guerilla war whose tactics undermine the traditional checks and balances of science. One comprehensive response has been to produce archives of generic rebuttals, such as the websites RealClimate and SkepticalScience. We review our experiences with an alternative approach: detailed responses to a small number of high-profile cases. Our particular examples were Professor Ian Plimer and Christopher Monckton, the Third Viscount Monckton of Brenchley, each of whom has been taken seriously by political leaders in our respective countries. We relate our experiences to comparable examples such as John Mashey's analysis of the Wegman report and the formal complaints about Lomborg's "Skeptical Environmentalist" and Durkin's "Great Global Warming Swindle". The two efforts took contrasting forms: an online video of a lecture versus an evolving compendium of misrepresentations. They also differed in emphasis: the analysis of Monckton concentrated on misrepresentation of the science, while the analysis of Plimer concentrated on departures from accepted scientific practice: fabrication of data, misrepresentation of cited sources and unattributed use of the work of others. Benefits of an evolving compendium were the ability to incorporate contributions from members of the public who had identified additional errors and the scope for addressing new aspects as they came to public attention. 'Detailed debunking' gives non-specialists a reference point for distinguishing non-science when engaging in public debate.

  17. Clinical professional governance for detailed clinical models.

    PubMed

    Goossen, William; Goossen-Baremans, Anneke

    2013-01-01

This chapter describes the need for Detailed Clinical Models in contemporary electronic health systems, data exchange and data reuse. It starts with an explanation of the components related to Detailed Clinical Models, with a brief summary of knowledge representation, including terminologies representing clinically relevant "things" in the real world, and information models that abstract these so that computers can process data about them. Next, Detailed Clinical Models are defined and their purpose is described. The work builds on existing developments around the world and culminates in current efforts to create a technical specification at the level of the International Organization for Standardization. The core components of properly expressed Detailed Clinical Models are illustrated, including clinical knowledge and context, data element specification, code bindings to terminologies, and meta-information about authors and versioning, among others. Detailed Clinical Models to date are heavily based on user requirements and specify the conceptual and logical levels of modelling. This is not precise enough for specific implementations, which require an additional step; however, it allows Detailed Clinical Models to serve as specifications for many different kinds of implementations. Examples of Detailed Clinical Models are presented both in text and in Unified Modelling Language. Detailed Clinical Models can be positioned in health information architectures, where they serve at the most detailed, granular level. The chapter ends with examples of projects that create and deploy Detailed Clinical Models. All have in common that they can often reuse materials from earlier projects, and that strict governance of these models is essential to use them safely in health care information and communication technology. Clinical validation is one point of such governance, and model testing another. The Plan-Do-Check-Act cycle can be applied to the governance of Detailed Clinical Models.

  18. Detail of Triton

    NASA Technical Reports Server (NTRS)

    1998-01-01

This color photo of Neptune's large satellite Triton was obtained on Aug. 24, 1989, at a range of 530,000 kilometers (330,000 miles). The resolution is about 10 kilometers (6.2 miles), sufficient to begin to show topographic detail. The image was made from pictures taken through the green, violet and ultraviolet filters. In this technique, regions that are highly reflective in the ultraviolet appear blue in color. In reality, there is no part of Triton that would appear blue to the eye. The bright southern hemisphere of Triton, which fills most of this frame, is generally pink in tone, as is the even brighter equatorial band. The darker regions north of the equator also tend to be pink or reddish in color. JPL manages the Voyager project for NASA's Office of Space Science, Washington, DC.

  19. Cydonia Region - detail

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Detail cut out of PIA01235, Mars Orbiter Camera (MOC) image of a 4.42 by 82.94 km area of the Cydonia Region. The left image is raw, the right has been filtered and contrast enhanced.

    Orbit: 220

    Range: 444.21 km

    Resolution: 4.32 m/pixel

    Emission angle: 44.66 degrees

    Incidence angle: 64.96 degrees

    Phase angle: 61.97 degrees

    Scan rate: 0.1 degree/sec

    Start time: periapsis + 375 sec

    Sequence submitted to JPL: Sat 04/04/98 15:15 PST

    Image acquired by MOC: Sun 04/05/98 00:39:37 PST

    Data retrieved from JPL: Mon 04/06/98 09:05 PDT

  20. South Polar Details

    NASA Technical Reports Server (NTRS)

    2005-01-01

    22 September 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows details among some of the eroded layer outcrops of the martian south polar region. Much of the south polar region of Mars is covered by a thick unit of layered material. For decades, the layers have been assumed to consist of a mixture of dust and ice, but it is equally possible that the materials are sedimentary rocks. This image was captured during southern spring, at a time when some of the surface was still covered by seasonal carbon dioxide (CO2) frost.

Location near: 86.5°S, 116.6°W. Image width: 3 km (1.9 mi). Illumination from: upper left. Season: Southern Spring.

  1. Horizontal Cross Bracing Detail, Vertical Cross Bracing Detail, Horizontal Cross ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Horizontal Cross Bracing Detail, Vertical Cross Bracing Detail, Horizontal Cross Bracing Joint, Vertical Cross Bracing End Detail - Ceylon Covered Bridge, Limberlost Park, spanning Wabash River at County Road 900 South, Geneva, Adams County, IN

  2. roof truss detail, historic strap hinge detail Chopawamsic Recreational ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    roof truss detail, historic strap hinge detail - Chopawamsic Recreational Demonstration Area - Cabin Camp 1, Main Arts and Crafts Lodge, Prince William Forest Park, Triangle, Prince William County, VA

  3. double hung window details, hall window details, entrance door profiles ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    double hung window details, hall window details, entrance door profiles - Chopawamsic Recreational Demonstration Area - Cabin Camp 1, Help's Quarters, Prince William Forest Park, Triangle, Prince William County, VA

  4. DETAILED STUDIES OF ELECTRON COOLING FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; BRUHWILER, D.L.; ABELL, D.T.; SIDORIN, A.O.

    2005-09-18

High-energy electron cooling for RHIC presents many unique features and challenges. An accurate estimate of the cooling times requires detailed simulation of the electron cooling process. The first step towards such calculations is to have an accurate description of the cooling force. Numerical simulations are being used to explore various features of the friction force which appear due to several effects, including the anisotropy of the electron distribution in velocity space and the effect of a strong solenoidal magnetic field. These aspects are being studied in detail using the VORPAL code, which explicitly resolves close binary collisions. Results are compared with available asymptotic and empirical formulas and also, using the BETACOOL code, with direct numerical integration of less approximate expressions over the specified electron distribution function.

  5. Detailed Studies of Electron Cooling Friction Force

    SciTech Connect

    Fedotov, A. V.; Bruhwiler, D. L.; Abell, D. T.; Sidorin, A. O.

    2006-03-20

    High-energy electron cooling for RHIC presents many unique features and challenges. An accurate estimate of the cooling times requires detailed simulation of the electron cooling process. The first step towards such calculations is to have an accurate description of the cooling force. Numerical simulations are being used to explore various features of the friction force which appear due to several effects, including the anisotropy of the electron distribution in velocity space and the effect of a strong solenoidal magnetic field. These aspects are being studied in detail using the VORPAL code, which explicitly resolves close binary collisions. Results are compared with available asymptotic and empirical formulas and also, using the BETACOOL code, with direct numerical integration of less approximate expressions over the specified electron distribution function.

  6. Characteristics of Academic Detailing: Results of a Literature Review

    PubMed Central

    Van Hoof, Thomas J.; Harrison, Lisa G.; Miller, Nicole E.; Pappas, Maryanne S.; Fischer, Michael A.

    2015-01-01

    Background Academic detailing is an evidence-based strategy to improve patient care. Efforts to understand the intervention and to use it strategically require an understanding of its important characteristics. A recent systematic review and a subsequent reporting framework call for more accurate and complete reporting of continuing medical education interventions. Objectives Building on a previously published systematic review of 69 studies, we sought to determine how an expanded set of 106 academic detailing studies, including many recently published articles, fared with respect to reporting of important data about this intervention. Methods We conducted a search of MEDLINE, the Cumulative Index to Nursing and Allied Health Literature (clinical) database, and Scopus, from which we identified 38 additional randomized controlled trials published from August 2007 through March 2013. Including the original 69 studies, we abstracted 106 available English-language studies and quantitatively analyzed information about 4 important characteristics of academic detailing: content of visits, clinicians being visited, communication process underlying visits, and outreach workers making visits. Results We found considerable variation (36.5%-100%) in the extent of reporting intervention characteristics, especially about the communication process underlying visits and the outreach workers making visits. The best overall documentation of intervention characteristics of any single study was 68%. Results also demonstrate wide variation in the approach to academic detailing. Conclusions This study demonstrates the need for a standardized approach to collecting and reporting data about academic detailing interventions. Our findings also highlight opportunities for using academic detailing more effectively in research and quality-improvement efforts. PMID:26702333

  7. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  8. Details Of Collision-Avoidance Study

    NASA Technical Reports Server (NTRS)

    Chappell, Sheryl L.; Billings, Charles E.; Olsen, M. Christine; Scott, Barry C.; Tuttell, Robert J.; Kozon, Thomas E.

    1990-01-01

The report provides background information on, and a detailed description of, a study of pilots' use of the traffic-alert and collision-avoidance system (TCAS II) in simulated flights, described in the article "Evaluation of an Aircraft-Collision-Avoidance System" (ARC-12367). Plans, forms, training narratives, scripts, questionnaires, and other information are compiled.

  9. Morphological details in bloodstain particles.

    PubMed

    De Wael, K; Lepot, L

    2015-01-01

    During the commission of crimes blood can be transferred to the clothing of the offender or on other crime related objects. Bloodstain particles are sub-millimetre sized flakes that are lost from dried bloodstains. The nature of these red particles is easily confirmed using spectroscopic methods. In casework, bloodstain particles showing highly detailed morphological features were observed. These provided a rationale for a series of experiments described in this work. It was found that the "largest" particles are shed from blood deposited on polyester and polyamide woven fabrics. No particles are lost from the stains made on absorbent fabrics and from those made on knitted fabrics. The morphological features observed in bloodstain particles can provide important information on the substrates from which they were lost. PMID:25437904

  10. Detailed modelling of the 21-cm forest

    NASA Astrophysics Data System (ADS)

    Semelin, B.

    2016-01-01

The 21-cm forest is a promising probe of the Epoch of Reionization. The local state of the intergalactic medium (IGM) is encoded in the spectrum of a background source (radio-loud quasars or gamma-ray burst afterglows) by absorption at the local 21-cm wavelength, resulting in a continuous and fluctuating absorption level. Small-scale structures (filaments and minihaloes) in the IGM are responsible for the strongest absorption features. The absorption can also be modulated on large scales by inhomogeneous heating and Wouthuysen-Field coupling. We present the results from a simulation that attempts to preserve the cosmological environment while resolving some of the small-scale structures (a few kpc resolution in a 50 h^-1 Mpc box). The simulation couples the dynamics and the ionizing radiative transfer and includes X-ray and Lyman-line radiative transfer for detailed physical modelling. As a result we find that soft X-ray self-shielding, Ly α self-shielding and shock heating all have an impact on the predicted values of the 21-cm optical depth of moderately overdense structures like filaments. A correct treatment of the peculiar velocities is also critical. Modelling these processes seems necessary for accurate predictions and can be done only at high enough resolution. Based on our fiducial model, we estimate that LOFAR should be able to detect a few (strong) absorption features in a frequency range of a few tens of MHz for a 20 mJy source located at z = 10, while the SKA would extract a large fraction of the absorption information for the same source.

  11. The Finer Details: Climate Modeling

    NASA Technical Reports Server (NTRS)

    2000-01-01

If you want to know whether you will need sunscreen or an umbrella for tomorrow's picnic, you can simply read the local weather report. However, if you are calculating the impact of gas combustion on global temperatures, or anticipating next year's rainfall levels to set water conservation policy, you must conduct a more comprehensive investigation. Such complex matters require long-range modeling techniques that predict broad trends in climate development rather than day-to-day details. Climate models are built from equations that calculate the progression of weather-related conditions over time. Based on the laws of physics, climate model equations have been developed to predict a number of environmental factors, for example: 1. the amount of solar radiation that hits the Earth; 2. the varying proportions of gases that make up the air; 3. the temperature at the Earth's surface; 4. the circulation of ocean and wind currents; 5. the development of cloud cover. Numerical modeling of the climate can improve our understanding of both the past and the future. A model can confirm the accuracy of environmental measurements taken in the past and can even fill in gaps in those records. In addition, by quantifying the relationship between different aspects of climate, scientists can estimate how a future change in one aspect may alter the rest of the world. For example, could an increase in the temperature of the Pacific Ocean somehow set off a drought on the other side of the world? A computer simulation could lead to an answer for this and other questions. Quantifying the chaotic, nonlinear activities that shape our climate is no easy matter. You cannot run these simulations on your desktop computer and expect results by the time you have finished checking your morning e-mail. Efficient and accurate climate modeling requires powerful computers that can process billions of mathematical calculations in a single second. The NCCS exists to provide this degree of vast computing capability.
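
The simplest illustration of a physics-based climate equation of the kind described here is a zero-dimensional energy-balance model, which equates absorbed sunlight with thermal emission. The solar constant and albedo below are standard textbook values, used only as assumptions for this sketch, not values from the article:

```python
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0  # incoming solar irradiance, W m^-2 (assumed value)
ALBEDO = 0.3             # planetary albedo (assumed value)

def equilibrium_temperature(solar_constant=SOLAR_CONSTANT, albedo=ALBEDO):
    """Zero-dimensional energy balance: absorbed shortwave equals emitted
    longwave, (1 - a) * S / 4 = sigma * T^4, solved for T in kelvin."""
    absorbed = (1.0 - albedo) * solar_constant / 4.0
    return (absorbed / SIGMA) ** 0.25

# The classic greenhouse-free estimate for Earth, roughly 255 K.
t_eq = equilibrium_temperature()
```

Real climate models chain thousands of such balance equations over a spatial grid and through time, which is what drives the computing demands described above.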

  12. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  13. Alcohol and remembering a hypothetical sexual assault: Can people who were under the influence of alcohol during the event provide accurate testimony?

    PubMed

    Flowe, Heather D; Takarangi, Melanie K T; Humphries, Joyce E; Wright, Deborah S

    2016-09-01

We examined the influence of alcohol on memory for an interactive hypothetical sexual assault scenario in the laboratory using a balanced placebo design. Female participants completed a memory test 24 hours and 4 months later. Participants reported less information (i.e., responded "don't know" more often to questions) if they were under the influence of alcohol during scenario encoding. However, the accuracy of the information reported by intoxicated participants did not differ from that of sober participants, suggesting that intoxicated participants were effectively monitoring the accuracy of their memory at test. Additionally, peripheral details were remembered less accurately than central details, regardless of intoxication level, and memory accuracy for peripheral details decreased by a larger amount than for central details across the retention interval. Finally, participants were more accurate if they were told they were drinking alcohol rather than a placebo. We discuss theoretical implications for alcohol myopia and memory regulation, together with applied implications for interviewing intoxicated witnesses. PMID:26278075

  14. Detail, Scandia Hotel, view to southwest showing details of balloon ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail, Scandia Hotel, view to southwest showing details of balloon framing, including full two-story studs notched to carry girts supporting second story floor joists (210mm lens) - Scandia Hotel, 225 First Street, Eureka, Humboldt County, CA

  15. 56. DETAIL OF SOUTH SIDE OF WINDING SHEAVES: Detail of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    56. DETAIL OF SOUTH SIDE OF WINDING SHEAVES: Detail of south side of the winding sheaves. These sheaves drive the California Street cable. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  16. Detail view of ornamental lighting detail of southwest corner of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail view of ornamental lighting detail of southwest corner of Sixth Street Bridge. Looking northeast - Sixth Street Bridge, Spanning 101 Freeway at Sixth Street, Los Angeles, Los Angeles County, CA

  17. Detail of pumps in troughs, detail of truss attachment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail of pumps in troughs, detail of truss - attachment to the wall - as well as the troughs themselves. Interior of the main hatchery building, view to the east. - Prairie Creek Fish Hatchery, Hwy. 101, Orick, Humboldt County, CA

  18. 22. PIER NO. IV DETAIL, WITH DETAIL OF SOUTHWEST PORTAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    22. PIER NO. IV DETAIL, WITH DETAIL OF SOUTHWEST PORTAL AND SOUTHEAST WEB OF THROUGH TRUSS; VIEW TO NORTH - Nebraska City Bridge, Spanning Missouri River near Highway 2 between Nebraska & Iowa, Nebraska City, Otoe County, NE

  19. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  20. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
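
The approach described can be illustrated on a problem simpler than the Schrödinger equation: estimate a second derivative with central differences at step sizes h and h/2, then combine the two estimates to cancel the leading error term. A minimal sketch of one Richardson step, not the paper's actual solver:

```python
import math

def central_second_derivative(f, x, h):
    """O(h^2) central-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def richardson(f, x, h):
    """One Richardson extrapolation step: combine estimates at h and h/2
    to cancel the leading O(h^2) error term, giving an O(h^4) result."""
    coarse = central_second_derivative(f, x, h)
    fine = central_second_derivative(f, x, h / 2.0)
    return (4.0 * fine - coarse) / 3.0

# Example: f(x) = sin(x) has f''(x) = -sin(x); at x = 1 the extrapolated
# value is several orders of magnitude closer to -sin(1) than either raw
# estimate, with no reduction in step size beyond h/2.
approx = richardson(math.sin, 1.0, 0.1)
```

Repeating the step on successively refined estimates (a Romberg-style table) is what allows crude-mesh results to be driven to high accuracy, as the abstract describes for expectation values.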

  1. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model, as well as the experiments required to model them.

  2. Influences on physicians' adoption of electronic detailing (e-detailing).

    PubMed

    Alkhateeb, Fadi M; Doucette, William R

    2009-01-01

E-detailing is detailing carried out with digital technology: the internet, video conferencing and interactive voice response. There are two types of e-detailing: interactive (virtual) and video. Currently, little is known about what factors influence physicians' adoption of e-detailing. The objectives of this study were to test a model of physicians' adoption of e-detailing and to describe physicians using e-detailing. A mail survey was sent to a random sample of 2000 physicians practicing in Iowa. Binomial logistic regression was used to test the model of influences on physician adoption of e-detailing. On the basis of Rogers' model of adoption, the independent variables included relative advantage, compatibility, complexity, peer influence, attitudes, years in practice, presence of restrictive access to traditional detailing, type of specialty, academic affiliation, type of practice setting, and control variables. A total of 671 responses were received, giving a response rate of 34.7%. A total of 141 physicians (21.0%) reported using e-detailing. The overall adoption model for using either type of e-detailing was found to be significant. Relative advantage, peer influence, attitudes, type of specialty, presence of restrictive access and years in practice had significant influences on physician adoption of e-detailing. The model of adoption of innovation is useful for explaining physicians' adoption of e-detailing. PMID:19306198

  3. Making detailed predictions makes (some) predictions worse

    NASA Astrophysics Data System (ADS)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  4. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. The impact analysis of blank mask defects depends directly on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. However, the analysis of this information is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes manual defect review an arduous task that is also sensitive to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  5. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  6. The fine details of evolution.

    PubMed

    Laskowski, Roman A; Thornton, Janet M; Sternberg, Michael J E

    2009-08-01

    Charles Darwin's theory of evolution was based on studies of biology at the species level. In the time since his death, studies at the molecular level have confirmed his ideas about the kinship of all life on Earth and have provided a wealth of detail about the evolutionary relationships between different species and a deeper understanding of the finer workings of natural selection. We now have a wealth of data, including the genome sequences of a wide range of organisms, an even larger number of protein sequences, a significant knowledge of the three-dimensional structures of proteins, DNA and other biological molecules, and a huge body of information about the operation of these molecules as systems in the molecular machinery of all living things. This issue of Biochemical Society Transactions contains papers from oral presentations given at a Biochemical Society Focused Meeting to commemorate the 200th Anniversary of Charles Darwin's birth, held on 26-27 January 2009 at the Wellcome Trust Conference Centre, Cambridge. The talks reported on some of the insights into evolution which have been obtained from the study of protein sequences, structures and systems. PMID:19614583

  7. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  8. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  9. An attempt to obtain a detailed declination chart from the United States magnetic anomaly map

    USGS Publications Warehouse

    Alldredge, L.R.

    1989-01-01

    Modern declination charts of the United States show almost no details. It was hoped that declination details could be derived from the information contained in the existing magnetic anomaly map of the United States. This could be realized only if all of the survey data were corrected to a common epoch, at which time a main-field vector model was known, before the anomaly values were computed. Because this was not done, accurate declination values cannot be determined. In spite of this conclusion, declination values were computed using a common main-field model for the entire United States to see how well they compared with observed values. The computed detailed declination values were found to compare less favourably with observed values of declination than declination values computed from the IGRF 1985 model itself. -from Author

  10. A software implementation for detailed volume conductor modelling in electrophysiology using finite difference method.

    PubMed

    Kauppinen, P; Hyttinen, J; Laarne, P; Malmivuo, J

    1999-02-01

    There is an evolving need for new information made available by employing patient-tailored, anatomically accurate computer models of the electrical properties of the human body. Because constructing a computer model sufficiently well can be difficult and laborious, devised models have varied greatly in the level of anatomical accuracy incorporated in them. This has restricted the validity of conducted simulations. In the present study, a versatile software package was developed to transform anatomic voxel data into accurate finite difference method volume conductor models conveniently and in a short time. The package includes components for model construction, simulation, visualisation and detailed analysis of simulation output based on volume conductor theory. Due to the methods developed, models can comprise more anatomical detail than prior computer models. Several models have been constructed, for example, a highly detailed 3-D anatomically accurate computer model of the human thorax as a volume conductor utilising the US National Library of Medicine's (NLM) Visible Human Man (VHM) digital anatomy data. Based on the validation runs, the developed software package is readily applicable to the analysis of a wide range of bioelectric field problems. PMID:10092033
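
For a uniform conductor, the finite difference method underlying such volume conductor models reduces to requiring each interior node's potential to equal the average of its neighbours. The toy solver below is only a sketch of that discretisation (a homogeneous 5×5 grid with a hypothetical 1 V boundary electrode), not the package described above:

```python
def solve_laplace(grid, iters=500):
    """Jacobi relaxation for the 2-D Laplace equation with fixed
    (Dirichlet) boundaries: each interior potential relaxes to the
    average of its four neighbours, as in a finite difference
    volume conductor model with uniform conductivity."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                    grid[i][j-1] + grid[i][j+1])
        grid = new
    return grid

# 5x5 example: a 1 V "electrode" along the top edge, 0 V elsewhere.
g = [[0.0] * 5 for _ in range(5)]
g[0] = [1.0] * 5
sol = solve_laplace(g)
print(round(sol[2][2], 3))  # → 0.25 (centre potential)
```

Real models replace the uniform 0.25 weights with conductivity-dependent coefficients per voxel, which is exactly where anatomical detail enters.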

  11. Occupation Competency Profile: Steel Detailer Program.

    ERIC Educational Resources Information Center

    Alberta Learning, Edmonton. Apprenticeship and Industry Training.

    This document presents information about the apprenticeship training program of Alberta, Canada, in general and the steel detailer program in particular. The first part of the document discusses the following items: Alberta's apprenticeship and industry training system; the apprenticeship and industry training committee structure; local…

  12. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in details. Reductionist approaches, i.e., studying one biological event at a one-gene or one-protein-at-a-time basis, indeed have made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” can not be visualized as a comprehensive “model” of the life of cells, tissues, and organisms, without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis,15-17 however the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins,18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  13. Detailed Chemical Composition of M33 Globular Clusters

    NASA Astrophysics Data System (ADS)

    McWilliam, Andrew

    2008-08-01

    I propose to perform the first high-resolution detailed chemical abundance study of globular clusters (GCs) in the Local Group spiral galaxy M33 with Keck HIRES. My integrated-light technique permits detailed chemical abundance measurement (and approximate ages) for old populations at larger distances than ever before. The basic goal is to accurately define the chemical abundance properties of the M33 GC system, for a comparison with the Milky Way, LMC, SMC and Local Group dwarf galaxies. Abundances of Fe, (alpha)-elements, Na and Al will constrain the relative contributions of Type Ia and Type II SNe and probe ~1 Gyr enrichment timescales. The s-process elements (e.g. Zr, Y, Ba, La), made by AGB stars, will probe timescales of several Gyr. These elements are sensitive to whether chemical enrichment occurred slowly, or in a burst, and whether the enrichment was global, or occurred in disparate systems, such as dwarf galaxies that were later accreted. The project will provide basic information to advance an understanding of chemical enrichment and nucleosynthesis, and galaxy evolution.

  14. Computed tomography: the details.

    SciTech Connect

    Doerry, Armin Walter

    2007-07-01

    Computed Tomography (CT) is a well established technique, particularly in medical imaging, but also applied in Synthetic Aperture Radar (SAR) imaging. Basic CT imaging via back-projection is treated in many texts, but often with insufficient detail to appreciate subtleties such as the role of non-uniform sampling densities. Herein are given some details often neglected in many texts.
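
As a minimal sketch of the back-projection idea the report examines (using only two orthogonal parallel-beam projections, and none of the sampling-density subtleties it treats), each projection is smeared back along its ray direction and the smears are summed:

```python
def back_project(row_sums, col_sums):
    """Unfiltered back-projection from two orthogonal projections:
    smear each projection back along its rays and sum the
    contributions at every pixel."""
    n = len(row_sums)
    return [[(row_sums[i] + col_sums[j]) / 2.0 for j in range(n)]
            for i in range(n)]

# A 3x3 "phantom" with a single bright pixel in the centre.
phantom = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]
rows = [sum(r) for r in phantom]                       # projection at 0 deg
cols = [sum(r[j] for r in phantom) for j in range(3)]  # projection at 90 deg
img = back_project(rows, cols)
print(img[1][1], img[0][0])  # → 9.0 0.0
```

The reconstruction peaks at the true pixel but smears energy along both rays (e.g. `img[0][1]` is 4.5, not 0), which is exactly the blur that filtering, and the careful treatment of sampling densities, is meant to remove.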

  15. 13. DETAIL VIEW OF BRIDGE, LOOKING SOUTH FROM ROADWAY. DETAIL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. DETAIL VIEW OF BRIDGE, LOOKING SOUTH FROM ROADWAY. DETAIL VIEW OF THE PIERS AND LIGHTING FIXTURES ON THE COLORADO STREET BRIDGE. THIS VIEW SHOWS A PORTION OF THE BRIDGE ALONG THE SOUTH SIDE OF THE ROADWAY. EACH FIXTURE ALSO ORIGINALLY HAD FOUR ADDITIONAL GLOBES, WHICH EXTENDED FROM THE COLUMN BELOW THE MAIN GLOBE. THE 'REFUGE' SEATING AREAS ARE ORIGINAL, WHILE THE RAILING IS A LATER ADDITION. - Colorado Street Bridge, Spanning Arroyo Seco at Colorado Boulevard, Pasadena, Los Angeles County, CA

  16. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  17. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field of view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.
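
The generic principle behind stateless orientation from a static MEMS accelerometer (shown here only as a textbook sketch, not the paper's processing pipeline) is to read pitch and roll off the direction of the measured gravity vector:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (degrees) from a static 3-axis accelerometer
    reading: with no motion the sensor measures only gravity, so the
    direction of the 1 g vector gives the mount's orientation.
    One common sign/axis convention; real devices need calibration."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Level mount: gravity entirely on the z axis.
pitch, roll = tilt_from_accel(0.0, 0.0, 1.0)
print(abs(pitch) < 1e-9, abs(roll) < 1e-9)  # → True True

# One arcminute of pitch changes ax by about sin(1') ~ 2.9e-4 g,
# which sets the noise floor the sensor must beat for subarcminute work.
```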

  18. Accurate calculation of the absolute free energy of binding for drug molecules† †Electronic supplementary information (ESI) available. See DOI: 10.1039/c5sc02678d Click here for additional data file.

    PubMed Central

    Aldeghi, Matteo; Heifetz, Alexander; Bodkin, Michael J.; Knapp, Stefan

    2016-01-01

    Accurate prediction of binding affinities has been a central goal of computational chemistry for decades, yet remains elusive. Despite good progress, the required accuracy for use in a drug-discovery context has not been consistently achieved for drug-like molecules. Here, we perform absolute free energy calculations based on a thermodynamic cycle for a set of diverse inhibitors binding to bromodomain-containing protein 4 (BRD4) and demonstrate that a mean absolute error of 0.6 kcal mol–1 can be achieved. We also show that a similar level of accuracy (1.0 kcal mol–1) can be achieved in a pseudo-prospective approach. Bromodomains are epigenetic mark readers that recognize acetylation motifs and regulate gene transcription, and are currently being investigated as therapeutic targets for cancer and inflammation. The unprecedented accuracy offers the exciting prospect that the binding free energy of drug-like compounds can be predicted for pharmacologically relevant targets. PMID:26798447

  19. An R package that automatically collects and archives details for reproducible computing

    PubMed Central

    2014-01-01

    Background It is scientifically and ethically imperative that the results of statistical analysis of biomedical research data be computationally reproducible in the sense that the reported results can be easily recapitulated from the study data. Some statistical analyses are computationally a function of many data files, program files, and other details that are updated or corrected over time. In many applications, it is infeasible to manually maintain an accurate and complete record of all these details about a particular analysis. Results Therefore, we developed the rctrack package that automatically collects and archives read-only copies of program files, data files, and other details needed to computationally reproduce an analysis. Conclusions The rctrack package uses the trace function to temporarily embed detail collection procedures into functions that read files, write files, or generate random numbers, so that no special modifications of the primary R program are necessary. At the conclusion of the analysis, rctrack uses these details to automatically generate a read-only archive of data files, program files, result files, and other details needed to recapitulate the analysis results. Information about this archive may be included as an appendix of a report generated by Sweave or knitr. Here, we describe the usage, implementation, and other features of the rctrack package. The rctrack package is freely available from http://www.stjuderesearch.org/site/depts/biostats/rctrack under the GPL license. PMID:24886202

  20. 5 CFR 370.104 - Length of details.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Length of details. 370.104 Section 370.104 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS INFORMATION TECHNOLOGY EXCHANGE PROGRAM § 370.104 Length of details. (a) Details may be for a period of between 3...

  1. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environmental information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain in understanding system-level behaviors from molecular-level knowledge of biology and in unravelling possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent work on understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on addressing the questions of how cells process chemical information and adapt to a varying environment, and what the thermodynamic limits of key regulatory functions, such as adaptation, are.

  2. Eros details enhanced by computer processing

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NEAR camera's ability to show details of Eros's surface is limited by the spacecraft's distance from the asteroid. That is, the closer the spacecraft is to the surface, the more detail is visible. However, mission scientists regularly use computer processing to squeeze an extra measure of information from returned data. In a technique known as 'superresolution', many images of the same scene acquired at very slightly different camera pointings are carefully overlain and processed to bring out details even smaller than would normally be visible. In this rendition, constructed from 20 image frames acquired Feb. 12, 2000, the images were first enhanced ('high-pass filtered') to accentuate small-scale details. Superresolution was then used to bring out features below the normal resolving ability of the camera.

    Built and managed by The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, NEAR was the first spacecraft launched in NASA's Discovery Program of low-cost, small-scale planetary missions. See the NEAR web page at http://near.jhuapl.edu for more details.

  3. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form, but that form is not convenient for computer-based calculations. The developed equations allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate the physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
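
As an illustration of turning graphical property data into equations usable in computer-based calculations, the sketch below fits a simple Arrhenius-type viscosity correlation by least squares. The data points are hypothetical, not the published MDEA/MEA/DGA values, and the paper's actual correlation forms may differ:

```python
import math

def fit_arrhenius(temps_K, viscosities):
    """Least-squares fit of ln(mu) = A + B/T, a simple Arrhenius-type
    form often used to correlate liquid viscosity against temperature."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(v) for v in viscosities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    B = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    A = my - B * mx
    return A, B

# Hypothetical data points (T in K, viscosity in mPa*s), for illustration.
T = [293.0, 313.0, 333.0, 353.0]
mu = [24.0, 10.5, 5.3, 3.0]
A, B = fit_arrhenius(T, mu)
mu_323 = math.exp(A + B / 323.0)   # interpolated estimate at 323 K
print(B > 0, 5.3 < mu_323 < 10.5)  # → True True
```

Once fitted, such an expression slots directly into absorber or heat exchanger design calculations instead of a chart lookup.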

  4. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
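
Once a reliable AFM height image is in hand, the thickness measurement itself reduces to a step-height estimate between flake and substrate. The sketch below uses synthetic numbers and a simple median-difference estimator; it is a generic illustration, not the protocol established in the paper:

```python
from statistics import median

def step_height(profile, mask):
    """Film thickness from an AFM line profile as the difference
    between the median height on the flake and on the substrate.
    `mask[i]` is True where the tip is on the flake. Medians resist
    spikes from tip artefacts better than means."""
    flake = [h for h, on in zip(profile, mask) if on]
    substrate = [h for h, on in zip(profile, mask) if not on]
    return median(flake) - median(substrate)

# Synthetic profile (nm): ~0.05 nm substrate roughness, a ~0.8 nm step.
profile = [0.02, -0.03, 0.05, 0.83, 0.79, 0.81, 0.84, 0.01, -0.02]
mask    = [False, False, False, True, True, True, True, False, False]
print(round(step_height(profile, mask), 2))  # → 0.81
```

The paper's point is that the profile itself is biased by the adsorbate layer and imaging force, so the estimator is only as good as the image it is fed.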

  5. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  6. Seductive Details in Multimedia Messages

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel

    2011-01-01

    The seductive detail principle asserts that people learn more deeply from a multimedia presentation when interesting but irrelevant adjuncts are excluded rather than included. However, critics could argue that studies about this principle contain methodological problems. The recent experiment attempts to overcome these problems. Students (N = 108)…

  7. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921

  8. Accurate wavelength calibration method for flat-field grating spectrometers.

    PubMed

    Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping

    2011-09-01

    A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865
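
For contrast with the parameter-fitting method described above (which fits physical optical and structural parameters of the spectrometer), the simplest pixel-to-wavelength calibration is a bare least-squares fit anchored on known emission lines. The pixel positions and wavelengths below are hypothetical:

```python
def fit_dispersion(pixels, wavelengths):
    """Least-squares straight-line calibration lambda = l0 + d * pixel,
    using known emission lines as anchors. Real flat-field spectrometers
    need higher-order or physical-parameter models for full accuracy."""
    n = len(pixels)
    mp = sum(pixels) / n
    mw = sum(wavelengths) / n
    d = sum((p - mp) * (w - mw) for p, w in zip(pixels, wavelengths)) / \
        sum((p - mp) ** 2 for p in pixels)
    l0 = mw - d * mp
    return l0, d

# Hypothetical calibration lines: (pixel index, wavelength in nm).
pix = [120, 480, 910]
lam = [253.7, 365.0, 496.5]
l0, d = fit_dispersion(pix, lam)
print(round(d, 4))  # fitted linear dispersion, nm per pixel
```

A physical-parameter model improves on this in two ways the abstract highlights: higher calibration accuracy, and fit residuals that point to specific installation errors rather than being absorbed into polynomial coefficients.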

  9. A Study to Test the Feasibility of Determining Whether Classified Want Ads in Daily Newspapers Are an Accurate Reflection of Local Labor Markets and of Significant Use to Employers and Job Seekers. Final Report.

    ERIC Educational Resources Information Center

    Olympus Research Corp., Salt Lake City, UT.

    The report summarizes findings of a detailed study to test the feasibility of determining whether want ads in daily newspapers are (1) an accurate reflection of local labor markets and (2) of significant use to employers and job seekers. The study found that want ads are a limited source of information about local labor markets. They are of some…

  10. Must Kohn-Sham oscillator strengths be accurate at threshold?

    SciTech Connect

    Yang Zenghui; Burke, Kieron; Faassen, Meta van

    2009-09-21

    The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.

  11. Cost-effective and detailed modelling of compressor manifold vibrations

    SciTech Connect

    Eijk, A.; Egas, G.; Smeulers, J.P.M.

    1996-12-01

    In systems with large reciprocating compressors, so-called compressor manifold vibrations can contribute to fatigue failure of the pipe system. These vibrations are excited by pulsation-induced forces and by forces generated by the compressor. This paper describes an advanced and accurate method for predicting vibration levels and cyclic stresses in critical parts of the piping, based on detailed modelling of the pulsations and compressor parts. Although detailed finite element modelling is applied, the method can compete in ease of use with analytical methods and is far more accurate. The effectiveness of this approach will be demonstrated by a case study in which a detailed compressor manifold vibration analysis has been carried out. The compressor is used for underground storage of natural gas.

  12. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.

  13. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy. PMID:27093722
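
As a baseline for what the fast algorithm approximates, the direct O(S)-per-pixel bilateral filter (Gaussian spatial and Gaussian range kernels) can be sketched as follows; the function and parameter names are ours, not the paper's, and no attempt is made at the O(1) shiftable approximation itself.

```python
import numpy as np

def bilateral_filter(img, sigma_s, sigma_r, radius):
    """Direct bilateral filter: each output pixel is a weighted mean of
    its window, with spatial Gaussian weights times range Gaussian
    weights measured against the center pixel's intensity."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((H, W), dtype=float)
    # Precompute the spatial Gaussian over the (2*radius+1)^2 window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * window).sum() / w.sum()
    return out

# Sanity check: a constant image is a fixed point of the filter.
flat = np.full((8, 8), 5.0)
print(np.allclose(bilateral_filter(flat, 2.0, 10.0, 3), 5.0))
```

The two nested loops make the O(S) cost per pixel explicit, which is exactly the dependence on window size that the paper's N+1 spatial filterings remove.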

  14. How Accurate are SuperCOSMOS Positions?

    NASA Astrophysics Data System (ADS)

    Schaefer, Adam; Hunstead, Richard; Johnston, Helen

    2014-02-01

Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of 0.16 arcsec in right ascension and declination. While overall systematic offsets were < 0.1 arcsec in each hemisphere, both the systematics and scatter were greater in the north.

  15. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  16. Aircraft empennage structural detail design

    NASA Technical Reports Server (NTRS)

    Meholic, Greg; Brown, Rhonda; Hall, Melissa; Harvey, Robert; Singer, Michael; Tella, Gustavo

    1993-01-01

    This project involved the detailed design of the aft fuselage and empennage structure, vertical stabilizer, rudder, horizontal stabilizer, and elevator for the Triton primary flight trainer. The main design goals under consideration were to illustrate the integration of the control systems devices used in the tail surfaces and their necessary structural supports as well as the elevator trim, navigational lighting system, electrical systems, tail-located ground tie, and fuselage/cabin interface structure. Accommodations for maintenance, lubrication, adjustment, and repairability were devised. Weight, fabrication, and (sub)assembly goals were addressed. All designs were in accordance with the FAR Part 23 stipulations for a normal category aircraft.

  17. "Influence Method". Detailed mathematical description

    NASA Astrophysics Data System (ADS)

    Rios, I. J.; Mayer, R. E.

    2015-07-01

    A new method for the absolute determination of nuclear particle flux in the absence of known detector efficiency, the "Influence Method", was recently published (I.J. Rios and R.E. Mayer, Nuclear Instruments & Methods in Physics Research A 775 (2015) 99-104). The method defines an estimator for the population and another estimator for the efficiency. In this article we present a detailed mathematical description which yields the conditions for its application, the probability distributions of the estimators and their characteristic parameters. An analysis of the different cases leads to expressions of the estimators and their uncertainties.

  18. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    SciTech Connect

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-14

Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimation of catalytic rates [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions, we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
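
A minimal KMC loop shows where adsorbate lateral interactions enter the rate calculation. This toy 1D adsorption/desorption lattice with a single nearest-neighbor interaction parameter is our own illustration (all rates and energies are made up), and is far simpler than the graph-theoretical, cluster-expansion treatment in Zacros.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D lattice: empty sites adsorb at a fixed rate; occupied sites
# desorb at a rate reduced by an attractive first-nearest-neighbor
# interaction eps (illustrative energetics, not a real catalytic system).
L, kT = 20, 0.05
k_ads, k_des0, eps = 1.0, 2.0, 0.1
occ = np.zeros(L, dtype=int)        # 0 = empty, 1 = occupied

def rates(occ):
    n_nbr = np.roll(occ, 1) + np.roll(occ, -1)   # occupied neighbors
    return np.where(occ == 0, k_ads,
                    k_des0 * np.exp(-eps * n_nbr / kT))

t = 0.0
for _ in range(5000):
    r = rates(occ)
    R = r.sum()
    t += rng.exponential(1.0 / R)     # KMC time increment
    site = rng.choice(L, p=r / R)     # pick an event with prob. ∝ its rate
    occ[site] ^= 1                    # execute it: flip occupancy
print(t, occ.mean())
```

Replacing the `n_nbr` term with a full cluster-expansion energy difference is precisely the generalization the paper argues is needed for accurate rates.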

  19. Accurate Inventories Of Irrigated Land

    NASA Technical Reports Server (NTRS)

    Wall, S.; Thomas, R.; Brown, C.

    1992-01-01

System for taking land-use inventories overcomes two problems in estimating the extent of irrigated land: only a small portion of a large state is surveyed in a given year, and aerial photographs made on one day of the year do not provide an adequate picture of areas growing more than one crop per year. Developed for the state of California as a guide to controlling, protecting, conserving, and distributing water within the state. Adaptable to any large area in which large amounts of irrigation water are needed for agriculture. A combination of satellite images, aerial photography, and ground surveys yields data for computer analysis. The analyst also consults agricultural statistics, current farm reports, weather reports, and maps. These information sources aid in interpreting patterns, colors, textures, and shapes in Landsat images.

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. Accurate Method for Determining Adhesion of Cantilever Beams

    SciTech Connect

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
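
The abstract does not spell out the beam mechanics that converts a measured detachment length into an adhesion energy. A standard fracture-mechanics relation for a beam adhered over a long attachment length (the S-shaped configuration) is Γ = 3Et³h²/(2s⁴), with E the Young's modulus, t the beam thickness, h the gap under the beam, and s the detached length; treat both the formula choice and the numbers below as illustrative assumptions rather than this paper's data.

```python
# Illustrative values for a polysilicon surface-micromachined beam.
E = 160e9      # Pa, Young's modulus (typical literature value for polysilicon)
t = 2e-6       # m, beam thickness
h = 2e-6       # m, gap between beam and substrate
s = 300e-6     # m, measured detached (crack) length

# S-shaped adhered beam: adhesion energy per unit area at equilibrium.
gamma = 3 * E * t**3 * h**2 / (2 * s**4)   # J/m^2
print(f"adhesion energy ~ {gamma * 1e3:.2f} mJ/m^2")
```

The s⁻⁴ dependence is why a deep energy well (long attachment length, well-defined equilibrium crack) makes the measurement so much more robust than locating the shortest attached beam.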

  6. Accurate method for determining adhesion of cantilever beams

    SciTech Connect

    de Boer, M.P.; Michalske, T.A.

    1999-07-01

Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying. © 1999 American Institute of Physics.

  7. Recovering and preventing loss of detailed memory: differential rates of forgetting for detail types in episodic memory.

    PubMed

    Sekeres, Melanie J; Bonasia, Kyra; St-Laurent, Marie; Pishdadian, Sara; Winocur, Gordon; Grady, Cheryl; Moscovitch, Morris

    2016-02-01

    Episodic memories undergo qualitative changes with time, but little is known about how different aspects of memory are affected. Different types of information in a memory, such as perceptual detail, and central themes, may be lost at different rates. In patients with medial temporal lobe damage, memory for perceptual details is severely impaired, while memory for central details is relatively spared. Given the sensitivity of memory to loss of details, the present study sought to investigate factors that mediate the forgetting of different types of information from naturalistic episodic memories in young healthy adults. The study investigated (1) time-dependent loss of "central" and "peripheral" details from episodic memories, (2) the effectiveness of cuing with reminders to reinstate memory details, and (3) the role of retrieval in preventing forgetting. Over the course of 7 d, memory for naturalistic events (film clips) underwent a time-dependent loss of peripheral details, while memory for central details (the core or gist of events) showed significantly less loss. Giving brief reminders of the clips just before retrieval reinstated memory for peripheral details, suggesting that loss of details is not always permanent, and may reflect both a storage and retrieval deficit. Furthermore, retrieving a memory shortly after it was encoded prevented loss of both central and peripheral details, thereby promoting retention over time. We consider the implications of these results for behavioral and neurobiological models of retention and forgetting. PMID:26773100

  8. Towards accurate and automatic morphing

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Sharkey, Paul M.

    2005-10-01

Image morphing has proved to be a powerful tool for generating compelling and pleasing visual effects and has been widely used in the entertainment industry. However, traditional image morphing methods suffer from a number of drawbacks: feature specification between images is tedious, and the reliance on 2D information ignores the possible advantages to be gained from 3D knowledge. In this paper, we utilize recent advances in computer vision technologies to diminish these drawbacks. Drawing on multi-view geometry theories, we propose a processing pipeline based on three reference images. We first seek a few seed correspondences using robust methods and then recover multi-view geometries from the seeds, through bundle adjustment. Guided by the recovered two- and three-view geometries, a novel line matching algorithm across three views is then deduced, through edge growth, line fitting, and two- and three-view geometry constraints. Corresponding lines in a novel image are then obtained by an image transfer method; finally, matched lines are fed into the traditional morphing methods and novel images are generated. Novel images generated by this pipeline have advantages over traditional morphing methods: they have an inherent 3D foundation and are therefore physically close to real scenes; not only images located along the baseline connecting two reference image centers, but also extrapolated images away from the baseline, are possible; and the whole process can be either wholly automatic, or at least the tedious task of feature specification in traditional morphing methods can be greatly relieved.

  9. Simulating immersed particle collisions: the Devil's in the details

    NASA Astrophysics Data System (ADS)

    Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart

    2015-11-01

    Simulating densely-packed particle-laden flows with any degree of confidence requires accurate modeling of particle-particle collisions. To this end, we investigate a few collision models from the fluids and granular flow communities using sphere-wall collisions, which have been studied by a number of experimental groups. These collisions involve enough complexities--gravity, particle-wall lubrication forces, particle-wall contact stresses, particle-wake interactions--to challenge any collision model. Evaluating the successes and shortcomings of the collision models, we seek improvements in order to obtain more consistent results. We will highlight several implementation details that are crucial for obtaining accurate results.

  10. The Seductive Details Effect in Technology-Delivered Instruction

    ERIC Educational Resources Information Center

    Towler, Annette; Kraiger, Kurt; Sitzmann, Traci; Van Overberghe, Courtney; Cruz, Jaime; Ronen, Eyal; Stewart, David

    2008-01-01

    Seductive details are highly interesting information tangential to course objectives. The inclusion of seductive details generally harms performance on recall tests, but few studies have used multimedia training or investigated effects on performance on recognition tests or transfer tasks. We conducted two studies using computer-based training,…

  11. 5 CFR 370.107 - Details to small business concerns.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Details to small business concerns. 370.107 Section 370.107 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS INFORMATION TECHNOLOGY EXCHANGE PROGRAM § 370.107 Details to small business concerns. (a) The head of...

  12. 5 CFR 370.107 - Details to small business concerns.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Details to small business concerns. 370.107 Section 370.107 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS INFORMATION TECHNOLOGY EXCHANGE PROGRAM § 370.107 Details to small business concerns. (a) The head of...

  13. 5 CFR 370.107 - Details to small business concerns.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Details to small business concerns. 370.107 Section 370.107 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS INFORMATION TECHNOLOGY EXCHANGE PROGRAM § 370.107 Details to small business concerns. (a) The head of each agency must take such actions as may...

  14. 5 CFR 370.107 - Details to small business concerns.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Details to small business concerns. 370.107 Section 370.107 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS INFORMATION TECHNOLOGY EXCHANGE PROGRAM § 370.107 Details to small business concerns. (a) The head of...

  15. Space Telecommunications Radio System (STRS) Architecture, Tutorial Part 2 - Detailed

    NASA Technical Reports Server (NTRS)

    Handler, Louis

    2014-01-01

This detailed presentation of the STRS architecture covers each requirement in the STRS Architecture Standard with examples and supporting information. The purpose is to give a platform provider, application provider, or application integrator a better, more detailed understanding of the STRS Architecture Standard and its use.

  16. 5 CFR 370.107 - Details to small business concerns.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Details to small business concerns. 370... INFORMATION TECHNOLOGY EXCHANGE PROGRAM § 370.107 Details to small business concerns. (a) The head of each... organizations in each calendar year, at least 20 percent are to small business concerns, in accordance with 5...

  17. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Sciences, Inc. Research by the US Air Force, Navy, and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric, and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  18. A Generalized Detailed Balance Relation

    NASA Astrophysics Data System (ADS)

    Ruelle, David

    2016-06-01

Given a system M in a thermal bath we obtain a generalized detailed balance relation for the ratio r = π_τ(K→J)/π_τ(J→K) of the transition probabilities M: J→K and M: K→J in time τ. We assume an active bath, containing solute molecules in metastable states. These molecules may react with M, and the transition J→K occurs through different channels α involving different reactions with the bath. We find that r = Σ_α p^α r^α, where p^α is the probability that channel α occurs, and r^α depends on the amount of heat (more precisely, enthalpy) released to the bath in channel α.

  19. Revealing Small-Scale Details

    NASA Astrophysics Data System (ADS)

    Crawford, Ken

Astrophotographers face amazing challenges in their pursuit of acquiring high quality data and then turning that data into beautiful images that viewers admire. We carefully set up our imaging systems with amazing precision so each exposure contains well-focused, sharp subframes. For every exposure there exists a myriad of variables that can work against us and potentially degrade the details of the celestial objects we record. Some of these variables are within our control and some are not. Even the variables beyond our control can often be compensated for by processing adjustments that maximize the potential of our images. My goal is to share some of the methods I have found useful in recovering those elements that bring clarity to our pictures.

  20. Detailed fuel spray analysis techniques

    NASA Technical Reports Server (NTRS)

    Mularz, E. J.; Bosque, M. A.; Humenik, F. M.

    1983-01-01

Detailed fuel spray analyses are a necessary input to the analytical modeling of the complex mixing and combustion processes which occur in advanced combustor systems. It is anticipated that by controlling fuel-air reaction conditions, combustor temperatures can be better controlled, leading to improved combustion system durability. Thus, a research program is underway to demonstrate the capability to measure liquid droplet size, velocity, and number density throughout a fuel spray and to utilize this measurement technique in laboratory benchmark experiments. The research activities from two contracts and one grant are described with results to date. The experiment to characterize fuel sprays is also described. These experiments and data should be useful for application to and validation of turbulent flow modeling to improve the design systems of future advanced technology engines.

  1. A Generalized Detailed Balance Relation

    NASA Astrophysics Data System (ADS)

    Ruelle, David

    2016-08-01

Given a system M in a thermal bath we obtain a generalized detailed balance relation for the ratio r = π_τ(K→J)/π_τ(J→K) of the transition probabilities M: J→K and M: K→J in time τ. We assume an active bath, containing solute molecules in metastable states. These molecules may react with M, and the transition J→K occurs through different channels α involving different reactions with the bath. We find that r = Σ_α p^α r^α, where p^α is the probability that channel α occurs, and r^α depends on the amount of heat (more precisely, enthalpy) released to the bath in channel α.

  2. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
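
The model-vs-measurement check reported above (Spearman correlations > 0.6) can be reproduced in miniature: rank-correlate modelled against measured field strengths at a set of outdoor locations. The data here are synthetic stand-ins (lognormal field levels with multiplicative model error), not NISMap output.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic "measured" outdoor RF-EMF levels and "modelled" values that
# track them up to multiplicative error (illustrative numbers only).
measured = rng.lognormal(mean=-1.0, sigma=1.0, size=50)          # V/m
modelled = measured * rng.lognormal(mean=0.0, sigma=0.4, size=50)

# Spearman's rho: rank agreement, which is what matters when the model
# is used only to rank exposure levels in an epidemiological study.
rho, pval = spearmanr(modelled, measured)
print(f"Spearman rho = {rho:.2f}")
```

Using a rank correlation rather than, say, RMS error matches the paper's stated goal: ranking exposure levels, not predicting absolute field strengths.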

  3. Recovering and Preventing Loss of Detailed Memory: Differential Rates of Forgetting for Detail Types in Episodic Memory

    ERIC Educational Resources Information Center

    Sekeres, Melanie J.; Bonasia, Kyra; St-Laurent, Marie; Pishdadian, Sara; Winocur, Gordon; Grady, Cheryl; Moscovitch, Morris

    2016-01-01

    Episodic memories undergo qualitative changes with time, but little is known about how different aspects of memory are affected. Different types of information in a memory, such as perceptual detail, and central themes, may be lost at different rates. In patients with medial temporal lobe damage, memory for perceptual details is severely impaired,…

  4. Accurate deterministic solutions for the classic Boltzmann shock profile

    NASA Astrophysics Data System (ADS)

    Yue, Yubei

The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  5. Measuring Fisher Information Accurately in Correlated Neural Populations

    PubMed Central

    Kohn, Adam; Pouget, Alexandre

    2015-01-01

Neural responses are known to be variable. In order to understand how this neural variability constrains behavioral performance, we need to be able to measure the reliability with which a sensory stimulus is encoded in a given population. However, such measures are challenging for two reasons: First, they must take into account noise correlations which can have a large influence on reliability. Second, they need to be as efficient as possible, since the number of trials available in a set of neural recordings is usually limited by experimental constraints. Traditionally, cross-validated decoding has been used as a reliability measure, but it only provides a lower bound on reliability and underestimates reliability substantially in small datasets. We show that, if the number of trials per condition is larger than the number of neurons, there is an alternative, direct estimate of reliability which consistently leads to smaller errors and is much faster to compute. The superior performance of the direct estimator is evident both for simulated data and for neuronal population recordings from macaque primary visual cortex. Furthermore, we propose generalizations of the direct estimator which measure changes in stimulus encoding across conditions and the impact of correlations on encoding and decoding, typically denoted by I_shuffle and I_diag, respectively. PMID:26030735
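
A naive plug-in version of the linear ("direct") Fisher information estimate, I = f'ᵀ C⁻¹ f', can be sketched as follows: estimate the tuning slope f' from the mean-response difference between two nearby stimuli, estimate the noise covariance C from trial-to-trial variability, and combine them. This is our simplified illustration; the direct estimator discussed above additionally applies a bias correction that is not reproduced here, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth for a small synthetic population.
N, T, ds = 10, 200, 0.1                 # neurons, trials/condition, stimulus step
f_prime_true = rng.normal(size=N)       # true tuning slopes
C = np.eye(N) * 0.5                     # true (diagonal) noise covariance

# Trial-by-trial responses to two nearby stimulus values, ds apart.
resp1 = rng.multivariate_normal(np.zeros(N), C, size=T)
resp2 = rng.multivariate_normal(f_prime_true * ds, C, size=T)

# Plug-in estimates of the slope and covariance, then I = f'^T C^{-1} f'.
f_prime_hat = (resp2.mean(axis=0) - resp1.mean(axis=0)) / ds
C_hat = 0.5 * (np.cov(resp1.T) + np.cov(resp2.T))
I_hat = f_prime_hat @ np.linalg.solve(C_hat, f_prime_hat)
I_true = f_prime_true @ np.linalg.solve(C, f_prime_true)
print(I_hat, I_true)
```

The plug-in estimate is biased upward for finite trial counts, which is precisely why the abstract's requirement that trials exceed neurons, and its bias-corrected estimator, matter in practice.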

  6. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyped clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  7. The ACPMAPS system: A detailed overview

    SciTech Connect

    Fischler, M.

    1992-07-01

    This paper describes the ACPMAPS computing system -- its purpose, its hardware architecture, how the system is used, and relevant programming paradigms and concepts. Features of the hardware and software will be discussed in some detail, both quantitative and qualitative. This should give some perspective as to the suitability of the ACPMAPS system for various classes of applications, and as to where this system stands in the spectrum of today's supercomputers. The ACPMAPS project at Fermilab was initiated in 1987 as a collaboration between the Advanced Computer Program (now the Computer R&D department) and the lattice gauge physicists in the Theory department. ACPMAPS is an acronym for Advanced Computer Program Multiple Array Processor System -- this acronym is no longer accurate, but the name has stuck. Although research physics computations were done on ACPMAPS as early as 1989, the full-scale system was commissioned as a reliable physics tool in early 1991. The original ACPMAPS was a 5 Gflop (peak) system. An upgrade by a factor of ten in computing power and memory size, achieved by substituting a new CPU board, will occur during early 1991 -- this is referred to as the new ACPMAPS Upgrade or 50 GF ACPMAPS. The appellation ACPMAPS II has also been applied to the upgrade; this is somewhat of a misnomer, since only one of five major components was changed.

  9. Detailed Aerosol Characterization using Polarimetric Measurements

    NASA Astrophysics Data System (ADS)

    Hasekamp, Otto; di Noia, Antonio; Stap, Arjen; Rietjens, Jeroen; Smit, Martijn; van Harten, Gerard; Snik, Frans

    2016-04-01

    Anthropogenic aerosols are believed to cause the second most important anthropogenic forcing of climate change after greenhouse gases. In contrast to the climate effect of greenhouse gases, which is understood relatively well, the negative forcing (cooling effect) caused by aerosols represents the largest reported uncertainty in the most recent assessment of the Intergovernmental Panel on Climate Change (IPCC). To reduce the large uncertainty on the aerosol effects on cloud formation and climate, accurate satellite measurements of aerosol optical properties (optical thickness, single scattering albedo, phase function) and microphysical properties (size distribution, refractive index, shape) are essential. There is growing consensus in the aerosol remote sensing community that multi-angle measurements of intensity and polarization are essential to unambiguously determine all relevant aerosol properties. This presentation addresses the different aspects of polarimetric remote sensing of atmospheric aerosols, including retrieval algorithm development, validation, and data needs for climate and air quality applications. During the past years, retrieval algorithms have been developed at SRON Netherlands Institute for Space Research that make full use of the capabilities of polarimetric measurements. We will show results of detailed aerosol properties from ground-based (groundSPEX), airborne (NASA Research Scanning Polarimeter), and satellite (POLDER) measurements. We will also discuss observational needs for future instrumentation in order to improve our understanding of the role of aerosols in climate change and air quality.

  10. A new debate for Turkish physicians: e-detailing.

    PubMed

    Ventura, Keti; Baybars, Miray; Dedeoglu, Ayla Ozhan

    2012-01-01

    The study presents an empirical analysis of the attitudes of Turkish physicians towards e-detailing practices compared to face-to-face detailing. The findings reveal that although physicians have positive attitudes toward e-detailing, on some points they are still undecided and/or have doubts. The structural model revealed that affect, convenience, and informative content influence their attitude in a positive manner, whereas personal interaction was found to be a negative factor. Physicians' age and the frequency of calls received from representatives act as moderators. The present study can be seen as an addition to pharmaceutical marketing, an under-researched field in Turkey, and to e-detailing in particular. PMID:23210675

  11. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
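
    The step from a measured thermal velocity dispersion to a value of Boltzmann's constant rests on a standard relation (textbook Doppler broadening, stated here for context rather than taken from this abstract): the Gaussian component of the lineshape has a frequency dispersion set by the one-dimensional thermal velocity spread of atoms of mass m at temperature T,

```latex
% Doppler (Gaussian) width of the line in terms of thermal motion:
\sigma_\nu = \frac{\nu_0}{c}\,\sigma_v,
\qquad
\sigma_v = \sqrt{\frac{k_B T}{m}}
\;\;\Longrightarrow\;\;
k_B = \frac{m c^2}{T}\,\frac{\sigma_\nu^2}{\nu_0^2}.
```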

  13. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  14. Novel methods for accurate identification, isolation, and genomic analysis of symptomatic microenvironments in atherosclerotic arteries.

    PubMed

    Slevin, Mark; Baldellou, Maribel; Hill, Elspeth; Alexander, Yvonne; McDowell, Garry; Murgatroyd, Christopher; Carroll, Michael; Degens, Hans; Krupinski, Jerzy; Rovira, Norma; Chowdhury, Mohammad; Serracino-Inglott, Ferdinand; Badimon, Lina

    2014-01-01

    A challenge facing surgeons is identification and selection of patients for carotid endarterectomy or coronary artery bypass/surgical intervention. While some patients with atherosclerosis develop unstable plaques liable to undergo thrombosis, others form more stable plaques and are asymptomatic. Identification of the cellular signaling mechanisms associated with production of the inflammatory, hemorrhagic lesions of mature heterogenic plaques will help significantly in our understanding of the differences in microenvironment associated with development of regions susceptible to rupture and thrombosis and may help to predict the risk of plaque rupture and guide surgical intervention to patients who will most benefit. Here, we demonstrate detailed and novel methodologies for successful and, more importantly, accurate and reproducible extraction, sampling, and analysis of micro-regions in stable and unstable coronary/carotid arteries. This information can be applied to samples from other origins and so should be useful for scientists working with micro-isolation techniques in all fields of biomedical science. PMID:24510873

  15. Accurate oscillator strengths for ultraviolet lines of Ar I - Implications for interstellar material

    NASA Technical Reports Server (NTRS)

    Federman, S. R.; Beideck, D. J.; Schectman, R. M.; York, D. G.

    1992-01-01

    Analysis of absorption from interstellar Ar I in lightly reddened lines of sight provides information on the warm and hot components of the interstellar medium near the sun. The details of the analysis are limited by the quality of the atomic data. Accurate oscillator strengths for the Ar I lines at 1048 and 1067 A and the astrophysical implications are presented. From lifetimes measured with beam-foil spectroscopy, an f-value for 1048 A of 0.257 +/- 0.013 is obtained. Through the use of a semiempirical formalism for treating singlet-triplet mixing, an oscillator strength of 0.064 +/- 0.003 is derived for 1067 A. Because of the accuracy of the results, the conclusions of York and colleagues from spectra taken with the Copernicus satellite are strengthened. In particular, for interstellar gas in the solar neighborhood, argon has a solar abundance, and the warm, neutral material is not pervasive.

  16. Quantification of the Information Limit of Transmission Electron Microscopes

    SciTech Connect

    Barthel, J.; Thust, A.

    2008-11-14

    The resolving power of high-resolution transmission electron microscopes is characterized by the information limit, which reflects the size of the smallest object detail observable with a particular instrument. We introduce a highly accurate measurement method for the information limit, which is suitable for modern aberration-corrected electron microscopes. An experimental comparison with the traditionally applied Young's fringe method yields severe discrepancies and confirms theoretical considerations according to which the Young's fringe method does not reveal the information limit.
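
    For context, one common parameterization (a standard high-resolution TEM result, not taken from this abstract) ties the information limit to the temporal-coherence damping envelope produced by the focal spread Delta:

```latex
% Temporal-coherence (focal-spread) damping envelope at spatial frequency g:
E_t(g) = \exp\!\left[-\tfrac{1}{2}\,\pi^2\lambda^2\Delta^2 g^4\right]
% Information limit conventionally defined where E_t falls to e^{-2}:
g_{\max} = \sqrt{\frac{2}{\pi\lambda\Delta}},
\qquad
d_{\mathrm{info}} = \frac{1}{g_{\max}} = \sqrt{\frac{\pi\lambda\Delta}{2}}.
```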

  17. Surprising the Writer: Discovering Details through Research and Reading.

    ERIC Educational Resources Information Center

    Broaddus, Karen; Ivey, Gay

    2002-01-01

    Describes how students parallel the process of author Megan McDonald in conducting research and collecting information to provide ideas for the form and content of their writing. Notes that guiding students to record and organize information in a graphic format helps them to transfer those interesting details to new types of writing. (SG)

  18. Accurate mapping of forest types using dense seasonal Landsat time-series

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaolin; Liu, Desheng

    2014-10-01

    An accurate map of forest types is important for proper usage and management of forestry resources. Medium resolution satellite images (e.g., Landsat) have been widely used for forest type mapping because they are able to cover large areas more efficiently than the traditional forest inventory. However, the results of a detailed forest type classification based on these images are still not satisfactory. To improve forest mapping accuracy, this study proposed an operational method to derive detailed forest types from dense Landsat time-series, with or without incorporating topographic information provided by a DEM. This method integrated feature selection and a training-sample-adding procedure into a hierarchical classification framework. The proposed method has been tested in Vinton County of southeastern Ohio. The detailed forest types include pine forest, oak forest, and mixed-mesophytic forest. The proposed method was trained and validated using ground samples from field plots. The three forest types were classified with an overall accuracy of 90.52% using dense Landsat time-series, while topographic information only slightly improved the accuracy, to 92.63%. Moreover, the comparison between results of using Landsat time-series and a single image reveals that time-series data can largely improve the accuracy of forest type mapping, indicating the importance of phenological information contained in multi-seasonal images for discriminating different forest types. Thanks to the zero cost of all input remotely sensed datasets and ease of implementation, this approach has the potential to be applied to map forest types at regional or global scales.
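
    The "feature selection, then classify" pipeline described above can be sketched as follows. This is a simplified stand-in, not the paper's method: the data are synthetic, the F-score-style band ranking and nearest-centroid classifier are illustrative choices, and the hierarchical structure and training-sample-adding loop are omitted:

```python
import numpy as np

# Hypothetical stand-in for the training data: rows are pixels, columns are
# stacked multi-seasonal Landsat band values.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 24))
y = rng.integers(0, 3, size=300)      # 0=pine, 1=oak, 2=mixed-mesophytic
X[y == 0, :8] += 2.0                  # give each class a seasonal signature
X[y == 1, 8:16] += 2.0

# Step 1: feature selection -- keep the bands whose between-class spread is
# largest relative to their within-class spread (a crude F-score).
overall = X.mean(axis=0)
between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in range(3))
within = sum(X[y == c].var(axis=0) for c in range(3))
keep = np.argsort(between / within)[-10:]
X_sel = X[:, keep]

# Step 2: nearest-centroid classification over the selected features.
centroids = np.stack([X_sel[y == c].mean(axis=0) for c in range(3)])
dists = np.linalg.norm(X_sel[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
acc = (pred == y).mean()
```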

  19. A Detailed Chemical Kinetic Model for TNT

    SciTech Connect

    Pitz, W J; Westbrook, C K

    2005-01-13

    A detailed chemical kinetic mechanism for 2,4,6-trinitrotoluene (TNT) has been developed to explore problems of explosive performance and soot formation during the destruction of munitions. The TNT mechanism treats only gas-phase reactions. Reactions for the decomposition of TNT and for the consumption of intermediate products formed from TNT are assembled based on information from the literature and on current understanding of aromatic chemistry. Thermodynamic properties of intermediate and radical species are estimated by group additivity. Reaction paths are developed based on similar paths for aromatic hydrocarbons. Reaction-rate constant expressions are estimated from the literature and from analogous reactions where the rate constants are available. The detailed reaction mechanism for TNT is added to existing reaction mechanisms for RDX and for hydrocarbons. Computed results show the effect of oxygen concentration on the amount of soot precursors formed in the combustion of RDX and TNT mixtures in N2/O2 mixtures.

  20. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur-containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful to the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high-temperature list of line positions and intensities for H2(32)S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially, in analysing the spectra of extrasolar planets, and remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm-1 with rotational states up to J = 85, and included 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable

  1. Enabling Detailed Energy Analyses via the Technology Performance Exchange: Preprint

    SciTech Connect

    Studer, D.; Fleming, K.; Lee, E.; Livingood, W.

    2014-08-01

    One of the key tenets to increasing adoption of energy efficiency solutions in the built environment is improving confidence in energy performance. Current industry practices make extensive use of predictive modeling, often via the use of sophisticated hourly or sub-hourly energy simulation programs, to account for site-specific parameters (e.g., climate zone, hours of operation, and space type) and arrive at a performance estimate. While such methods are highly precise, they invariably provide less than ideal accuracy due to a lack of high-quality, foundational energy performance input data. The Technology Performance Exchange was constructed to allow the transparent sharing of foundational, product-specific energy performance data, and leverages significant, external engineering efforts and a modular architecture to efficiently identify and codify the minimum information necessary to accurately predict product energy performance. This strongly-typed database resource represents a novel solution to a difficult and established problem. One of the most exciting benefits is the way in which the Technology Performance Exchange's application programming interface has been leveraged to integrate contributed foundational data into the Building Component Library. Via a series of scripts, data is automatically translated and parsed into the Building Component Library in a format that is immediately usable to the energy modeling community. This paper (1) presents a high-level overview of the project drivers and the structure of the Technology Performance Exchange; (2) offers a detailed examination of how technologies are incorporated and translated into powerful energy modeling code snippets; and (3) examines several benefits of this robust workflow.

  2. Software For Drawing Design Details Concurrently

    NASA Technical Reports Server (NTRS)

    Crosby, Dewey C., III

    1990-01-01

    Software system containing five computer-aided-design programs enables more than one designer to work on same part or assembly at same time. Reduces time necessary to produce design by implementing concept of parallel or concurrent detailing, in which all detail drawings documenting three-dimensional model of part or assembly produced simultaneously, rather than sequentially. Keeps various detail drawings consistent with each other and with overall design by distributing changes in each detail to all other affected details.

  3. Detailed Globes Enhance Education and Recreation

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Orbis World Globes creates inflatable globes, EarthBalls, in many sizes that depict Earth as it is seen from space, complete with atmospheric cloud cover. Orbis designs and produces the most visually authentic replicas of Earth ever created, and NASA took notice of Orbis globes and employed a 16-inch diameter EarthBall for an educational film it made aboard the STS-45 shuttle mission. Orbis later collaborated with NASA to create two 16-foot diameter world globes for display at the 2002 Olympic Winter Games in Salt Lake City, using more detailed satellite imagery. The satellite image now printed on all Orbis globes displays 1-kilometer resolution and is 21,600 by 43,200 pixels in size, and Orbis globes are otherwise meteorologically accurate, though the cloud cover has been slightly reduced in order for most of the landforms to be visible. Orbis also developed the exclusive NightGlow Cities feature, enabling EarthBalls to display the world's cities as they appear as the Earth revolves from daylight into night. Orbis inflatable globes are available in sizes from 1 to 100 feet in diameter, with the most common being the standard 16-inch and 1-meter diameter EarthBalls. Applications include educational uses from preschools to universities, games, and a variety of display purposes at conferences, trade shows, festivals, concerts, and parades. A 16-foot diameter Orbis globe was exhibited at the United Nations' World Urban Forum, in Vancouver, Canada; the Space 2006 conference, in San Jose, California; and the X-Prize Cup Personal Spaceflight Exposition in Las Cruces, New Mexico.

  4. Study on detailed geological modelling for fluvial sandstone reservoir in Daqing oil field

    SciTech Connect

    Zhao Hanqing; Fu Zhiguo; Lu Xiaoguang

    1997-08-01

    Guided by sedimentation theory and knowledge of modern and ancient fluvial deposition, and utilizing the abundant information on sedimentary series, microfacies types, and petrophysical parameters from the well-logging curves of thousands of closely spaced wells located in a large area, a new method for establishing detailed sedimentation and permeability distribution models for fluvial reservoirs has been developed successfully. This study aimed at the geometry and internal architecture of sandbodies according to their hierarchical levels of heterogeneity, building up sedimentation and permeability distribution models of fluvial reservoirs and describing reservoir heterogeneity in the light of fluvial sedimentary rules. The results and methods obtained in outcrop and modern sedimentation studies have successfully supported this work. Taking advantage of this method, the major producing layers (PI1-2), which have been considered as heterogeneous and thick fluvial reservoirs extending widely laterally, are researched in detail. These layers are subdivided into single sedimentary units vertically and the microfacies are identified horizontally. Furthermore, a complex system is recognized according to its hierarchical levels from large to small: meander belt, single-channel sandbody, meander scroll, point bar, and lateral accretion bodies of point bars. The achieved results improve the description of the areal distribution of point-bar sandbodies and provide an accurate and detailed framework for establishing a high-resolution predictive model. By using geostatistical techniques, the method also plays an important role in searching for enriched zones of residual oil distribution.

  5. Cornice Detail of Rake, Cornice Detail of Eave, Wood DoubleHung ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Cornice Detail of Rake, Cornice Detail of Eave, Wood Double-Hung Window Details, Wood Door Details - Boxley Grist Mill, Boxley vicinity on State Route 43, Buffalo National River, Ponca, Newton County, AR

  6. Validation of a fast and accurate chromatographic method for detailed quantification of vitamin E in green leafy vegetables.

    PubMed

    Cruz, Rebeca; Casal, Susana

    2013-11-15

    Vitamin E analysis in green vegetables is performed by an array of different methods, making it difficult to compare published data or choosing the adequate one for a particular sample. Aiming to achieve a consistent method with wide applicability, the current study reports the development and validation of a fast micro-method for quantification of vitamin E in green leafy vegetables. The methodology uses solid-liquid extraction based on the Folch method, with tocol as internal standard, and normal-phase HPLC with fluorescence detection. A large linear working range was confirmed, being highly reproducible, with inter-day precisions below 5% (RSD). Method sensitivity was established (below 0.02 μg/g fresh weight), and accuracy was assessed by recovery tests (>96%). The method was tested in different green leafy vegetables, evidencing diverse tocochromanol profiles, with variable ratios and amounts of α- and γ-tocopherol, and other minor compounds. The methodology is adequate for routine analyses, with a reduced chromatographic run (<7 min) and organic solvent consumption, and requires only standard chromatographic equipment available in most laboratories. PMID:23790900

  7. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASRs. PMID:24605060
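
    The grid-ID idea above can be sketched in a few lines: users report only the ID of the grid cell they occupy, and the cloaking step grows a square block of cells around the querying user until at least K users fall inside. This is a minimal illustration of the general technique, not the paper's two algorithms; all names are hypothetical:

```python
from collections import Counter

def cloak(user_cell, reported_cells, k):
    """Grid-ID-based K-anonymity cloaking sketch.

    user_cell: (row, col) grid cell of the querying user.
    reported_cells: (row, col) cells reported by all users -- no exact
    coordinates are ever exposed. Returns the smallest square block of
    cells around user_cell containing at least k users (the ASR).
    """
    counts = Counter(reported_cells)
    r0, c0 = user_cell
    radius = 0
    while True:
        block = [(r, c)
                 for r in range(r0 - radius, r0 + radius + 1)
                 for c in range(c0 - radius, c0 + radius + 1)]
        if sum(counts[cell] for cell in block) >= k:
            return block
        radius += 1

# Example: five users spread over a grid, K = 3
cells = [(0, 0), (0, 1), (1, 1), (3, 3), (5, 5)]
asr = cloak((0, 0), cells, k=3)   # 3x3 block capturing three users
```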

  9. Acoustic emission monitoring for assessment of steel bridge details

    SciTech Connect

    Kosnik, D. E.; Corr, D. J.; Hopwood, T.

    2011-06-23

    Acoustic emission (AE) testing was deployed on details of two large steel Interstate Highway bridges: one cantilever through-truss and one trapezoidal box girder bridge. Quantitative measurements of activity levels at known and suspected crack locations were made by monitoring AE under normal service loads (e.g., live traffic and wind). AE indications were used to direct application of radiography, resulting in identification of a previously unknown flaw, and to inform selection of a retrofit detail.

  10. Accurate dynamics in an azimuthally-symmetric accelerating cavity

    NASA Astrophysics Data System (ADS)

    Appleby, R. B.; Abell, D. T.

    2015-02-01

    We consider beam dynamics in azimuthally-symmetric accelerating cavities, using the EMMA FFAG cavity as an example. By fitting a vector potential to the field map, we represent the linear and non-linear dynamics using truncated power series and mixed-variable generating functions. The analysis provides an accurate model for particle trajectories in the cavity, reveals potentially significant and measurable effects on the dynamics, and shows differences between cavity focusing models. The approach provides a unified treatment of transverse and longitudinal motion, and facilitates detailed map-based studies of motion in complex machines like FFAGs.

  11. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  12. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  13. Accurate and efficient spin integration for particle accelerators

    NASA Astrophysics Data System (ADS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
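    The quaternion representation mentioned above can be illustrated with a minimal example. This is not gpuSpinTrack's implementation, just a self-contained sketch of why unit quaternions are convenient for composing spin rotations: composition is a four-component product, and no 3x3 matrix orthogonality needs to be maintained.

```python
import math

# Minimal illustration (not gpuSpinTrack's API): spin rotations composed as
# unit quaternions, then applied to a spin vector via v' = q v q*.
def quat_from_axis_angle(axis, angle):
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), ax*s, ay*s, az*s)

def quat_mul(q, p):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    # Apply the rotation: sandwich the pure quaternion (0, v) between q and q*.
    w, x, y, z = q
    qv = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return qv[1:]

# Two quarter-turns about z compose into a half-turn: spin (1,0,0) -> (-1,0,0).
q = quat_from_axis_angle((0, 0, 1), math.pi / 2)
spin = rotate(quat_mul(q, q), (1.0, 0.0, 0.0))
```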

  14. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  15. Satellite-based damage mapping following the 2006 Indonesia earthquake—How accurate was it?

    NASA Astrophysics Data System (ADS)

    Kerle, Norman

    2010-12-01

    The Yogyakarta area in Indonesia suffered a devastating earthquake on 27 May 2006. There was an immediate international response, and the International Charter "Space and Major Disasters" was activated, leading to a rapid production of image-based damage maps and other assistance. Most of the acquired images were processed by UNOSAT and DLR-ZKI, while substantial damage mapping also occurred on the ground. This paper assesses the accuracy and completeness of the damage maps produced based on Charter data, using ground damage information collected during an extensive survey by Yogyakarta's Gadjah Mada University in the weeks following the earthquake and that has recently become available. More than 54,000 buildings or their remains were surveyed, resulting in an exceptional validation database. The UNOSAT damage maps outlining clusters of severe damage are very accurate, while earlier, more detailed results underestimated damage and missed larger areas. Damage maps produced by DLR-ZKI, using a damage-grid approach, were found to underestimate the extent and severity of the devastation. Both mapping results also suffer from limited image coverage and extensive cloud contamination. The ground mapping gives a more accurate picture of the extent of the damage, but also illustrates the challenge of mapping a vast area. The paper concludes with a discussion on ways to improve Charter-based damage maps by integration of local knowledge, and to create a wider impact through generation of customised mapping products using web map services.

  16. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    PubMed Central

    Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan

    2015-01-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limit the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally consistent, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681

  18. 5 CFR 532.411 - Details.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Details. 532.411 Section 532.411 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Pay Administration § 532.411 Details. An appropriated fund employee detailed to a position other than the position...

  19. 49 CFR 176.102 - Supervisory detail.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Supervisory detail. 176.102 Section 176.102... Requirements for Class 1 (Explosive) Materials § 176.102 Supervisory detail. (a) Except as provided in paragraph (c) of this section, the COTP may assign a USCG supervisory detail to any vessel to supervise...

  20. Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited

    NASA Astrophysics Data System (ADS)

    Xiang, Gang; Kreinovich, Vladik

    2013-02-01

    In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a widespread misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation with intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computations techniques result in estimates which are much more accurate. We overview such advanced interval computations techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computations techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
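    The over-wide-estimate phenomenon the survey describes is easy to reproduce. The sketch below (an illustration chosen for this summary, not taken from the paper) evaluates f(x) = x - x² over [0, 1]: naive operation-by-operation interval arithmetic treats the two occurrences of x as independent and returns [-1, 1], while even a simple refinement, subdividing the input interval, gets close to the true range [0, 0.25]. Subdivision here stands in for the more sophisticated techniques the survey covers.

```python
# Naive interval arithmetic vs. a simple refinement on f(x) = x - x^2, x in [0,1].
def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def mul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def f_naive(x):
    # Each occurrence of x is treated independently -> over-wide enclosure.
    return sub(x, mul(x, x))

def f_subdivided(x, n=64):
    # Split the interval; the union of piecewise enclosures is far tighter.
    lo, hi = x
    step = (hi - lo) / n
    pieces = [f_naive((lo + i*step, lo + (i+1)*step)) for i in range(n)]
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))

wide = f_naive((0.0, 1.0))        # much wider than the true range [0, 0.25]
tight = f_subdivided((0.0, 1.0))  # close to the true range
```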

  1. Developments in remote sensing technology enable more detailed urban flood risk analysis.

    NASA Astrophysics Data System (ADS)

    Denniss, A.; Tewkesbury, A.

    2009-04-01

    Spaceborne remote sensors have been allowing us to build up a profile of planet earth for many years. With each new satellite launched we see the capabilities improve: new bands of data, higher resolution imagery, the ability to derive better elevation information. Combining these geospatial data to create land-cover and usage maps helps inform catastrophe modelling systems. From Landsat 30m resolution to 2.44m QuickBird multispectral imagery; from 1m radar data collected by TerraSAR-X, which enables rapid tracking of the rise and fall of a flood event and will shortly be joined by a twin satellite enabling elevation data creation; we are spoilt for choice in available data. However, just what is cost effective? It is always a question of choosing the appropriate level of input data detail for modelling, depending on the value of the risk. In the summer of 2007, the cost of the flooding in the UK was approximately £3bn and affected over 58,000 homes and businesses. When it comes to flood risk, we have traditionally considered rising river levels and surge tides, but with climate change and variations in our own construction behaviour, there are other factors to be taken into account. During those summer 2007 events, the Environment Agency suggested that around 70% of the properties damaged were the result of pluvial flooding, where high localised rainfall events overload localised drainage infrastructure, causing widespread flooding of properties and infrastructure. Creating a risk model able to simulate such an event requires much more accurate source data than satellite or radar can provide. Because these flood events cause considerable damage within relatively small, complex urban environments, new high-resolution remote sensing techniques have to be applied to better model these events. Detailed terrain data of England and Wales, plus cities in Scotland, have been produced by combining terrain measurements from the latest

  2. Optoelectronic pH Meter: Further Details

    NASA Technical Reports Server (NTRS)

    Jeevarajan, Antony S.; Anderson, Mejody M.; Macatangay, Ariel V.

    2009-01-01

    A collection of documents provides further detailed information about an optoelectronic instrument that measures the pH of an aqueous cell-culture medium to within 0.1 unit in the range from 6.5 to 7.5. The instrument at an earlier stage of development was reported in Optoelectronic Instrument Monitors pH in a Culture Medium (MSC-23107), NASA Tech Briefs, Vol. 28, No. 9 (September 2004), page 4a. To recapitulate: The instrument includes a quartz cuvette through which the medium flows as it is circulated through a bioreactor. The medium contains some phenol red, which is an organic pH-indicator dye. The cuvette sits between a light source and a photodetector. [The light source in the earlier version comprised red (625 nm) and green (558 nm) light-emitting diodes (LEDs); the light source in the present version comprises a single green- (560 nm)-or-red (623 nm) LED.] The red and green are repeatedly flashed in alternation. The responses of the photodiode to the green and red are processed electronically to obtain the ratio between the amounts of green and red light transmitted through the medium. The optical absorbance of the phenol red in the green light varies as a known function of pH. Hence, the pH of the medium can be calculated from the aforesaid ratio.
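    The ratio-to-pH step described above can be sketched as follows. The structure (Beer-Lambert absorbance plus a Henderson-Hasselbalch-style inversion for an indicator dye) is standard, but the calibration constants below, the effective pKa and the limiting green/red ratios, are hypothetical placeholders, not the instrument's values.

```python
import math

# Sketch of converting a green/red transmission ratio to pH for a phenol red
# indicator. PKA, R_ACID and R_BASE are assumed calibration constants.
PKA = 7.6                     # assumed effective pKa of phenol red in the medium
R_ACID, R_BASE = 0.05, 2.5    # assumed ratios for the pure acid/base forms

def absorbance(i_transmitted, i_reference):
    """Beer-Lambert absorbance from transmitted vs. reference intensity."""
    return -math.log10(i_transmitted / i_reference)

def ph_from_ratio(r):
    """Invert a Henderson-Hasselbalch-style calibration curve:
    pH = pKa + log10((r - r_acid) / (r_base - r))."""
    return PKA + math.log10((r - R_ACID) / (R_BASE - r))
```

In practice the instrument would fit these constants from buffer standards spanning the 6.5 to 7.5 range it targets.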

  3. Detailed observations of the source of terrestrial narrowband electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.

    1982-01-01

    Detailed observations are presented of a region near the terrestrial plasmapause where narrowband electromagnetic radiation (previously called escaping nonthermal continuum radiation) is being generated. These observations show a direct correspondence between the narrowband radio emissions and electron cyclotron harmonic waves near the upper hybrid resonance frequency. In addition, electromagnetic radiation propagating in the Z-mode is observed in the source region which provides an extremely accurate determination of the electron plasma frequency and, hence, density profile of the source region. The data strongly suggest that electrostatic waves and not Cerenkov radiation are the source of the banded radio emissions and define the coupling which must be described by any viable theory.

  4. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  5. Processing of airborne lidar bathymetry data for detailed sea floor mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael

    2014-10-01

    Airborne bathymetric lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow water areas. With advanced processing of the lidar data, detailed mapping of the sea floor with various objects and vegetation is possible. This mapping capability has a wide range of applications, including detection of mine-like objects, mapping marine natural resources and fish spawning areas, as well as supporting the fulfillment of national and international environmental monitoring directives. Although data sets collected by subsea systems give a high degree of credibility, they can benefit from a combination with lidar for surveying and monitoring larger areas. With lidar-based sea floor maps containing information on substrate and attached vegetation, field investigations become more efficient. Field data collection can be directed into selected areas and even focused on identification of specific targets detected in the lidar map. The purpose of this work is to describe the detection and classification performance for sea floor objects and vegetation achieved by a lidar seeing through the water column. With both experimental and simulated data we examine the lidar signal characteristics depending on bottom depth, substrate type, and vegetation. The experimental evaluation is based on lidar data from field-documented sites, where field data were taken from underwater video recordings. To be able to accurately extract the information from the received lidar signal, it is necessary to account for the air-water interface and the water medium. The information content is hidden in the lidar depth data, also referred to as point data, and also in the shape of the received lidar waveform. The returned lidar signal is affected by environmental factors such as bottom depth and water turbidity, as well as lidar system factors such as laser beam footprint size and sounding density.

  6. Towards cleaner combustion engines through groundbreaking detailed chemical kinetic models

    PubMed Central

    Battin-Leclerc, Frédérique; Blurock, Edward; Bounaceur, Roda; Fournet, René; Glaude, Pierre-Alexandre; Herbinet, Olivier; Sirjean, Baptiste; Warth, V.

    2013-01-01

    In the context of limiting the environmental impact of transportation, this paper reviews new directions being followed in the development of more predictive and more accurate detailed chemical kinetic models for the combustion of fuels. In the first part, the performance of current models, especially in terms of the prediction of pollutant formation, is evaluated. In the subsequent parts, recent methods and ways to improve these models are described. Emphasis is placed on the development of detailed models based on elementary reactions, on the production of the related thermochemical and kinetic parameters, and on the experimental techniques available to produce the data necessary to evaluate model predictions under well-defined conditions. PMID:21597604

  7. Restructuring: the devil is in the details

    SciTech Connect

    Hirst, E.; Kirby, B.

    1995-12-01

    The heated and prolific debates over the future structure and operation of the U.S. electricity industry are long on policy and short on specifics. Ultimately, decisions on industry structure will be based largely on judgement, based in turn on incomplete facts and analysis. Decision makers should at least have the best information one is able to provide them if one expects them to make sound decisions. This article demonstrates that the details and specific implementation plans for proposals are important - perhaps as important as the policy issues themselves. In a few cases, the details are outcome-determining, in that they will help policy makers decide on an overall direction. In many cases, the details can be worked out after overall policies are decided. In almost all cases, however, the details are essential to the creation and operation of an economically efficient, reliable, environmentally benign, and socially equitable U.S. electric system.

  8. 21. DETAIL OF AREA WHERE FIRST FLOOR PASSAGEWAY USED TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. DETAIL OF AREA WHERE FIRST FLOOR PASSAGEWAY USED TO BE SHOWING VERTICAL WOOD MOLDING COVERING JOINT WHERE PARTITION USED TO BE (LEFT), TELLER'S WINDOW LINKING PASSAGEWAY WITH INFORMATION BOOTH (CENTER), AND TYPICAL FURNITURE. VIEW TO EAST. - Boise Project, Boise Project Office, 214 Broadway, Boise, Ada County, ID

  9. 49 CFR 171.16 - Detailed hazardous materials incident reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Detailed hazardous materials incident reports. 171.16 Section 171.16 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS GENERAL INFORMATION, REGULATIONS, AND...

  10. Site Plan, Brief History, Site Elevation, Main Gate Detail, Southern ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Site Plan, Brief History, Site Elevation, Main Gate Detail, Southern Live Oak (Quercus Virginiana) Information - Main Gate and Auburn Oaks at Toomer's Corner, Entrance to Auburn University's Campus, Intersection of West Magnolia Avenue and South College Street, Auburn, Lee County, AL

  11. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  12. Cloud Imagers Offer New Details on Earth's Health

    NASA Technical Reports Server (NTRS)

    2009-01-01

    , limited scientists' ability to acquire detailed information about individual particles. Now, experiments with specialized equipment can be flown on standard jets, making it possible for researchers to monitor and more accurately anticipate changes in Earth's atmosphere and weather patterns.

  13. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    SciTech Connect

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  14. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083
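    The threshold step alone can be sketched as below. This is an illustration of the idea, not the paper's method: the intensity thresholds are assumptions, and the subsequent depth-based color variance analysis and local refinement are omitted.

```python
# Hedged sketch of splitting candidate specular pixels into "unsaturated"
# (chroma still recoverable) and "saturated" (near the sensor ceiling).
# Both thresholds are hypothetical values for 8-bit intensities.
SPECULAR_T = 200   # intensity above which a pixel is treated as specular
SATURATED_T = 250  # intensity at which chroma is considered unrecoverable

def classify(pixels):
    """pixels: iterable of (x, y, intensity); returns the two categories."""
    unsaturated, saturated = [], []
    for (x, y, intensity) in pixels:
        if intensity < SPECULAR_T:
            continue  # diffuse pixel, left untouched
        (saturated if intensity >= SATURATED_T else unsaturated).append((x, y))
    return unsaturated, saturated
```

In the paper's pipeline, the unsaturated set would then go to multi-view color variance analysis and the saturated set to local color refinement.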

  15. Welcome detailed data, but with a grain of salt: accuracy, precision, uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Di Baldassarre, Giuliano; Todini, Ezio

    2013-04-01

    New survey techniques are providing a huge amount of highly detailed and accurate data which can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to provide reliable flood risk mapping and predictions. We think that these data should form the basis of hydraulic modelling any time they are available. However, high expectations regarding these datasets should be tempered, as some important issues should be considered. These include: the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, owing to the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution results, as the accuracy of results is not necessarily increased by higher precision. We briefly discuss these issues and existing approaches which can be used to manage highly detailed data. In our opinion, methods based on sub-grid and roughness-upscaling treatments would in many instances be an appropriate solution to maintain consistency with the uncertainty related to model structure and the data available for model building and evaluation.

  16. Applications of GPR in Structural Detailing of the Medway Tunnel

    NASA Astrophysics Data System (ADS)

    Alani, Amir M.; Faramarzi, Assad

    2013-04-01

    This investigation focuses on applications of GPR in structural detailing of a major tunnel under the River Medway in north Kent, UK. Construction of the tunnel was completed in 1996 and it carries a substantial volume of traffic between two major areas of Medway. The tunnel is of the "immersed tube" type, connecting a number of segments at immersion joints. This investigation reports on the use of two separate GPR antenna systems at different frequencies to establish structural details of the tunnel roof at the immersion joints. The processed data compiled as a result of this investigation provided much-needed information to tunnel engineers for forthcoming maintenance planning purposes. It also provided ample information confirming the originally produced construction plans, whose accuracy had been doubted. The reported results are conclusive in terms of the construction materials used (this information was not originally available and needed confirmation) as well as establishing the required information on the formation of the tunnel roof joints. The presentation is complemented by detailed information on the complex process of adapting the GPR systems used in this endeavour.

  17. Efficient and Accurate Indoor Localization Using Landmark Graphs

    NASA Astrophysics Data System (ADS)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error less than 2.5 meters, which outperforms the other two DR-based methods.
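    A landmark graph of the kind described, nodes for landmarks, directed edges carrying heading and path length, can be sketched as below. The miniature graph, node names, and heading tolerance are all hypothetical; the sketch only shows how an observed heading selects the next landmark.

```python
# Hypothetical miniature landmark graph: each edge stores
# (destination landmark, heading in degrees, path length in metres).
graph = {
    "door_A": [("turn_1", 90.0, 5.0)],
    "turn_1": [("stairs", 0.0, 8.0), ("door_B", 180.0, 3.0)],
    "stairs": [],
    "door_B": [],
}

def next_landmark(current, observed_heading, tol=30.0):
    """Pick the outgoing edge whose stored heading best matches the heading
    integrated from inertial sensors since the last detected landmark."""
    candidates = [(abs((h - observed_heading + 180) % 360 - 180), dst)
                  for dst, h, _length in graph[current]]
    if not candidates:
        return None
    err, dst = min(candidates)
    return dst if err <= tol else None
```

Matching detected landmarks to graph nodes this way is what makes the method robust to drift between landmarks, since position is re-anchored at every node.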

  18. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the increased risk of drift associated with textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system can provide the best success rate and has more accurate tracking results than other well-known algorithms.

  19. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  20. 18 CFR 401.122 - Supplementary details.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Supplementary details. 401.122 Section 401.122 Conservation of Power and Water Resources DELAWARE RIVER BASIN COMMISSION ADMINISTRATIVE MANUAL RULES OF PRACTICE AND PROCEDURE General Provisions § 401.122 Supplementary details....

  1. 24. 'HANGAR SHEDS ELEVATIONS DETAILS; ARCHITECTURAL PLANS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. 'HANGAR SHEDS - ELEVATIONS - DETAILS; ARCHITECTURAL PLANS - PLANT AREA; MODIFICATION CENTER NO. 1, DAGGETT, CALIFORNIA.' Partial elevations, and details of sliding doors and ventilator flaps, as built. Contract no. W509 Eng. 2743; File no. 555/81, revision B, dated April 6, 1943. No sheet number. - Barstow-Daggett Airport, Hangar Shed No. 4, 39500 National Trails Highway, Daggett, San Bernardino County, CA

  2. 5 CFR 317.903 - Details.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., detail means the temporary assignment of an SES member to another position (within or outside of the SES) or the temporary assignment of a non-SES member to an SES position, with the expectation that the... agency may not detail an SES employee to unclassified duties for more than 240 days. (3) An agency...

  3. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  4. Accurate, practical simulation of satellite infrared radiometer spectral data

    SciTech Connect

    Sullivan, T.J.

    1982-09-01

    This study's purpose is to determine whether a relatively simple random band model formulation of atmospheric radiation transfer in the infrared region can provide valid simulations of narrow interval satellite-borne infrared sounder system data. Detailed ozonesondes provide the pertinent atmospheric information and sets of calibrated satellite measurements provide the validation. High resolution line-by-line model calculations are included to complete the evaluation.

  5. New model accurately predicts reformate composition

    SciTech Connect

Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  6. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
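The first- or second-order temperature dependence described above lends itself to a simple polynomial fit. The sketch below uses hypothetical red-LED relative-flux data (AlInGaP output typically falls with junction temperature) and NumPy's `polyfit`; the numbers are illustrative, not measurements from the paper.

```python
import numpy as np

# Hypothetical red-LED relative flux vs. junction temperature.
# A second-order polynomial of this kind is cheap enough for a
# feedback controller to evaluate in real time.
T_junction = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 70.0])   # degC
rel_flux = np.array([1.00, 0.95, 0.89, 0.83, 0.76, 0.72])     # normalized

coeffs = np.polyfit(T_junction, rel_flux, 2)   # c2*T^2 + c1*T + c0
predict = np.poly1d(coeffs)

# A controller could scale the red drive current by 1/predict(T)
# to hold the cluster's chromaticity as the heatsink warms.
```
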

  7. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
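Tuning by beats, as the study describes, turns a spectral task into a temporal one: two nearly identical tones sounded together produce a loudness modulation at their difference frequency. A minimal sketch with illustrative frequencies:

```python
import numpy as np

def beat_frequency(f1, f2):
    """Beat (amplitude-modulation) frequency heard when two pure
    tones of nearby frequency are sounded together."""
    return abs(f1 - f2)

# A string at 329.6 Hz against a 330.0 Hz reference beats at 0.4 Hz,
# i.e. one loudness swell every 2.5 s -- a purely temporal cue that
# requires no pitch discrimination at all.
fs = 8000
t = np.arange(0, 5, 1 / fs)
mixture = np.sin(2 * np.pi * 329.6 * t) + np.sin(2 * np.pi * 330.0 * t)
envelope_period = 1 / beat_frequency(329.6, 330.0)   # seconds per beat
```
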

  8. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMM or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
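The closure technique can be sketched numerically: with the artifact measured at every rotational position, each reading is the sum of an artifact error and a machine error, and averaging over rotations separates the two. This is a toy simulation under simplifying assumptions, not either institute's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12                               # pitch positions on the artifact

artifact = rng.normal(0.0, 1.0, N)   # unknown pitch deviations of the gear
artifact -= artifact.mean()          # pitch deviations sum to zero over a revolution
machine = rng.normal(0.0, 1.0, N)    # unknown systematic errors of the machine

# Reading with the artifact rotated by k positions: the artifact error of
# tooth (i + k) is seen at machine position i.
readings = np.array([[artifact[(i + k) % N] + machine[i] for i in range(N)]
                     for k in range(N)])

# Averaging over all rotations cancels the artifact term, leaving the
# machine error; subtracting it from any single run recovers the artifact.
machine_est = readings.mean(axis=0)
artifact_est = readings[0] - machine_est
```
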

  9. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
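A performance model of the kind the paper relies on can be tiny: in a bulk-synchronous step the slowest processor sets the pace, after which communication and synchronization costs are paid. The constants below are made up for illustration, not measured machine parameters.

```python
def predict_step_time(cell_counts, t_cell, boundary_sizes, t_comm, t_sync):
    """Toy bulk-synchronous cost model: the most heavily loaded processor
    determines compute time, then everyone pays communication and sync."""
    compute = max(n * t_cell for n in cell_counts)
    comm = max(b * t_comm for b in boundary_sizes)
    return compute + comm + t_sync

# Balanced vs. skewed partitions of 4000 grid cells over 4 processors:
balanced = predict_step_time([1000] * 4, 2e-6, [40] * 4, 1e-5, 5e-4)
skewed = predict_step_time([2500, 500, 500, 500], 2e-6, [40] * 4, 1e-5, 5e-4)
```

A remapping scheduler would compare such predictions against the cost of redistributing the grid before deciding to remap.
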

  10. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  11. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  12. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  13. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
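The average polarizability and anisotropy quoted above follow from standard tensor invariants: the mean is one third of the trace, and the anisotropy combines the pairwise eigenvalue differences. A small sketch:

```python
import numpy as np

def polarizability_summary(alpha):
    """Mean polarizability and anisotropy of a 3x3 polarizability tensor,
    using the standard invariant formulas on its eigenvalues."""
    eigs = np.linalg.eigvalsh(np.asarray(alpha, dtype=float))
    a1, a2, a3 = eigs
    mean = eigs.mean()                       # trace / 3
    aniso = np.sqrt(((a1 - a2)**2 + (a2 - a3)**2 + (a3 - a1)**2) / 2.0)
    return mean, aniso

# Axially symmetric example (e.g. a diatomic): mean = (2*a_perp + a_par)/3
# and anisotropy = |a_par - a_perp|.
mean, aniso = polarizability_summary(np.diag([10.0, 10.0, 16.0]))
```
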

  14. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  15. 25. 'HANGAR SHEDS TRUSSES DETAILS; ARCHITECTURAL PLANS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. 'HANGAR SHEDS - TRUSSES - DETAILS; ARCHITECTURAL PLANS - PLANT AREA; MODIFICATION CENTER NO. 1, DAGGETT, CALIFORNIA.' Sections and details of trusses, ironwork, and joints, as modified to show ridge joint detail. As built. This blueline also shows the fire suppression system, added in orange pencil for 'Project 13: Bldgs. T-30, T-50, T-70, T-90' at a later, unspecified date. Contract no. W509 Eng. 2743; File no. 555/84, revision B, dated August 24, 1942. No sheet number. - Barstow-Daggett Airport, Hangar Shed No. 4, 39500 National Trails Highway, Daggett, San Bernardino County, CA

  16. A Review of Research and a Meta-Analysis of the Seductive Detail Effect

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel

    2012-01-01

Seductive details constitute interesting but irrelevant information that is not necessary to achieve the instructional objective. The seductive detail effect occurs when people learn more deeply from instructional messages that exclude rather than include these details. This effect is mainly explained by assuming an overloading of the working…

  17. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  18. Can blind persons accurately assess body size from the voice?

    PubMed

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  19. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  20. POSITIVE EMOTIONS ENHANCE RECALL OF PERIPHERAL DETAILS

    PubMed Central

    Talarico, Jennifer M.; Berntsen, Dorthe; Rubin, David C.

    2011-01-01

    Emotional arousal and negative affect enhance recall of central aspects of an event. However, the role of discrete emotions in selective memory processing is understudied. Undergraduates were asked to recall and rate autobiographical memories of eight emotional events. Details of each memory were rated as central or peripheral to the event. Significance of the event, vividness, reliving and other aspects of remembering were also rated for each event. Positive affect enhanced recall of peripheral details. Furthermore, the impairment of peripheral recall was greatest in memories of anger, not of fear. Reliving the experience at retrieval was negatively correlated with recall of peripheral details for some emotions (e.g., anger) but not others (e.g., fear), irrespective of similarities in affect and intensity. Within individuals, recall of peripheral details was correlated with less belief in the memory’s accuracy and more likelihood to recall the memory from one’s own eyes (i.e., a field perspective). PMID:21359127

  1. Global detailed geoid computation and model analysis

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Vincent, S.

    1974-01-01

Comparisons and analyses were carried out through the use of detailed gravimetric geoids which we have computed by combining models with a set of 26,000 1 deg x 1 deg mean free air gravity anomalies. The accuracy of the detailed gravimetric geoid computed using the most recent Goddard earth model (GEM-6) in conjunction with the set of 1 deg x 1 deg mean free air gravity anomalies is assessed at + or - 2 meters on the continents of North America, Europe, and Australia, 2 to 5 meters in the Northeast Pacific and North Atlantic areas, and 5 to 10 meters in other areas where surface gravity data are sparse. The R.M.S. differences between this detailed geoid and the detailed geoids computed using the other satellite gravity fields in conjunction with the same set of surface data range from 3 to 7 meters.
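The quoted R.M.S. differences are straightforward to compute from two gridded geoid models; a minimal sketch with hypothetical 1 deg x 1 deg geoid-height grids:

```python
import numpy as np

def rms_difference(geoid_a, geoid_b):
    """Root-mean-square difference between two geoid height grids (meters)."""
    diff = np.asarray(geoid_a, dtype=float) - np.asarray(geoid_b, dtype=float)
    return float(np.sqrt(np.mean(diff**2)))

# Two toy 10 x 10 patches differing by a constant 5 m bias give an
# RMS difference of exactly 5 m.
a = np.zeros((10, 10))
b = a + 5.0
```
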

  2. Principle of Detailed Balance in Kinetics

    ERIC Educational Resources Information Center

    Alberty, Robert A.

    2004-01-01

The effects of detailed balance on the kinetics of monomolecular triangle reactions are illustrated. A simple experiment demonstrating oscillations, limit cycles, bifurcations, and noise is also described, along with oscillating reactions.
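For the monomolecular triangle A ⇌ B ⇌ C ⇌ A, detailed balance imposes the Wegscheider condition on the rate constants (the product of forward constants around the cycle equals the product of backward ones); when it holds, every individual step is equilibrated at the stationary state, not just the totals. A numerical sketch with illustrative rate constants:

```python
import numpy as np

# Triangle reaction A <-> B <-> C <-> A with forward rates kf and
# backward rates kb. Detailed balance (the Wegscheider condition)
# requires kf[0]*kf[1]*kf[2] == kb[0]*kb[1]*kb[2].
kf = np.array([2.0, 3.0, 1.0])
kb = np.array([1.0, 2.0, 3.0])
assert np.isclose(kf.prod(), kb.prod())

# Rate matrix for d[A, B, C]/dt = M @ [A, B, C]
M = np.array([
    [-(kf[0] + kb[2]),  kb[0],             kf[2]],
    [ kf[0],            -(kb[0] + kf[1]),  kb[1]],
    [ kb[2],            kf[1],             -(kf[2] + kb[1])],
])

# Equilibrium composition: null vector of M, normalized to total 1.
w, v = np.linalg.eig(M)
eq = np.real(v[:, np.argmin(np.abs(w))])
eq = eq / eq.sum()

# At detailed balance the net flux of each individual step vanishes.
flux_AB = kf[0] * eq[0] - kb[0] * eq[1]
```
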

  3. An accurate model potential for alkali neon systems.

    PubMed

    Zanuttini, D; Jacquet, E; Giglio, E; Douady, J; Gervais, B

    2009-12-01

We present a detailed investigation of the ground and lowest excited states of M-Ne dimers, for M=Li, Na, and K. We show that the potential energy curves of these Van der Waals dimers can be obtained accurately by considering the alkali neon systems as one-electron systems. Following previous authors, the model describes the evolution of the alkali valence electron in the combined potentials of the alkali and neon cores by means of core polarization pseudopotentials. The key parameter for an accurate model is the M(+)-Ne potential energy curve, which was obtained by means of ab initio CCSD(T) calculations using a large basis set. For each MNe dimer, a systematic comparison with ab initio computation of the potential energy curve for the X, A, and B states shows the remarkable accuracy of the model. The vibrational analysis and the comparison with existing experimental data strengthen this conclusion and allow for a precise assignment of the vibrational levels. PMID:19968334

  4. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  5. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  6. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data

    PubMed Central

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  7. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  8. The Devil Is in the Details.

    ERIC Educational Resources Information Center

    Dempsey, William M.

    1997-01-01

    A Rochester Institute of Technology (New York) program costing model designed to reflect costs more accurately allocates indirect costs according to salaries and wages, modified total direct costs, square footage of space used, credit hours, and student and faculty full-time equivalents. It allows administrators to make relative value judgments…

  9. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
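The mass dependence described here (the Einstein timescale is the Einstein radius divided by the relative velocity, with the radius scaling as the square root of the lens mass) can be sketched numerically. The masses, distances, and velocity below are illustrative assumptions, not values from the proposal:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m/s]
KPC = 3.086e19       # kiloparsec [m]
MSUN = 1.989e30      # solar mass [kg]

def einstein_timescale(m_lens, d_l, d_s, v_rel):
    """Einstein-radius crossing time t_E = R_E / v_rel for a point lens.

    m_lens: lens mass [kg]; d_l, d_s: lens and source distances [m];
    v_rel: relative transverse velocity [m/s].
    """
    r_e = math.sqrt(4.0 * G * m_lens * d_l * (d_s - d_l) / (d_s * C**2))
    return r_e / v_rel

# Illustrative M31 self-lensing numbers (assumed):
d_s = 770.0 * KPC                 # distance to M31
d_l = d_s - 1.0 * KPC             # lens slightly in front of the source
t1 = einstein_timescale(0.5 * MSUN, d_l, d_s, 200e3)
t2 = einstein_timescale(2.0 * MSUN, d_l, d_s, 200e3)
```

Because the cross-section radius goes as the square root of the mass, a four-times-heavier lens gives exactly twice the timescale, which is why measuring the physical timescale constrains the mass so directly.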

  10. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and more accurately than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.

  11. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
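The flavor of the reconstruction-step monotonicity constraint can be illustrated with a basic minmod limiter, the simplest device of this family (a stand-in for the scheme's full constraint, which the abstract expresses via the median function; note that median(a, b, c) = a + minmod(b − a, c − a)):

```python
import numpy as np

def minmod(a, b):
    """Zero when the arguments disagree in sign, else the smaller-magnitude one."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def limited_slopes(u):
    """Monotonicity-limited cell slopes for a piecewise-linear reconstruction."""
    du = np.diff(u)                      # one-sided differences u[i+1] - u[i]
    s = np.zeros_like(u)
    s[1:-1] = minmod(du[:-1], du[1:])    # interior cells; boundary slopes stay 0
    return s

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # discontinuous data
ramp = np.array([0.0, 1.0, 2.0, 3.0])             # smooth monotone data
```

At the step the limiter returns zero slopes (no new extrema, so monotonicity is preserved), while on the smooth ramp it leaves the natural slope of 1 untouched, matching the abstract's point that the constraint is redundant on smooth data.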

  12. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  13. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  14. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH0 (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
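The bond-separation idea can be shown with a worked example for propane, whose isodesmic bond separation reaction is C3H8 + CH4 → 2 C2H6. The reference heats of formation below are approximate experimental values; the computed reaction energy is a hypothetical stand-in for a DFT result, not a number from the paper:

```python
# Experimental heats of formation at 298 K [kcal/mol] (approximate):
dhf_ch4 = -17.8
dhf_c2h6 = -20.0
dhf_propane_expt = -25.0          # target value, for comparison only

# Hypothetical DFT energy for the isodesmic reaction [kcal/mol]:
# dH_rxn = 2*dHf(C2H6) - dHf(CH4) - dHf(C3H8)
dh_rxn_dft = 2.8

# Hess's law: solve the reaction-energy expression for the unknown dHf(C3H8).
dhf_propane = 2.0 * dhf_c2h6 - dhf_ch4 - dh_rxn_dft
```

Because the reaction conserves bond types, systematic DFT errors largely cancel in dh_rxn_dft, so even a modestly accurate reaction energy yields a heat of formation close to experiment.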

  15. Balancing detail and scale in assessing transparency to improve the governance of agricultural commodity supply chains

    NASA Astrophysics Data System (ADS)

    Godar, Javier; Suavet, Clément; Gardner, Toby A.; Dawkins, Elena; Meyfroidt, Patrick

    2016-03-01

    To date, assessments of the sustainability of agricultural commodity supply chains have largely relied on some combination of macro-scale footprint accounts, detailed life-cycle analyses and fine-scale traceability systems. Yet these approaches are limited in their ability to support the sustainability governance of agricultural supply chains, whether because they are intended for coarser-grained analyses, do not identify individual actors, or are too costly to be implemented in a consistent manner for an entire region of production. Here we illustrate some of the advantages of a complementary middle-ground approach that balances detail and scale of supply chain transparency information by combining consistent country-wide data on commodity production at the sub-national (e.g. municipal) level with per shipment customs data to describe trade flows of a given commodity covering all companies and production regions within that country. This approach can support supply chain governance in two key ways. First, enhanced spatial resolution of the production regions that connect to individual supply chains allows for a more accurate consideration of geographic variability in measures of risk and performance that are associated with different production practices. Second, identification of key actors that operate within a specific supply chain, including producers, traders, shippers and consumers can help discriminate coalitions of actors that have shared stake in a particular region, and that together are capable of delivering more cost-effective and coordinated interventions. We illustrate the potential of this approach with examples from Brazil, Indonesia and Colombia. We discuss how transparency information can deepen understanding of the environmental and social impacts of commodity production systems, how benefits are distributed among actors, and some of the trade-offs involved in efforts to improve supply chain sustainability. We then discuss the challenges and

  16. Interior building details of Building C, Room C203: detail decorative ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior building details of Building C, Room C-203: detail decorative radiator and four-over-four windows; southwesterly view - San Quentin State Prison, Building 22, Point San Quentin, San Quentin, Marin County, CA

  17. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    SciTech Connect

    Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay

    2012-07-01

    Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against measurements by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS Xio® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose agreement to within 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess surface dose. Our study has shown that surface doses for tangential treatment of the chest wall after mastectomy can be accurately predicted by the TPS, as verified by TLD dosimetry.

  18. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  19. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  20. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
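The small-separation Fourier-domain evaluation mentioned above is the classical angular-spectrum method; the sketch below shows that route only, not the authors' Gaussian-sum/unequally-spaced-FFT algorithm. Evanescent components are simply dropped:

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 a distance z between parallel planes.

    Multiplies the field's 2D spectrum by the free-space transfer function
    exp(i*kz*z); spatial frequencies beyond 1/wavelength (evanescent) are zeroed.
    """
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 / wavelength**2 - fx2
    propagating = arg > 0
    kz = 2.0 * np.pi * np.sqrt(np.where(propagating, arg, 0.0))
    transfer = np.where(propagating, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)

# Sanity check: a unit plane wave only acquires the phase exp(i*2*pi*z/lambda).
u0 = np.ones((64, 64), dtype=complex)
uz = angular_spectrum(u0, wavelength=0.5e-6, dx=1e-6, z=10e-6)
```

With z chosen as an integer number of wavelengths (here 20λ), the accumulated phase is a multiple of 2π and the propagated field equals the input, which makes the transfer-function bookkeeping easy to verify.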

  1. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  2. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  3. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  4. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
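The standard interpretation method referred to here computes each phase's effective relative permeability from a steady-state coreflood via Darcy's law, kr = q·μ·L / (k·A·ΔP). The core and fluid numbers below are hypothetical, chosen only to make the arithmetic concrete:

```python
def relative_permeability(q, mu, length, k_abs, area, dp):
    """Effective relative permeability of one phase from a steady-state
    coreflood, via Darcy's law: kr = q * mu * L / (k * A * dP).  SI units."""
    return q * mu * length / (k_abs * area * dp)

# Hypothetical coreflood (SI units):
k_abs = 1.0e-13      # absolute permeability, roughly 100 mD [m^2]
area = 1.964e-3      # cross-section of a 5 cm diameter core [m^2]
length = 0.10        # core length [m]
mu_w = 1.0e-3        # water viscosity [Pa s]
dp = 1.0e5           # pressure drop across the core [Pa]
q_w = 5.0e-8         # steady water flow rate [m^3/s]

kr_w = relative_permeability(q_w, mu_w, length, k_abs, area, dp)
```

The flowrate dependence discussed in the abstract appears precisely because this formula assumes a uniform core: heterogeneity and outlet effects make the inferred kr vary with rate even though the underlying characteristic curves do not.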

  5. Vadose zone transport field study: Detailed test plan for simulated leak tests

    SciTech Connect

    AL Ward; GW Gee

    2000-06-23

    The US Department of Energy (DOE) Groundwater/Vadose Zone Integration Project Science and Technology initiative was created in FY 1999 to reduce the uncertainty associated with vadose zone transport processes beneath waste sites at DOE's Hanford Site near Richland, Washington. This information is needed not only to evaluate the risks from transport, but also to support the adoption of measures for minimizing impacts to the groundwater and surrounding environment. The principal uncertainties in vadose zone transport are the current distribution of source contaminants and the natural heterogeneity of the soil in which the contaminants reside. Oversimplified conceptual models resulting from these uncertainties and limited use of hydrologic characterization and monitoring technologies have hampered the understanding of contaminant migration through Hanford's vadose zone. Essential prerequisites for reducing vadose zone transport uncertainty include the development of accurate conceptual models and the development or adoption of monitoring techniques capable of delineating the current distributions of source contaminants and characterizing natural site heterogeneity. The Vadose Zone Transport Field Study (VZTFS) was conceived as part of the initiative to address the major uncertainties confronting vadose zone fate and transport predictions at the Hanford Site and to overcome the limitations of previous characterization attempts. Pacific Northwest National Laboratory (PNNL) is managing the VZTFS for DOE. The VZTFS will conduct field investigations that will improve the understanding of field-scale transport and lead to the development or identification of efficient and cost-effective characterization methods. Ideally, these methods will capture the extent of contaminant plumes using existing infrastructure (i.e., more than 1,300 steel-cased boreholes). The objectives of the VZTFS are to conduct controlled transport experiments at well-instrumented field sites at Hanford to

  6. Iterative feature refinement for accurate undersampled MR image reconstruction.

    PubMed

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches. PMID:27032527

  7. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
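The alternation of a sparsity-promoting denoising step with a data-consistency step can be sketched with a generic iterative soft-thresholding reconstruction of undersampled Fourier data. This is a simplified stand-in for compressed-sensing MRI in general, not the paper's IFR-CS algorithm, and the sizes, sampling ratio, and threshold are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0      # sparse "image"

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, 64, replace=False)] = True      # undersampled k-space pattern
y = np.fft.fft(x_true)[mask] / np.sqrt(n)          # measured samples (unitary FFT)

def A(x):                                          # forward: image -> k-space samples
    return np.fft.fft(x)[mask] / np.sqrt(n)

def A_adj(r):                                      # adjoint: samples -> image
    full = np.zeros(n, dtype=complex)
    full[mask] = r
    return np.fft.ifft(full) * np.sqrt(n)

def soft(v, t):                                    # complex soft-thresholding
    mag = np.abs(v)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * v, 0.0)

x = np.zeros(n, dtype=complex)
for _ in range(200):                               # data consistency + shrinkage
    x = soft(x + A_adj(y - A(x)), 0.01)
```

With the unitary scaling used here the forward operator has norm at most one, so a unit step size is safe and the loop monotonically decreases the usual L1-regularized objective.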

  8. An Integrative Method for Accurate Comparative Genome Mapping

    PubMed Central

    Swidan, Firas; Rocha, Eduardo P. C; Shmoish, Michael; Pinter, Ron Y

    2006-01-01

    We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978

  9. Detailed seafloor habitat mapping to enhance marine-resource management

    USGS Publications Warehouse

    Zawada, David G.; Hart, Kristen M.

    2010-01-01

    Pictures of the seafloor capture important information about the sediments, exposed geologic features, submerged aquatic vegetation, and animals found in a given habitat. With the emergence of marine protected areas (MPAs) as a favored tactic for preserving coral reef resources, knowledge of essential habitat components is paramount to designing effective management strategies. Surprisingly, detailed information on seafloor habitat components is not available in many areas that are being considered for MPA designation or that are already designated as MPAs. A task of the U.S. Geological Survey Coral Reef Ecosystem STudies (USGS CREST) project is addressing this issue.

  10. Memory for Specific Visual Details can be Enhanced by Negative Arousing Content

    ERIC Educational Resources Information Center

    Kensinger, Elizabeth A.; Garoff-Eaton, Rachel J.; Schacter, Daniel L.

    2006-01-01

    Individuals often claim that they vividly remember information with negative emotional content. At least two types of information could lead to this sense of enhanced vividness: Information about the emotional item itself (e.g., the exact visual details of a snake) and information about the context in which the emotional item was encountered…

  11. Detail in architecture: Between arts & crafts

    NASA Astrophysics Data System (ADS)

    Dulencin, Juraj

    2016-06-01

    Architectural detail represents an important part of architecture. Not only can it be used as an identifier of a specific building but at the same time enhances the experience of the realized project. Within it lie the signs of a great architect and clues to understanding his or her way of thinking. It is therefore the central topic of a seminar offered to architecture students at the Brno University of Technology. During the course of the semester-long class the students acquaint themselves with atypical architectural details of domestic and international architects by learning to read them, understand them and subsequently draw them by creating architectural blueprints. In other words, by general analysis of a detail the students learn theoretical thinking of its architect who, depending on the nature of the design, had to incorporate a variety of techniques and crafts. Students apply this analytical part to their own architectural detail design. The methodology of the seminar consists of experiential learning by project management and is complemented by a series of lectures discussing a diversity of details as well as materials and technologies required to implement it. The architectural detail design is also part of students' bachelors thesis, therefore, the realistic nature of their blueprints can be verified in the production process of its physical counterpart. Based on their own documentation the students choose the most suitable manufacturing process whether it is supplied by a specific technology or a craftsman. Students actively participate in the production and correct their design proposals in real scale with the actual material. A student, as a future architect, stands somewhere between a client and an artisan, materializes his or her idea and adjusts the manufacturing process so that the final detail fulfills aesthetic consistency and is in harmony with its initial concept. One of the very important aspects of the design is its economic cost, an

  12. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  13. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  14. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
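The coordinate processing described in this record, converting a revolute-joint encoder reading into cylindrical coordinates of the probe tip, can be sketched as follows. This is a hypothetical illustration: the names probe_tip_cylindrical, marks_per_rev and arm_length are assumptions, not terms from the patent.

```python
import math

def probe_tip_cylindrical(encoder_count, marks_per_rev, arm_length, z_offset=0.0):
    """Convert a revolute-joint encoder reading into cylindrical
    coordinates (r, theta, z) of the probe tip.  The probe arm is
    rigidly attached to the encoder wheel, so r is the fixed arm
    length and theta follows the wheel rotation."""
    theta = 2.0 * math.pi * (encoder_count % marks_per_rev) / marks_per_rev
    return arm_length, theta, z_offset
```

With 8192 marks per revolution, a count of 2048 corresponds to a quarter turn of the probe arm.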

  15. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  16. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Nasir Uddin, Khan; Nadeem Alam, Khan

    2014-12-01

    The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + λu + u^(2n-1) + (1 + ε^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation to sets of algebraic equations and solves them numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
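For the higher-order Duffing case mentioned above (in its simplest special form u'' + λu + u^3 = 0), the conservative character of the oscillator gives a basic correctness check for any numerical solution: the energy E = v^2/2 + λu^2/2 + u^4/4 must stay constant. A minimal sketch using classical RK4 rather than the paper's algebraic-equation method:

```python
def duffing_rhs(u, v, lam=1.0):
    """First-order system for the conservative oscillator u'' + lam*u + u**3 = 0."""
    return v, -(lam * u + u**3)

def rk4(u0, v0, h, steps, lam=1.0):
    """Classical fourth-order Runge-Kutta integration of the oscillator."""
    u, v = u0, v0
    for _ in range(steps):
        k1u, k1v = duffing_rhs(u, v, lam)
        k2u, k2v = duffing_rhs(u + h/2*k1u, v + h/2*k1v, lam)
        k3u, k3v = duffing_rhs(u + h/2*k2u, v + h/2*k2v, lam)
        k4u, k4v = duffing_rhs(u + h*k3u, v + h*k3v, lam)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return u, v

def energy(u, v, lam=1.0):
    """Conserved energy E = v^2/2 + lam*u^2/2 + u^4/4."""
    return 0.5*v*v + 0.5*lam*u*u + 0.25*u**4
```

Integrating from u(0)=1, u'(0)=0 over many steps, the energy drift stays far below the solution amplitude, which is the kind of accuracy check any solver for these oscillators should pass.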

  17. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  18. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  19. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  20. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  1. Detailed ultraviolet asymptotics for AdS scalar field perturbations

    NASA Astrophysics Data System (ADS)

    Evnin, Oleg; Jai-akson, Puttarak

    2016-04-01

    We present a range of methods suitable for accurate evaluation of the leading asymptotics for integrals of products of Jacobi polynomials in limits when the degrees of some or all polynomials inside the integral become large. The structures in question have recently emerged in the context of effective descriptions of small amplitude perturbations in anti-de Sitter (AdS) spacetime. The limit of high degree polynomials corresponds in this situation to effective interactions involving extreme short-wavelength modes, whose dynamics is crucial for the turbulent instabilities that determine the ultimate fate of small AdS perturbations. We explicitly apply the relevant asymptotic techniques to the case of a self-interacting probe scalar field in AdS and extract a detailed form of the leading large degree behavior, including closed form analytic expressions for the numerical coefficients appearing in the asymptotics.

  2. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
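The radiative-transfer step described above can be sketched as a plane-parallel sum over the model layers, here in the Rayleigh-Jeans limit; the layer opacities and temperatures in the example are illustrative, not NWS model output:

```python
import math

def sky_brightness(layers):
    """Plane-parallel radiative transfer: 'layers' is a list of
    (opacity_nepers, temperature_K) tuples ordered from the observer
    outward.  Each layer emits T*(1 - exp(-tau)) and is attenuated by
    the total opacity of the layers between it and the observer.
    Returns (total zenith opacity, sky brightness temperature)."""
    tau_total = 0.0
    t_sky = 0.0
    for tau, temp in layers:
        t_sky += temp * (1.0 - math.exp(-tau)) * math.exp(-tau_total)
        tau_total += tau
    return tau_total, t_sky
```

A completely opaque layer saturates the sky brightness at the layer's physical temperature, while optically thin layers contribute roughly tau*T each, which is why the brightness contribution to Tsys varies so strongly with wavelength.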

  3. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    the distance to NGC 4258 as either 27 or 29 million light-years, depending on assumptions about the characteristics of this type of star in that galaxy. Other Cepheid-based galaxy distances were used to calculate the expansion rate of the universe, called the Hubble Constant, announced by a team of HST observers last week. "This difference could mean that there may be more uncertainty in Cepheid-determined distances than people have realized," said Moran. "Providing this directly-determined distance to one galaxy -- a distance that can serve as a milestone -- should be helpful in determining distances to other galaxies, and thus the Hubble Constant and the size and age of the universe." The VLBA is a system of ten radio-telescope antennas, each 25 meters (82 feet) in diameter, stretching some 5,000 miles from Mauna Kea in Hawaii to St. Croix in the U.S. Virgin Islands. Operated from NRAO's Array Operations Center in Socorro, NM, the VLBA offers astronomers the greatest resolving power of any telescope anywhere. The NRAO is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. Background information: Determining Cosmic Distances Determining cosmic distances obviously is vital to understanding the size of the universe. In turn, knowing the size of the universe is an important step in determining its age. "The size puts a limit on how much expansion could have occurred since the Big Bang, and thus tells us something about the age," said Moran. However, determining cosmic distances has proven to be a particularly thorny problem for astronomers. In the third century B.C., the Greek astronomer Aristarchus devised a method of using trigonometry to determine the relative distances of the Moon and Sun, but in practice his method was difficult to use. Though a great first step, he missed the mark by a factor of 20.
It wasn't until 1761 that trigonometric methods produced a relatively accurate distance to Venus, thus

  4. Flexible, Fast and Accurate Sequence Alignment Profiling on GPGPU with PaSWAS

    PubMed Central

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J. L.; Nap, Jan Peter

    2015-01-01

    Motivation: To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. Results: With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation. PMID:25830241

  5. Details on the biography of Jerzy Neyman

    NASA Astrophysics Data System (ADS)

    Gaina, Alex

    2003-04-01

    Details on the biography of Jerzy Neyman (1894-1981) are given, together with a short outline of Tighina in Basarabia (the Republic of Moldova), the native town of this outstanding mathematician and statistician, astronomer, meteorologist, biologist, philosopher and sociologist, and founder of the mathematical theory of selection.

  6. 33 CFR 116.20 - Detailed investigation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., Bridge Administration Program determines that a Detailed Investigation should be conducted, the District Commander will initiate an investigation that addresses all of the pertinent data regarding the bridge... will discuss: the obstructive character of the bridge in question; the impact of that bridge...

  7. States Anxious to Get Details about Stimulus

    ERIC Educational Resources Information Center

    Hoff, David J.

    2009-01-01

    As Congress began debate last week over the size and scope of more than $120 billion in proposed emergency education aid, state leaders were anxiously awaiting the details so they could make specific plans to spend the economic-stimulus money. Governors, state legislators, and state schools chiefs have yet to learn what rules Congress will attach…

  8. Big Heads, Small Details and Autism

    ERIC Educational Resources Information Center

    White, Sarah; O'Reilly, Helen; Frith, Uta

    2009-01-01

    Autism is thought to be associated with a bias towards detail-focussed processing. While the cognitive basis remains controversial, one strong hypothesis is that there are high processing costs associated with changing from local into global processing. A possible neural mechanism underlying this processing style is abnormal neural connectivity;…

  9. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
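The grid-refinement check used to verify a scheme's order of accuracy can be sketched generically. This is not the partial-implicitization scheme itself, only the verification idea, shown on a standard fourth-order central difference:

```python
import math

def d1_fourth_order(f, x, h):
    """Fourth-order central difference approximation to f'(x)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

def observed_order(f, df, x, h):
    """Estimate the order of accuracy by halving h: the error of a
    p-th order formula shrinks by 2**p, so log2 of the error ratio
    recovers p."""
    e1 = abs(d1_fourth_order(f, x, h) - df(x))
    e2 = abs(d1_fourth_order(f, x, h / 2) - df(x))
    return math.log2(e1 / e2)
```

For a smooth function such as sin(x) the observed order comes out very close to 4, confirming the truncation order deduced from the Taylor expansions.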

  10. In situ studies on controlling an atomically-accurate formation process of gold nanoclusters

    NASA Astrophysics Data System (ADS)

    Yang, Lina; Cheng, Hao; Jiang, Yong; Huang, Ting; Bao, Jie; Sun, Zhihu; Jiang, Zheng; Ma, Jingyuan; Sun, Fanfei; Liu, Qinghua; Yao, Tao; Deng, Huijuan; Wang, Shuxin; Zhu, Manzhou; Wei, Shiqiang

    2015-08-01

    fragmentation of the initial larger Au_n clusters into metastable intermediate Au8-Au13 smaller clusters. This is a critical step, which allows for the secondary size-growth step of the intermediates toward the atomically monodisperse Au13 clusters via incorporating the reactive Au(I)-Cl species in the solution. Such a secondary-growth pathway is further confirmed by the successful growth of Au13 through reaction of isolated Au11 clusters with AuClPPh3 in the HCl environment. This work addresses the importance of reaction intermediates in guiding the way towards controllable synthesis of metal nanoclusters. Electronic supplementary information (ESI) available: Synthesis and characterization of the starting and end Au nanoclusters, assignment of the MALDI-MS peaks, details for the EXAFS curve-fitting and fitting results, parallel experiments using sulfuric acid and acetic acid as etchants, and experimental details for growing isolated Au11 into Au13 clusters in the HCl environment. See DOI: 10.1039/c5nr03711e

  11. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm3 on large 100,000 m3 models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for a reduction of point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra large point clouds enabling real-time visualization of huge cloud spaces in conventional web browsers.
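The idea behind planar-based decimation can be sketched as follows: fit the dominant plane, thin its inliers aggressively, and keep off-plane detail points untouched. This is a minimal illustration, not the paper's exact algorithm; the tolerance and thinning factor are assumptions.

```python
import numpy as np

def decimate_planar(points, dist_tol=0.01, keep_every=10):
    """Plane-based decimation sketch: fit one dominant plane by SVD
    (normal = direction of least variance), mark points within
    dist_tol of the plane as planar inliers, thin the inliers by
    keeping every keep_every-th point, and keep all off-plane
    detail points unchanged."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    dist = np.abs((points - centroid) @ normal)
    planar = dist < dist_tol
    kept_planar = points[planar][::keep_every]  # uniform thinning on the plane
    return np.vstack([kept_planar, points[~planar]])
```

Because indoor scenes are dominated by walls, floors and ceilings, thinning only the planar regions can remove the bulk of the points while preserving geometric detail, consistent with the 80-95% reduction reported above.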

  12. Molecular adsorption at Pt(111). How accurate are DFT functionals?

    PubMed

    Gautier, Sarah; Steinmann, Stephan N; Michel, Carine; Fleurat-Lessard, Paul; Sautet, Philippe

    2015-11-21

    Molecular chemisorption at a metal surface is a key step for many processes, such as catalysis, electrochemistry, surface treatment, tribology and friction. Modeling with density functional theory is largely used on these systems. From a detailed comparison with accurate micro-calorimetric data on ten systems (involving ethylene, cyclohexene, benzene, naphthalene, CO, O2, H2, methane, ethane), we study the accuracy, for chemisorption on Pt(111), of five exchange-correlation functionals including one generalized gradient approximation functional (PBE) and four functionals that take into account van der Waals interactions (optPBE-vdW, optB86b-vdW, BEEF-vdW, PBE-dDsC). Although the functionals provide very similar geometries and electronic structures, as shown by projected densities of states, they give strikingly different results for the adsorption energy of molecules on Pt(111). Among the set of chemisorption data, the lowest mean absolute deviations (MAD) are obtained with the optPBE-vdW and PBE-dDsC functionals (∼0.2 eV) while PBE and optB86b-vdW give twice larger MAD (∼0.45 eV). BEEF-vdW is intermediate with a MAD of 0.33 eV. For laterally π-bound unsaturated hydrocarbons (cyclohexene, benzene, naphthalene) the PBE and the BEEF-vdW functionals are severely under-bound, while optPBE-vdW and PBE-dDsC provide a good match with experiments. Hence both the incorporation of van der Waals dispersive forces and the choice of the exchange functional have a key influence on the chemisorption energy. Vertically bound ethylidyne and CO are in contrast over-bound with all functionals, the best agreement being obtained with BEEF-vdW. None of the selected functionals hence provides a universally accurate treatment of chemisorption energies. PMID:26455444

  13. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  14. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate nativelike structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method achieved prediction of RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807

  15. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  16. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  17. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range -- well smaller than the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
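One way to exploit the spherical constraint mentioned above is a least-squares sphere fit: in any static pose the accelerometer should measure |a| = g, so readings collected over many orientations lie on a sphere whose center is the sensor bias. A hedged sketch, not the authors' exact calibration procedure:

```python
import numpy as np

def sphere_fit(readings):
    """Least-squares sphere fit.  From |r - b|^2 = R^2 we get the
    linear model |r|^2 = 2 r.b + (R^2 - |b|^2), solved for the bias
    vector b and the constant c = R^2 - |b|^2 by ordinary least
    squares; the radius R should recover the local value of g."""
    A = np.hstack([2 * readings, np.ones((len(readings), 1))])
    y = (readings ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    b = sol[:3]
    R = np.sqrt(sol[3] + b @ b)
    return b, R
```

Subtracting the fitted bias from raw readings is the kind of correction that can push a few-degree raw accuracy toward the sub-arcminute regime when combined with the cylindrical constraints of the mount axes.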

  18. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  19. Downscaling NASA Climatological Data to Produce Detailed Climate Zone Maps

    NASA Technical Reports Server (NTRS)

    Chandler, William S.; Hoell, James M.; Westberg, David J.; Whitlock, Charles H.; Zhang, Taiping; Stackhouse, P. W.

    2011-01-01

    The design of energy efficient sustainable buildings is heavily dependent on accurate long-term and near real-time local weather data. To varying degrees the current meteorological networks over the globe have been used to provide these data, albeit often from sites far removed from the desired location. The national need is for access to weather and solar resource data accurate enough to develop preliminary building designs within a short proposal time limit, usually within 60 days. The NASA Prediction Of Worldwide Energy Resource (POWER) project was established by NASA to provide industry friendly access to globally distributed solar and meteorological data. As a result, the POWER web site (power.larc.nasa.gov) now provides global information on many renewable energy parameters and several buildings-related items, but at a relatively coarse resolution. This paper describes a method of downscaling NASA atmospheric assimilation model results to higher resolution and maps those parameters to produce building climate zone maps using estimates of temperature and precipitation. The distribution of climate zones for North America, with an emphasis on the Pacific Northwest, for just one year shows very good correspondence to the currently defined distribution. The method has the potential to provide a consistent procedure for deriving climate zone information on a global basis that can be assessed for variability and updated more regularly.
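Climate-zone mapping from temperature estimates typically goes through heating and cooling degree-days. A minimal sketch, assuming a conventional 18.3 °C (65 °F) base temperature; the exact base and zone thresholds used in the paper are not specified here:

```python
def degree_days(daily_mean_temps_c, base_c=18.3):
    """Accumulate heating (HDD) and cooling (CDD) degree-days from a
    series of daily mean temperatures in Celsius.  Days colder than
    the base add to HDD; warmer days add to CDD."""
    hdd = sum(max(base_c - t, 0.0) for t in daily_mean_temps_c)
    cdd = sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)
    return hdd, cdd
```

Annual HDD and CDD totals (together with precipitation) are what building-code climate zone definitions are keyed to, so downscaled temperature fields feed directly into maps like those described above.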

  20. Detail enhancement of blurred infrared images based on frequency extrapolation

    NASA Astrophysics Data System (ADS)

    Xu, Fuyuan; Zeng, Deguo; Zhang, Jun; Zheng, Ziyang; Wei, Fei; Wang, Tiedan

    2016-05-01

    A novel algorithm for enhancing the details of blurred infrared images based on frequency extrapolation is presented in this paper. Unlike other work, this algorithm focuses on how to predict higher frequency information from the Laplacian pyramid decomposition of the blurred image. The algorithm uses the first high-frequency level of the pyramid of the blurred image to generate a higher, non-existing frequency component, which is added back to the histogram-equalized input blurred image. A simple nonlinear operator is used to analyze the extracted first-level high-frequency component of the pyramid. Two critical parameters participate in the calculation: the clipping parameter C and the scaling parameter S. A detailed analysis of how these two parameters act during the procedure is demonstrated with figures in this paper. The blurred image becomes clearer and its detail is enhanced owing to the added higher frequency information. The algorithm is computationally simple and performs well, so it can be deployed in real-time industrial applications. Extensive experiments and illustrations of the algorithm's performance are presented to demonstrate its effectiveness.
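    The pipeline the abstract describes (extract the first high-frequency pyramid level, apply a clip-and-scale nonlinearity with parameters C and S, add the result back) can be sketched as follows. The box blur, the parameter values, and the clip-then-scale form of the nonlinear operator are illustrative assumptions, not the paper's exact operator.

    ```python
    import numpy as np

    def blur(img, k=5):
        """Separable box blur as a stand-in for the pyramid's Gaussian filter."""
        kernel = np.ones(k) / k
        out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, out)

    def enhance(img, C=0.05, S=2.0):
        """Frequency-extrapolation sketch: take the first high-frequency level
        (image minus its blur), clip it at +/-C, scale it by S, and add it back."""
        low = blur(img)
        high = img - low                     # first Laplacian pyramid level
        high = np.clip(high, -C, C) * S      # hypothesized nonlinear operator
        return np.clip(img + high, 0.0, 1.0)

    img = np.random.rand(32, 32)             # stand-in for a blurred IR frame in [0, 1]
    out = enhance(img)
    ```

    Raising S sharpens edges more aggressively, while C bounds how much any single pixel's detail term can grow, which is the trade-off the paper's figures explore.
    
    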

  1. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  2. Accurate and efficient reconstruction of deep phylogenies from structured RNAs

    PubMed Central

    Stocsits, Roman R.; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F.

    2009-01-01

    Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687

  3. Accurate and efficient reconstruction of deep phylogenies from structured RNAs.

    PubMed

    Stocsits, Roman R; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F

    2009-10-01

    Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687

  4. Mouse models of human AML accurately predict chemotherapy response.

    PubMed

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W

    2009-04-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  5. Accurate eye center location through invariant isocentric patterns.

    PubMed

    Valenti, Roberto; Gevers, Theo

    2012-09-01

    Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep low computational costs. To further gain scale invariance, the approach is applied to a scale space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery. PMID:22813958

  6. Bolivia-Brazil gas line route detailed

    SciTech Connect

    Not Available

    1992-05-11

    This paper reports that state oil companies of Brazil and Bolivia have signed an agreement outlining the route for a 2,270 km pipeline system to deliver natural gas from Bolivian fields to Southeast Brazil. The two sides currently are negotiating details about construction costs as well as contract volumes and prices. Capacity is projected at 283-565 MMcfd. No official details are available, but Roberto Y. Hukai, a director of the Sao Paulo engineering company Jaako Poyry/Technoplan, estimates transportation cost of the Bolivian gas at 90 cents/MMBTU. That would be competitive with the price of gas delivered to the Sao Paulo gas utility Comgas, he said. Brazil's Petroleos Brasileiro SA estimates construction of the pipeline on the Brazilian side alone will cost $1.2-1.4 billion. Bolivia's Yacimientos Petroliferos Fiscales Bolivianos (YPFB) is negotiating with private domestic and foreign investors for construction of the Bolivian portion of the project.

  7. Langevin dynamics neglecting detailed balance condition.

    PubMed

    Ohzeki, Masayuki; Ichiki, Akihisa

    2015-07-01

    An improved method for driving a system into a desired distribution, for example, the Gibbs-Boltzmann distribution, is proposed, which makes use of an artificial relaxation process. The standard techniques for achieving the Gibbs-Boltzmann distribution involve numerical simulations under the detailed balance condition. In contrast, in the present study we formulate the Langevin dynamics, for which the corresponding Fokker-Planck operator includes an asymmetric component violating the detailed balance condition. This leads to shifts in the eigenvalues and results in the acceleration of the relaxation toward the steady state. The numerical implementation demonstrates faster convergence and shorter correlation time, and the technique of biased event sampling, Nemoto-Sasa theory, further highlights the efficacy of our method. PMID:26274123
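    The idea of accelerating relaxation by violating detailed balance can be illustrated with a minimal sketch: for a standard 2-D Gaussian target, adding an antisymmetric (divergence-free) drift term to overdamped Langevin dynamics leaves exp(-U) stationary while making the dynamics irreversible. The Euler-Maruyama discretization, step size, and the specific matrix J below are illustrative choices, not the authors' formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample(gamma, n_steps=20000, dt=0.05):
        """Euler-Maruyama Langevin sampler for U(x) = |x|^2 / 2 in 2-D.
        The antisymmetric matrix J adds a rotational drift that breaks the
        detailed balance condition but preserves the Gaussian stationary law,
        since div(J grad U exp(-U)) = 0 when J is antisymmetric."""
        J = np.array([[0.0, gamma], [-gamma, 0.0]])
        x = np.zeros(2)
        xs = np.empty((n_steps, 2))
        for i in range(n_steps):
            grad = x                                  # grad U for U = |x|^2 / 2
            x = x - (grad + J @ grad) * dt + np.sqrt(2 * dt) * rng.standard_normal(2)
            xs[i] = x
        return xs

    xs = sample(gamma=2.0)   # gamma = 0 recovers reversible Langevin dynamics
    ```

    Comparing autocorrelation times of `sample(0.0)` and `sample(2.0)` exhibits the shorter correlation time that the abstract attributes to the eigenvalue shifts.
    
    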

  8. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography

    PubMed Central

    Haley, William E.; Ibrahim, El-Sayed H.; Qu, Mingliang; Cernigliaro, Joseph G.; Goldfarb, David S.; McCollough, Cynthia H.

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  9. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography.

    PubMed

    Haley, William E; Ibrahim, El-Sayed H; Qu, Mingliang; Cernigliaro, Joseph G; Goldfarb, David S; McCollough, Cynthia H

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  10. Easy Accurate Transfer of the Sculpted Soft Tissue Contours to the Working Cast: A Clinical Tip.

    PubMed

    Jambhekar, Shantanu S; Kheur, Mohit G; Matani, Jay; Sethi, Sumit

    2014-12-01

    Tooth replacement in the esthetic zone presents a myriad of challenges for the clinician. An ovate pontic accurately duplicates the emergence profile of the natural tooth it replaces in order to provide an esthetic, yet cleansable prosthesis. The accurate transfer of this sculpted tissue beneath the pontic of the provisional restoration is critical to provide the dental laboratory technician with the necessary information to fabricate a definitive restoration with an appropriate emergence profile. This article presents an innovative, simple and convenient impression technique for easy and accurate transfer of the tissue contours to the working cast, avoiding the tissue collapse and tissue compression caused by the impression material. PMID:26199543

  11. Detailed Jet Dynamics in a Collapsing Bubble

    NASA Astrophysics Data System (ADS)

    Supponen, Outi; Obreschkow, Danail; Kobel, Philippe; Farhat, Mohamed

    2015-12-01

    We present detailed visualizations of the micro-jet forming inside an aspherically collapsing cavitation bubble near a free surface. The high-quality visualizations of large and strongly deformed bubbles disclose so far unseen features of the dynamics inside the bubble, such as a mushroom-like flattened jet-tip, crown formation and micro-droplets. We also find that jetting near a free surface reduces the collapse time relative to the Rayleigh time.
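    The "Rayleigh time" used above as the reference collapse time is the classical result for an empty spherical cavity of initial radius $R_0$ collapsing in a liquid of density $\rho$ under pressure difference $p_\infty - p_v$:

    ```latex
    T_{\mathrm{Rayleigh}} \approx 0.915\, R_0 \sqrt{\frac{\rho}{p_\infty - p_v}}
    ```

    The observation that jetting near a free surface shortens the collapse relative to $T_{\mathrm{Rayleigh}}$ is the paper's quantitative finding; the formula itself is standard Rayleigh (1917) theory, stated here for context.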

  12. Detailed scour measurements around a debris accumulation

    USGS Publications Warehouse

    Mueller, David S.; Parola, Arthur C.

    1998-01-01

    Detailed scour measurements were made at Farm-Market 2004 over the Brazos River near Lake Jackson, Tex. during flooding in October 1994. Woody debris accumulations on bents 6, 7, and 8 obstructed flow through the bridge, causing scour of the streambed. Measurements at the site included three-dimensional velocities, channel bathymetry, water-surface elevations, water-surface slope, and discharge. Channel geometry upstream from the bridge caused approach conditions to be nonuniform.

  13. Structural concepts and details for seismic design

    SciTech Connect

    Not Available

    1991-09-01

    This manual discusses building and building component behavior during earthquakes, and provides suggested details for seismic resistance which have been shown by experience to provide adequate performance during earthquakes. Special design and construction practices are also described which, although they might be common in some high-seismic regions, may not be common in low and moderate seismic-hazard regions of the United States. Special attention is given to describing the level of detailing appropriate for each seismic region. The UBC seismic criteria for all seismic zones are carefully examined, and many examples of connection details are given. The general scope of discussion is limited to materials and construction types common to Department of Energy (DOE) sites. Although the manual is primarily written for professional engineers engaged in performing seismic-resistant design for DOE facilities, the first two chapters, plus the introductory sections of succeeding chapters, contain descriptions which are also directed toward project engineers who authorize, review, or supervise the design and construction of DOE facilities. 88 refs., 188 figs.

  14. Accurate Measurement of Organic Solar Cell Efficiency

    SciTech Connect

    Emery, K.; Moriarty, T.

    2008-01-01

    We discuss the measurement and analysis of current vs. voltage (I-V) characteristics of organic and dye-sensitized photovoltaic cells and modules. A brief discussion of the history of photovoltaic efficiency measurements and procedures will be presented. We discuss both the error sources in the measurements and the strategies to minimize their influence. These error sources include the sample area, spectral errors, temperature fluctuations, current and voltage response time, contacting, and degradation during testing. Information that can be extracted from light and dark I-V measurement includes peak power, open-circuit voltage, short-circuit current, series and shunt resistance, diode quality factor, dark current, and photo-current. The quantum efficiency provides information on photo-current nonlinearities, current generation, and recombination mechanisms.
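    The headline parameters listed (peak power, open-circuit voltage, short-circuit current, fill factor) can be read off a measured I-V curve. A minimal sketch follows, with a synthetic ideal-diode curve standing in for real measurements; the function name `iv_parameters` and the diode constants are purely illustrative, and a real procedure would also apply the spectral and temperature corrections the abstract discusses.

    ```python
    import numpy as np

    def iv_parameters(v, i):
        """Extract basic parameters from an I-V curve.
        v in volts (increasing), i in amps with photocurrent positive."""
        isc = np.interp(0.0, v, i)        # short-circuit current at V = 0
        voc = np.interp(0.0, -i, v)       # open-circuit voltage where I crosses 0
        p = v * i
        k = np.argmax(p)                  # maximum power point
        return {"Isc": isc, "Voc": voc, "Pmax": p[k],
                "FF": p[k] / (isc * voc)}  # fill factor

    # Synthetic ideal-diode curve: I = Iph - I0 * (exp(V / Vt) - 1)
    v = np.linspace(0.0, 0.7, 500)
    i = 0.02 - 1e-9 * (np.exp(v / 0.025) - 1)
    params = iv_parameters(v, i)
    ```

    Series and shunt resistance and the diode quality factor would come from fitting the full curve rather than from these point estimates.
    
    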

  15. Capsule-odometer: A concept to improve accurate lesion localisation

    PubMed Central

    Karargyris, Alexandros; Koulaouzidis, Anastasios

    2013-01-01

    In order to improve lesion localisation in small-bowel capsule endoscopy, a modified capsule design has been proposed incorporating localisation and - in theory - stabilization capabilities. The proposed design consists of a capsule fitted with protruding wheels attached to a spring-mechanism. This would act as a miniature odometer, leading to more accurate lesion localization information in relation to the onset of the investigation (spring expansion e.g., pyloric opening). Furthermore, this capsule could allow stabilization of the recorded video as any erratic, non-forward movement through the gut is minimised. Three-dimensional (3-D) printing technology was used to build a capsule prototype. Thereafter, miniature wheels were also 3-D printed and mounted on a spring which was attached to conventional capsule endoscopes for the purpose of this proof-of-concept experiment. In vitro and ex vivo experiments with porcine small-bowel are presented herein. Further experiments have been scheduled. PMID:24124345

  16. Uncertainty partition challenges the predictability of vital details of climate change

    NASA Astrophysics Data System (ADS)

    Fatichi, Simone; Ivanov, Valeriy Y.; Paschalis, Athanasios; Peleg, Nadav; Molnar, Peter; Rimkus, Stefan; Kim, Jongho; Burlando, Paolo; Caporali, Enrica

    2016-05-01

    Decision makers and consultants are particularly interested in "detailed" information on future climate to prepare adaptation strategies and adjust design criteria. Projections of future climate at local spatial scales and fine temporal resolutions are subject to the same uncertainties as those at the global scale but the partition among uncertainty sources (emission scenarios, climate models, and internal climate variability) remains largely unquantified. At the local scale, the uncertainty of the mean and extremes of precipitation is shown to be irreducible for mid and end-of-century projections because it is almost entirely caused by internal climate variability (stochasticity). Conversely, projected changes in mean air temperature and other meteorological variables can be largely constrained, even at local scales, if more accurate emission scenarios can be developed. The results were obtained by applying a comprehensive stochastic downscaling technique to climate model outputs for three exemplary locations. In contrast with earlier studies, the three sources of uncertainty are considered as dependent and, therefore, non-additive. The evidence of the predominant role of internal climate variability leaves little room for uncertainty reduction in precipitation projections; however, the inference is not necessarily negative, because the uncertainty of historic observations is almost as large as that for future projections with direct implications for climate change adaptation measures.

  17. Detailed close-ups and the big picture of spliceosomes

    PubMed Central

    Jurica, Melissa S.

    2008-01-01

    Summary The spliceosome is the huge macromolecular assembly responsible for the removal of introns from pre-mRNA transcripts. The size and complexity of this dynamic cellular machine dictates that structural analysis of the spliceosome is best served by a combination of techniques. Electron microscopy is providing a more global, albeit less detailed, view of spliceosome assemblies. X-ray crystallographers and NMR spectroscopists are steadily reporting more atomic resolution structures of individual spliceosome components and fragments. Increasingly, structures of these individual pieces in complex with binding partners are yielding insights into the interfaces that hold the entire spliceosome assembly together. Although the information arising from the various structural studies of splicing machinery has not yet fully converged into a complete model, we can expect that a detailed understanding of spliceosome structure will arise at the juncture of structural and computational modeling methods. PMID:18550358

  18. Exploring Architectural Details Through a Wearable Egocentric Vision Device.

    PubMed

    Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita

    2016-01-01

    Augmented user experiences in the cultural heritage domain are in increasing demand by the new digital native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details, using a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide him with information about the details at which he is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience. PMID:26901197

  19. Exploring Architectural Details Through a Wearable Egocentric Vision Device

    PubMed Central

    Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita

    2016-01-01

    Augmented user experiences in the cultural heritage domain are in increasing demand by the new digital native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details, using a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide him with information about the details at which he is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience. PMID:26901197

  20. Using ecological zones to increase the detail of Landsat classifications

    NASA Technical Reports Server (NTRS)

    Fox, L., III; Mayer, K. E.

    1981-01-01

    Changes in classification detail of forest species descriptions were made for Landsat data on 2.2 million acres in northwestern California. Because basic forest canopy structures may exhibit very similar E-M energy reflectance patterns in different environmental regions, classification labels based on Landsat spectral signatures alone become very generalized when mapping large heterogeneous ecological regions. By adding a seven ecological zone stratification, a 167% improvement in classification detail was made over the results achieved without it. The seven zone stratification is a less costly alternative to the inclusion of complex collateral information, such as terrain data and soil type, into the Landsat data base when making inventories of areas greater than 500,000 acres.

  1. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Subsequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems to one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  2. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter makes the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  3. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes deviations of up to 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film. PMID:26689962
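    As a rough sketch of what LSE correction involves in practice: convert 16-bit scanner transmission readings to net optical density, then subtract a calibrated lateral term. The quadratic form and the coefficient `a` below are hypothetical stand-ins; the paper's conclusion is precisely that the true correction must be calibrated per scanner, color channel, and dose.

    ```python
    import numpy as np

    def optical_density(pixel, white=65535.0):
        """Optical density from 16-bit flatbed transmission values (OD = -log10 T)."""
        return -np.log10(np.clip(pixel, 1, None) / white)

    def correct_lateral(od, x_mm, a=2.2e-6):
        """Toy lateral-scan-effect correction, quadratic in the distance from the
        scanner's central axis. Coefficient `a` is illustrative only and would be
        fitted per channel and dose from uniformly irradiated film."""
        return od - a * x_mm ** 2

    # Hypothetical readings of a uniformly exposed film at 0 mm and 100 mm lateral
    pix = np.array([36850.0, 35000.0])
    od = optical_density(pix)
    od_corr = correct_lateral(od, np.array([0.0, 100.0]))
    ```

    After the correction the two readings of the (nominally uniform) film agree far more closely, which is the behavior a proper per-channel LSE calibration aims for.
    
    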

  4. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  5. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis from which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  6. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.
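The per-channel correction the abstract calls for can be illustrated with a minimal sketch: fit the lateral response measured from a uniformly irradiated film strip, then rescale each pixel to its on-axis equivalent. This is not the authors' actual procedure; the quadratic falloff model, the polynomial fit and all values are illustrative assumptions.

```python
import numpy as np

def fit_lse_correction(lateral_pos, response, deg=2):
    """Fit a per-channel lateral response curve from a strip of
    uniformly irradiated film and return a correction function."""
    coeffs = np.polyfit(lateral_pos, response, deg)
    center = np.polyval(coeffs, 0.0)   # response on the scan axis
    def correct(pos, pixel_value):
        # Rescale each pixel as if it had been read at the centre line.
        return pixel_value * center / np.polyval(coeffs, pos)
    return correct

# Synthetic red-channel data: a uniform film (OD 0.60) whose readout
# falls off quadratically toward the lateral edges (up to ~10%).
pos = np.linspace(-10.0, 10.0, 21)            # cm from the centre line
measured = 0.60 * (1.0 - 0.001 * pos**2)
correct = fit_lse_correction(pos, measured)
corrected = correct(pos, measured)            # ~0.60 everywhere
```

In practice one such fit would be made per color channel and per dose level, as the abstract concludes, since the LSE magnitude depends on both.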

  7. CT-Analyst: fast and accurate CBR emergency assessment

    NASA Astrophysics Data System (ADS)

    Boris, Jay; Fulton, Jack E., Jr.; Obenschain, Keith; Patnaik, Gopal; Young, Theodore, Jr.

    2004-08-01

An urban-oriented emergency assessment system for airborne Chemical, Biological, and Radiological (CBR) threats, called CT-Analyst and based on new principles, gives greater accuracy and much greater speed than possible with current alternatives. This paper explains how this has been done. The increased accuracy derives from detailed, three-dimensional CFD computations including solar heating, buoyancy, complete building geometry specification, trees, wind fluctuations, and particle and droplet distributions (as appropriate). This paper shows how a finite number of such computations for a given area can be extended to all wind directions and speeds, and all likely sources and source locations, using a new data structure called Dispersion Nomographs. Finally, we demonstrate a portable, entirely graphical software tool called CT-Analyst that embodies this entirely new, high-resolution technology and runs effectively on small personal computers. Real-time users don't have to wait for results because accurate answers are available with near zero latency (that is, 10-20 scenarios per second). Entire sequences of cases (e.g. a continuously changing source location or wind direction) can be computed and displayed as continuous-action movies. Since the underlying database has been precomputed, the door is wide open for important new real-time, zero-latency functions such as sensor data fusion, backtracking to an unknown source location, and even evacuation route planning. Extensions of the technology to sensor location optimization, buildings, tunnels, and integration with other advanced technologies, e.g. micrometeorology or detailed wind field measurements, will be discussed briefly here.

  8. Higher Education in France: A Handbook of Information Concerning Fields of Study in Each Institution. Bulletin, 1952, No. 6

    ERIC Educational Resources Information Center

    Kahler, Edith

    1952-01-01

    Advising students who wish to study in other countries is often difficult because accurate, up-to-date, and detailed information about the offerings in their higher institutions is frequently unavailable. A student wishing to study in a given country needs to know what the several institutions offer not only in his own subject area but also in…

  9. Revisiting the Seductive Details Effect in Multimedia Learning: Context-Dependency of Seductive Details

    ERIC Educational Resources Information Center

    Ozdemir, Devrim; Doolittle, Peter

    2015-01-01

    The purpose of this study was to investigate the effects of context-dependency of seductive details on recall and transfer in multimedia learning environments. Seductive details were interesting yet irrelevant sentences in the instructional text. Two experiments were conducted. The purpose of Experiment 1 was to identify context-dependent and…

  10. Accurate molecular classification of cancer using simple rules

    PubMed Central

    Wang, Xiaosheng; Gotoh, Osamu

    2009-01-01

Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their dependency degrees, as proposed in rough set theory. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction. PMID:19874631
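A one-gene threshold rule evaluated by leave-one-out cross-validation, in the spirit of the abstract, can be sketched on toy data as follows. This is only an illustration: an exhaustive threshold search stands in for the rough-set dependency-degree screening, which is not reproduced here, and the dataset is synthetic.

```python
import numpy as np

def best_single_gene_rule(X, y):
    """Pick the (gene, threshold, polarity) whose one-gene rule
    'polarity * (expression - threshold) > 0 => class 1' best fits
    the training data."""
    best = (0, 0.0, 1, -1.0)            # gene, threshold, polarity, accuracy
    for g in range(X.shape[1]):
        for t in X[:, g]:               # candidate thresholds: observed values
            for pol in (1, -1):
                pred = (pol * (X[:, g] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[3]:
                    best = (g, t, pol, acc)
    return best[:3]

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation of the single-gene rule."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        g, t, pol = best_single_gene_rule(X[mask], y[mask])
        hits += int((pol * (X[i, g] - t) > 0) == y[i])
    return hits / len(y)

# Toy data: gene 1 separates the classes, the other genes are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, 1] += 3.0                     # the informative marker gene
acc = loocv_accuracy(X, y)              # expected to be high
```

With a well-separated marker gene the LOOCV accuracy approaches 1.0, mirroring the paper's finding that one or two genes can suffice.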

  11. How accurate are Scottish cancer registration data?

    PubMed Central

    Brewster, D.; Crichton, J.; Muir, C.

    1994-01-01

    In order to assess the accuracy of Scottish cancer registration data, a random sample of 2,200 registrations, attributed to the year 1990, was generated. Relevant medical records were available for review in 2,021 (92%) cases. Registration details were reabstracted from available records and compared with data in the registry. Discrepancies in identifying items of data (surname, forename, sex and date of birth) were found in 3.5% of cases. Most were trivial and would not disturb record linkage. Discrepancy rates of 7.1% in post code of residence at the time of diagnosis (excluding differences arising through boundary changes), 11.0% in anniversary date (excluding differences of 6 weeks or less), 7.7% in histological verification status, 5.4% in ICD-9 site codes (the first three digits) and 14.5% in ICD-O morphology codes (excluding 'inferred' morphology codes) were recorded. Overall, serious discrepancies were judged to have occurred in 2.8% of cases. In many respects, therefore, Scottish cancer registration data show a high level of accuracy that compares favourably to the reported accuracy of the few other cancer registries undertaking such analyses. PMID:7947104

  12. Capabilities and limitations of detailed hillslope hydrological modelling

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel

    1999-01-01

Hillslope hydrological modelling is considered to be of great importance for the understanding and quantification of hydrological processes in hilly or mountainous landscapes. In recent years a few comprehensive hydrological models have been developed at the hillslope scale which have resulted in an advanced representation of hillslope hydrological processes (including their interactions), and in some operational applications, such as in runoff and erosion studies at the field scale or lateral flow simulation in environmental and geotechnical engineering. An overview of the objectives of hillslope hydrological modelling is given, followed by a brief introduction of an exemplary comprehensive hillslope model, which simulates a series of hydrological processes such as interception, evapotranspiration, infiltration into the soil matrix and into macropores, lateral and vertical subsurface soil water flow both in the matrix and preferential flow paths, surface runoff and channel discharge. Several examples of this model are presented and discussed in order to determine the model's capabilities and limitations. Finally, conclusions about the limitations of detailed hillslope modelling are drawn and an outlook on the future prospects of hydrological models on the hillslope scale is given. The model presented performed reasonable calculations of Hortonian surface runoff and subsequent erosion processes, given detailed information of initial soil water content and soil hydraulic conditions. The vertical and lateral soil moisture dynamics were also represented quite well. However, the given examples of model applications show that quite detailed climatic and soil data are required to obtain satisfactory results. The limitations of detailed hillslope hydrological modelling arise from different points: difficulties in the representations of certain processes (e.g. surface crusting, unsaturated-saturated soil moisture flow, macropore flow), problems of small-scale variability

  13. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  14. Violation of detailed balance accelerates relaxation

    NASA Astrophysics Data System (ADS)

    Ichiki, Akihisa; Ohzeki, Masayuki

    2013-08-01

Recent studies have demonstrated accelerated convergence in Markov chain Monte Carlo methods implemented with systems that violate the detailed balance condition (DBC). However, this advantage of violating the DBC has not been confirmed in general. We investigate the effect of the absence of the DBC on convergence toward equilibrium. Surprisingly, it is shown that violating the DBC always makes the relaxation faster. Our result implies the existence of a kind of thermodynamic inequality that connects the nonequilibrium process relaxing toward a steady state with the relaxation process that has the same probability distribution as its equilibrium state.

  15. A detailed phylogeny for the Methanomicrobiales

    NASA Technical Reports Server (NTRS)

    Rouviere, P.; Mandelco, L.; Winker, S.; Woese, C. R.

    1992-01-01

    The small subunit rRNA sequence of twenty archaea, members of the Methanomicrobiales, permits a detailed phylogenetic tree to be inferred for the group. The tree confirms earlier studies, based on far fewer sequences, in showing the group to be divided into two major clusters, temporarily designated the "methanosarcina" group and the "methanogenium" group. The tree also defines phylogenetic relationships within these two groups, which in some cases do not agree with the phylogenetic relationships implied by current taxonomic names--a problem most acute for the genus Methanogenium and its relatives. The present phylogenetic characterization provides the basis for a consistent taxonomic restructuring of this major methanogenic taxon.

  16. Instrumentation for detailed bridge-scour measurements

    USGS Publications Warehouse

    Landers, Mark N.; Mueller, David S.; Trent, Roy E.

    1993-01-01

    A portable instrumentation system is being developed to obtain channel bathymetry during floods for detailed bridge-scour measurements. Portable scour measuring systems have four components: sounding instrument, horizontal positioning instrument, deployment mechanisms, and data storage device. The sounding instrument will be a digital fathometer. Horizontal position will be measured using a range-azimuth based hydrographic survey system. The deployment mechanism designed for this system is a remote-controlled boat using a small waterplane area, twin-hull design. An on-board computer and radio will monitor the vessel instrumentation, record measured data, and telemeter data to shore.

  17. Details of extensive movements by Minnesota wolves

    USGS Publications Warehouse

    Merrill, S.B.; Mech, L.D.

    2000-01-01

    We used VHF, GPS, and satellite radiocollars to study details of long distance movements by four Minnesota wolves (Canis lupus). Number of locations during our tracking ranged from 14 to 274. Farthest distances reached ranged from 183-494 km, and minimum distances traveled (sums of line segments) ranged from 490-4251 km. Numbers of times wolves crossed state, provincial or interstate highways ranged from 1 to 215. All four of the wolves returned to or near their natal territories after up to 179 d and at least two left again.
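The "minimum distance traveled (sums of line segments)" statistic above can be computed from a sequence of radio fixes as a sum of great-circle segments. The sketch below uses the standard haversine formula; the track coordinates are hypothetical, not actual wolf telemetry data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def minimum_distance_km(fixes):
    """Sum of straight segments between successive fixes; a lower
    bound on the true path length, as in the abstract."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Hypothetical track of (lat, lon) fixes in northern Minnesota,
# ending back near the start (a wolf returning to its natal territory).
track = [(47.5, -93.0), (47.8, -92.6), (48.2, -92.9), (47.5, -93.0)]
total = minimum_distance_km(track)
```

Because only straight segments between fixes are summed, sparser location schedules (14 vs. 274 fixes in the study) systematically underestimate the distance actually traveled.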

  18. Accurate calculation of field and carrier distributions in doped semiconductors

    NASA Astrophysics Data System (ADS)

    Yang, Wenji; Tang, Jianping; Yu, Hongchun; Wang, Yanguo

    2012-06-01

We use the numerical squeezing algorithm (NSA) combined with the shooting method to accurately calculate the built-in fields and carrier distributions in doped silicon films (SFs) in the micron and sub-micron thickness range; results are presented in graphical form for a variety of doping profiles under different boundary conditions. As a complementary approach, we also present the methods and results of the inverse problem (IVP): finding the doping profile in the SFs for a given field distribution. The solution of the IVP provides an approach to arbitrarily design the field distribution in SFs, which is very important for low-dimensional (LD) systems and device design. Furthermore, the solution of the IVP is both direct and straightforward for one-, two-, and three-dimensional semiconductor systems. With current efforts focused on LD physics, knowledge of the details of the field and carrier distributions in LD systems will facilitate further research on other aspects, and hence the current work provides a platform for that research.
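The shooting method mentioned above can be illustrated on a generic two-point boundary value problem. This is only the textbook technique, not the authors' NSA or their semiconductor equations: guess the unknown initial slope, integrate the ODE as an initial value problem, and adjust the guess (here by bisection) until the far boundary condition is met.

```python
def shoot(slope, f, n=2000):
    """Integrate u'' = f(x) from x=0 to x=1 with u(0)=0, u'(0)=slope
    using midpoint (RK2) steps; return u(1)."""
    h = 1.0 / n
    u, v = 0.0, slope
    for i in range(n):
        x = i * h
        v_mid = v + 0.5 * h * f(x)       # slope at the interval midpoint
        u += h * v_mid
        v += h * f(x + 0.5 * h)
    return u

def solve_bvp(f, target, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisection on the unknown initial slope so that u(1) == target."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (shoot(mid, f) - target) * (shoot(lo, f) - target) <= 0:
            hi = mid                     # sign change in [lo, mid]
        else:
            lo = mid                     # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: u'' = 6x with u(0)=0, u(1)=1 has exact solution u = x**3,
# so the unknown initial slope u'(0) is exactly 0.
slope = solve_bvp(lambda x: 6.0 * x, target=1.0)
```

In the paper's setting the right-hand side would instead couple the electrostatic potential to the carrier densities, but the shoot-and-correct structure is the same.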

  19. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest both for understanding biological flight mechanics and for their application to Micro Air Vehicles (MAVs), which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges, we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  20. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, the eyelids, and between the superficial soft tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data. PMID:26355331

  1. Automatically Generated, Anatomically Accurate Meshes for Cardiac Electrophysiology Problems

    PubMed Central

    Prassl, Anton J.; Kickinger, Ferdinand; Ahammer, Helmut; Grau, Vicente; Schneider, Jürgen E.; Hofer, Ernst; Vigmond, Edward J.; Trayanova, Natalia A.

    2010-01-01

    Significant advancements in imaging technology and the dramatic increase in computer power over the last few years broke the ground for the construction of anatomically realistic models of the heart at an unprecedented level of detail. To effectively make use of high-resolution imaging datasets for modeling purposes, the imaged objects have to be discretized. This procedure is trivial for structured grids. However, to develop generally applicable heart models, unstructured grids are much preferable. In this study, a novel image-based unstructured mesh generation technique is proposed. It uses the dual mesh of an octree applied directly to segmented 3-D image stacks. The method produces conformal, boundary-fitted, and hexahedra-dominant meshes. The algorithm operates fully automatically with no requirements for interactivity and generates accurate volume-preserving representations of arbitrarily complex geometries with smooth surfaces. The method is very well suited for cardiac electrophysiological simulations. In the myocardium, the algorithm minimizes variations in element size, whereas in the surrounding medium, the element size is grown larger with the distance to the myocardial surfaces to reduce the computational burden. The numerical feasibility of the approach is demonstrated by discretizing and solving the monodomain and bidomain equations on the generated grids for two preparations of high experimental relevance, a left ventricular wedge preparation, and a papillary muscle. PMID:19203877

  2. Super Resolution Reconstruction Based on Adaptive Detail Enhancement for ZY-3 Satellite Images

    NASA Astrophysics Data System (ADS)

    Zhu, Hong; Song, Weidong; Tan, Hai; Wang, Jingxue; Jia, Di

    2016-06-01

Super-resolution reconstruction of sequence remote sensing images is a technology that processes multiple low-resolution satellite remote sensing images with complementary information to obtain one or more high-resolution images. The cores of the technology are high-precision matching between images and the extraction and fusion of fine detail information. This paper puts forward a new image super-resolution model framework that can adaptively enhance the details of the reconstructed image at multiple scales. First, the sequence images were decomposed by a bilateral filter into a detail layer containing the fine detail information and a smooth layer containing the large-scale edge information. Then, a texture detail enhancement function was constructed to boost the magnitude of the medium and small details. Next, the non-redundant information for the super-resolution reconstruction was obtained by differential processing of the detail layer, and the initial super-resolution result was achieved by interpolating and fusing the non-redundant information with the smooth layer. At last, the final reconstructed image was acquired by applying a local optimization model to the initial result. Experiments on ZY-3 satellite images of the same and different phases show that the proposed method improves both the information entropy and the image detail evaluation standard compared with the interpolation method, the traditional TV algorithm and the MAP algorithm, indicating that our method can clearly highlight image details and retains more ground texture information. A large number of experimental results reveal that the proposed method is robust and universal for different kinds of ZY-3 satellite images.
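The decompose-enhance-recombine step at the heart of the method can be sketched in a few lines. Here a simple box filter stands in for the paper's bilateral filter, and the gain and kernel size are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def box_smooth(img, k=3):
    """Box-filter stand-in for the paper's bilateral filter: the
    output is the smooth layer, img - output is the detail layer."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance_details(img, gain=1.5, k=3):
    """Split the image into smooth + detail layers, amplify the
    medium/small-scale details, and recombine."""
    smooth = box_smooth(img, k)
    detail = img - smooth
    return smooth + gain * detail

# A tiny test image: a bright block whose edges carry the detail.
img = np.array([[0, 0, 0, 0],
                [0, 10, 10, 0],
                [0, 10, 10, 0],
                [0, 0, 0, 0]], dtype=float)
out = enhance_details(img)
```

A gain of 1.0 reproduces the input; gains above 1.0 sharpen edges, which is the effect the paper's (more sophisticated, adaptive) enhancement function exploits.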

  3. Chord, Tie Bar & Crossbracing Joint Detail in Plan; Crossbracing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Chord, Tie Bar & Crossbracing Joint Detail in Plan; Crossbracing Center Joint Detail in Plan; Chord Joining Detail in Plan & Elevation; Chord, Panel Post, Tie Bar, & Diagonal Brace Joint Detail; Crossbracing Center Joint Detail in Section; Chord, Panel Post, Tie Bar & Horizontal Brace Joint Detail - Narrows Bridge, Spanning Sugar Creek at Old County Road 280 East, Marshall, Parke County, IN

  4. [Teacher Referral Information and Statistical Information Forms.

    ERIC Educational Resources Information Center

    Short, N. J.

This rating information form, used to refer children to the PIC program, elicits information concerning the child's emotional, cognitive, and personality development. See TM 001 111 for details of the program in which it is used. (DLG)

  5. Detailed balance of the Feynman micromotor

    NASA Astrophysics Data System (ADS)

    Abbott, Derek; Davis, Bruce R.; Parrondo, Juan M. R.

    1999-09-01

One intriguing implication of micromotors is that they can be powered by rectifying non-equilibrium thermal fluctuations or mechanical vibrations via the so-called Feynman micromotor. An example of mechanical rectification is found in the batteryless wristwatch. The original concept was described as early as 1912 by Smoluchowski and was later revisited in 1963 by Feynman, in the context of rectifying thermal fluctuations to obtain useful motion. It has been shown that, although rectification is impossible at equilibrium, it is possible for the Feynman micromotor to perform work under non-equilibrium conditions. These concepts can now be realized with MEMS technology and may have exciting implications in biomedicine, where the Feynman micromotor can be used to power a smart pill, for example. Previously, Feynman's analysis of the motor's efficiency was shown to be flawed by Parrondo and Espanol. We now show there are further problems in Feynman's treatment of detailed balance. In order to design and understand this device correctly, the equations of detailed balance must be found. Feynman's approach was to use probabilities based on energies, and we show that this is problematic. In this paper, we demonstrate corrected equations using level-crossing probabilities instead. A potential application of the Feynman micromotor is a batteryless nanopump consisting of a small MEMS chip that adheres to the skin of a patient and dispenses nanoliter quantities of medication. Either mechanical or thermal rectification via a Feynman micromotor, as the power source, is open for possible investigation.

  6. HUBBLE CAPTURES DETAILED IMAGE OF URANUS' ATMOSPHERE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Hubble Space Telescope has peered deep into Uranus' atmosphere to see clear and hazy layers created by a mixture of gases. Using infrared filters, Hubble captured detailed features of three layers of Uranus' atmosphere. Hubble's images are different from the ones taken by the Voyager 2 spacecraft, which flew by Uranus 10 years ago. Those images - not taken in infrared light - showed a greenish-blue disk with very little detail. The infrared image allows astronomers to probe the structure of Uranus' atmosphere, which consists of mostly hydrogen with traces of methane. The red around the planet's edge represents a very thin haze at a high altitude. The haze is so thin that it can only be seen by looking at the edges of the disk, and is similar to looking at the edge of a soap bubble. The yellow near the bottom of Uranus is another hazy layer. The deepest layer, the blue near the top of Uranus, shows a clearer atmosphere. Image processing has been used to brighten the rings around Uranus so that astronomers can study their structure. In reality, the rings are as dark as black lava or charcoal. This false color picture was assembled from several exposures taken July 3, 1995 by the Wide Field Planetary Camera-2. CREDIT: Erich Karkoschka (University of Arizona Lunar and Planetary Lab) and NASA

  7. Detailed Chemical Kinetic Modeling of Hydrazine Decomposition

    NASA Technical Reports Server (NTRS)

    Meagher, Nancy E.; Bates, Kami R.

    2000-01-01

The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996; however, many of the elementary steps included had outdated rate expressions, and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.

  8. Thirty Meter Telescope Detailed Science Case: 2015

    NASA Astrophysics Data System (ADS)

    Skidmore, Warren; TMT International Science Development Teams; Science Advisory Committee, TMT

    2015-12-01

    The TMT Detailed Science Case describes the transformational science that the Thirty Meter Telescope will enable. Planned to begin science operations in 2024, TMT will open up opportunities for revolutionary discoveries in essentially every field of astronomy, astrophysics and cosmology, seeing much fainter objects much more clearly than existing telescopes. With this capability, TMT's science agenda fills all of space and time, from nearby comets and asteroids, to exoplanets, to the most distant galaxies, and all the way back to the very first sources of light in the universe. More than 150 astronomers from within the TMT partnership and beyond offered input in compiling the new 2015 Detailed Science Case. The contributing astronomers represent the entire TMT partnership, including the California Institute of Technology (Caltech), the Indian Institute of Astrophysics (IIA), the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), the National Astronomical Observatory of Japan (NAOJ), the University of California, the Association of Canadian Universities for Research in Astronomy (ACURA) and US associate partner, the Association of Universities for Research in Astronomy (AURA). Cover image: artist's rendition of the TMT International Observatory on Mauna Kea opening in the late evening before beginning operations.

  9. An exposure-response database for detailed toxicity data.

    PubMed

    Woodall, George M

    2008-11-15

    Risk assessment for human health effects often depends on evaluation of toxicological literature from a variety of sources. Risk assessors have limited resources for obtaining raw data, performing follow-on analyses or initiating new studies. These constraints must be balanced against a need to improve scientific credibility through improved statistical and analytical methods that optimize the use of available information. Computerized databases are used in toxicological risk assessment both for storing data and performing predictive analyses. Many systems provide primarily either bibliographic information or summary factual data from toxicological studies; few provide adequate information to allow application of dose-response models. The Exposure-Response database (ERDB) described here fills this gap by allowing entry of sufficiently detailed information on experimental design and results for each study, while limiting data entry to the most relevant information. ERDB was designed to contain information from the open literature to support dose-response assessment and allow a high level of automation in performance of various types of dose-response analyses. Specifically, ERDB supports emerging analytical approaches for dose-response assessment, while accommodating the diverse nature of published literature. Exposure and response data are accessible in a relational multi-table design, with closely controlled standard fields for recording values and free-text fields to describe unique aspects of the study. Additional comparative analyses are made possible through summary tables and graphic representations of the data contained within ERDB. PMID:18671995

  10. An exposure-response database for detailed toxicity data

    SciTech Connect

    Woodall, George M.

    2008-11-15

    Risk assessment for human health effects often depends on evaluation of toxicological literature from a variety of sources. Risk assessors have limited resources for obtaining raw data, performing follow-on analyses or initiating new studies. These constraints must be balanced against a need to improve scientific credibility through improved statistical and analytical methods that optimize the use of available information. Computerized databases are used in toxicological risk assessment both for storing data and performing predictive analyses. Many systems provide primarily either bibliographic information or summary factual data from toxicological studies; few provide adequate information to allow application of dose-response models. The Exposure-Response database (ERDB) described here fills this gap by allowing entry of sufficiently detailed information on experimental design and results for each study, while limiting data entry to the most relevant information. ERDB was designed to contain information from the open literature to support dose-response assessment and allow a high level of automation in performance of various types of dose-response analyses. Specifically, ERDB supports emerging analytical approaches for dose-response assessment, while accommodating the diverse nature of published literature. Exposure and response data are accessible in a relational multi-table design, with closely controlled standard fields for recording values and free-text fields to describe unique aspects of the study. Additional comparative analyses are made possible through summary tables and graphic representations of the data contained within ERDB.
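
    The relational multi-table layout described above, controlled standard fields alongside free-text notes fields, can be sketched as a toy schema. The table names, column names, and sample rows below are illustrative assumptions, not the actual ERDB design:

```python
import sqlite3

# Toy two-table exposure/response layout: controlled numeric fields plus a
# free-text notes column per table. Illustrative only, not the ERDB schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exposure (
    exposure_id INTEGER PRIMARY KEY,
    study_ref   TEXT NOT NULL,
    agent       TEXT NOT NULL,
    dose_mg_kg  REAL NOT NULL,
    duration_h  REAL NOT NULL,
    notes       TEXT
);
CREATE TABLE response (
    response_id INTEGER PRIMARY KEY,
    exposure_id INTEGER NOT NULL REFERENCES exposure(exposure_id),
    endpoint    TEXT NOT NULL,
    incidence   REAL NOT NULL,  -- fraction of subjects affected
    notes       TEXT
);
""")
conn.execute("INSERT INTO exposure VALUES (1, 'example-study', 'benzene', 10.0, 6.0, NULL)")
conn.execute("INSERT INTO response VALUES (1, 1, 'liver lesion', 0.25, NULL)")
rows = conn.execute("""
    SELECT e.agent, e.dose_mg_kg, r.endpoint, r.incidence
    FROM exposure e JOIN response r USING (exposure_id)
""").fetchall()
```

    Joining the exposure and response tables on a shared key is what would let automated dose-response analyses pull paired (dose, incidence) records directly from the database.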

  11. Seismic Waves, 4th order accurate

    2013-08-16

    SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.

  12. Seismic Waves, 4th order accurate

    SciTech Connect

    2013-08-16

    SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.
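
    The kind of computation SW4 performs can be illustrated with a much-reduced analogue: a 1-D scalar wave equation advanced with second-order central differences. SW4 itself uses fourth-order schemes on 3-D grids with topography; everything below (grid size, CFL number, initial pulse) is a toy illustration, not SW4's numerics:

```python
import math

def step_wave(u_prev, u_curr, c, dx, dt):
    """One leapfrog step of u_tt = c^2 * u_xx using 2nd-order central
    differences, with fixed (zero) boundaries."""
    r2 = (c * dt / dx) ** 2
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next

# Gaussian initial displacement, zero initial velocity; CFL = c*dt/dx = 0.4.
n, dx, dt, c = 201, 1.0, 0.4, 1.0
u_curr = [math.exp(-((i - n // 2) * dx) ** 2 / 20.0) for i in range(n)]
u_prev = u_curr[:]
for _ in range(100):
    u_prev, u_curr = u_curr, step_wave(u_prev, u_curr, c, dx, dt)
```

    With zero initial velocity the pulse splits into two half-amplitude waves traveling in opposite directions, the 1-D analogue of an expanding wavefront from a point source.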

  13. Provenance management in Swift with implementation details.

    SciTech Connect

    Gadelha, L. M. R; Clifford, B.; Mattoso, M.; Wilde, M.; Foster, I.

    2011-04-01

    The Swift parallel scripting language allows for the specification, execution and analysis of large-scale computations in parallel and distributed environments. It incorporates a data model for recording and querying provenance information. In this article we describe these capabilities and evaluate interoperability with other systems through the use of the Open Provenance Model. We describe Swift's provenance data model and compare it to the Open Provenance Model. We also describe and evaluate activities performed within the Third Provenance Challenge, which consisted of implementing a specific scientific workflow, capturing and recording provenance information of its execution, performing provenance queries, and exchanging provenance information with other systems. Finally, we propose improvements to both the Open Provenance Model and Swift's provenance system.

  14. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  15. An accurate dynamical electron diffraction algorithm for reflection high-energy electron diffraction

    NASA Astrophysics Data System (ADS)

    Huang, J.; Cai, C. Y.; Lv, C. L.; Zhou, G. W.; Wang, Y. G.

    2015-12-01

    The conventional multislice (CMS) method, one of the most popular dynamical electron diffraction calculation procedures in transmission electron microscopy, was introduced to calculate reflection high-energy electron diffraction (RHEED) as it is well adapted to deal with the deviations from the periodicity in the direction parallel to the surface. However, in the present work, we show that the CMS method is no longer sufficiently accurate for simulating RHEED at accelerating voltages of 3-100 kV because of the high-energy approximation. An accurate multislice (AMS) method can be an alternative for more accurate RHEED calculations with reasonable computing time. A detailed comparison of the numerical calculation of the AMS method and the CMS method is carried out with respect to different accelerating voltages, surface structure models, Debye-Waller factors and glancing angles.

  16. Effects of Trainer Expressiveness, Seductive Details, and Trainee Goal Orientation on Training Outcomes

    ERIC Educational Resources Information Center

    Towler, Annette

    2009-01-01

    This study focuses on trainer expressiveness and trainee mastery orientation within the context of the seductive details effect. The seductive details effect refers to inclusion of "highly interesting and entertaining information that is only tangentially related to the topic" (Harp & Mayer, 1998, p. 1). One hundred thirty-two participants…

  17. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
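
    The Hoeffding-bound split test at the core of VFDT-style learners is compact enough to state directly. This sketch shows only the generic test (range R, confidence 1 - delta, n observations), not the uncertain-data extensions or UNB leaf classifiers that uVFDTc adds:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that, with probability 1 - delta, the true mean of a
    variable spanning value_range differs from the empirical mean of n
    i.i.d. observations by less than epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def can_split(best_gain, second_gain, value_range, delta, n):
    """VFDT-style split decision: commit to the best attribute once its
    observed advantage over the runner-up exceeds the Hoeffding bound."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)
```

    Because the bound shrinks as n grows, a leaf defers splitting until enough stream examples have arrived to make the best attribute statistically distinguishable from the second best.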

  18. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.

  19. Fine Details of the Icy Surface of Ganymede

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Dramatic view of fine details in ice hills and valleys in an unnamed region on Jupiter's moon Ganymede. North is to the top of the picture and the sun illuminates the surface from the left. The finest details that can be discerned in this picture are only 11 meters across (similar to the size of an average house), some 2000 times better than previous images of this region. The bright areas in the left hand version are the sides of hills facing the sun; the dark areas are shadows. In the right hand version the processing has been changed to bring out details in the shadowed regions that are illuminated by the bright hillsides. The brightness of some of the hillsides is so high that the picture elements 'spill over' down the columns of the picture. The image was taken on June 28, 1996 from a distance of about 1000 kilometers. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  20. Structural details, pathways, and energetics of unfolding apomyoglobin.

    PubMed

    Onufriev, Alexey; Case, David A; Bashford, Donald

    2003-01-17

    Protein folding is often difficult to characterize experimentally because of the transience of intermediate states, and the complexity of the protein-solvent system. Atomistic simulations, which could provide more detailed information, have had to employ highly simplified models or high temperatures, to cope with the long time scales of unfolding; direct simulation of folding is even more problematic. We report a fully atomistic simulation of the acid-induced unfolding of apomyoglobin in which the protonation of acidic side-chains to simulate low pH is sufficient to induce unfolding at room temperature with no added biasing forces or other unusual conditions; and the trajectory is validated by comparison to experimental characterization of intermediate states. Novel insights provided by their analysis include: characterization of a dry swollen globule state forming a barrier to initial unfolding or final folding; observation of cooperativity in secondary and tertiary structure formation and its explanation in terms of dielectric environments; and structural details of the intermediate and the completely unfolded states. These insights involve time scales and levels of structural detail that are presently beyond the range of experiment, but come within reach through the simulation methods described here. An implicit solvation model is used to analyze the energetics of protein folding at various pH and ionic strength values, and a reasonable estimate of folding free energy is obtained. Electrostatic interactions are found to disfavor folding. PMID:12498802

  1. A Look Inside: MRI Shows the Detail

    ERIC Educational Resources Information Center

    Gosman, Derek; Rose, Mary Annette

    2015-01-01

    Understanding the advantages, risks, and financial costs of medical technology is one way that technologically literate citizens can make better-informed decisions regarding their health and medical care. A cascade of advancements in medical imaging technologies (Ulmer & Jansen 2010) offers an exciting backdrop from which to help students…

  2. Generation and Memory for Contextual Detail

    ERIC Educational Resources Information Center

    Mulligan, Neil W.

    2004-01-01

    Generation enhances item memory but may not enhance other aspects of memory. In 12 experiments, the author investigated the effect of generation on context memory, motivated in part by the hypothesis that generation produces a trade-off in encoding item and contextual information. Participants generated some study words (e.g., hot-___) and read…

  3. Interactive NCORP Map Details Community Research Sites | Division of Cancer Prevention

    Cancer.gov

    An interactive map of the NCI Community Oncology Research Program (NCORP) with detailed information on hundreds of community sites that take part in clinical trials is available on the NCORP website.

  4. Emplacement of Long Lava Flows: Detailed Topography of the Carrizozo Basalt Lava Flow, New Mexico

    NASA Technical Reports Server (NTRS)

    Zimbelman, J. R; Johnston, A. K.

    2000-01-01

    The Carrizozo flow in south-central New Mexico was examined to obtain detailed topography for a long basaltic lava flow. This information will be helpful in evaluating emplacement models for long lava flows.

  5. Detailed assays conducted on Vietnamese crude oils

    SciTech Connect

    Du, P.Q. )

    1990-07-16

    More oil property data, in the form of recent crude oil assays, have been made available for two Vietnamese crude oils, Bach Ho (White Tiger) and Dai Hung (Big Bear). Crude oil data presented earlier gave limited properties of the crudes, which are from the Miocene formations. Further analyses have been conducted on Bach Ho crude from the Oligocene formations. Production from Oligocene is far more representative of the oils produced from the Bach Ho field and marketed worldwide. Currently, Bach Ho is the only producing field. Dai Hung is expected to be in production during the next few years. Bach Ho is currently producing at the rate of 20,000 b/d. That figure is projected to grow to 100,000 b/d by 1992 and to 120,000 b/d by 1995. Detailed assays of both crude oils are presented.

  6. Most Detailed Image of the Crab Nebula

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This new Hubble image -- one among the largest ever produced with the Earth-orbiting observatory -- shows the most detailed view of the entire Crab Nebula made so far. The Crab is arguably the single most interesting object, as well as one of the most studied, in all of astronomy. The image is the largest ever taken with Hubble's WFPC2 workhorse camera.

    The Crab Nebula is one of the most intricately structured and highly dynamical objects ever observed. The new Hubble image of the Crab was assembled from 24 individual exposures taken with the NASA/ESA Hubble Space Telescope and is the highest resolution image of the entire Crab Nebula ever made.

  7. Detailed mechanism for oxidation of benzene

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1990-01-01

    A detailed mechanism for the oxidation of benzene is presented and used to compute experimentally obtained concentration profiles and ignition delay times over a wide range of equivalence ratio and temperature. The computed results agree qualitatively with all the experimental trends. Quantitative agreement is obtained with several of the composition profiles and for the temperature dependence of the ignition delay times. There are indications, however, that some important reactions are as yet undiscovered in this mechanism. Recent literature expressions have been used for the rate coefficients of most important reactions, except for some involving phenol. The discrepancy between the phenol pyrolysis rate coefficient used in this work and a recent literature expression remains to be explained.

  8. Report Details Solar Radiation Alert and Recommendations

    NASA Astrophysics Data System (ADS)

    Staedter, Tracy

    2006-06-01

    High-energy particles from the Sun and from regions beyond the solar system constantly bombard Earth. Thanks to the planet's atmosphere and magnetic field, cosmic radiation is not a significant threat to those rooted on terra firma. But airline crew and passengers flying at high altitudes, or over the poles where the Earth's magnetic field provides no protection, are particularly vulnerable to unpredictable flares on the Sun's surface that launch streams of sub-atomic particles toward Earth. The report, "Solar Radiation Alert System," published by the Federal Aviation Administration (FAA) and the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado, Boulder, in July 2005 (www.faa.gov/library/reports/medical/oamtechreports/2000s/media/0514.pdf) details an alert system designed to estimate the ionizing radiation at aircraft flight altitudes and, depending on the resulting dose rate, issue a warning.

  9. Optomechanical details in injection-molded assemblies

    NASA Astrophysics Data System (ADS)

    Hebert, Raymond T.

    1995-12-01

    With the advent of low-cost electro-optic components such as LEDs, laser diodes and CCD imaging devices, the cost and performance demands now fall upon the optical subsystems in order to achieve realistic marketing targets for many emerging commercial and consumer products. One of the many benefits of injection-molded plastic optics is the diversity of features that are available to the design team. Once designed and incorporated into the tooling, many features are virtually free in high-volume production. These features can include mechanical details as well as optical functions. Registration features can be included for precisely positioning optical elements to one another or to other assemblies such as printed circuit boards or housings. Snaps, compression features, spring-loading elements, standoffs, self-tapping screws or ultrasonically weldable features can greatly facilitate ease of assembly.

  10. Picornavirus uncoating intermediate captured in atomic detail

    PubMed Central

    Ren, Jingshan; Wang, Xiangxi; Hu, Zhongyu; Gao, Qiang; Sun, Yao; Li, Xuemei; Porta, Claudine; Walter, Thomas S.; Gilbert, Robert J.; Zhao, Yuguang; Axford, Danny; Williams, Mark; McAuley, Katherine; Rowlands, David J.; Yin, Weidong; Wang, Junzhi; Stuart, David I.; Rao, Zihe; Fry, Elizabeth E.

    2013-01-01

    It remains largely mysterious how the genomes of non-enveloped eukaryotic viruses are transferred across a membrane into the host cell. Picornaviruses are simple models for such viruses, and initiate this uncoating process through particle expansion, which reveals channels through which internal capsid proteins and the viral genome presumably exit the particle, although this has not been clearly seen until now. Here we present the atomic structure of an uncoating intermediate for the major human picornavirus pathogen CAV16, which reveals VP1 partly extruded from the capsid, poised to embed in the host membrane. Together with previous low-resolution results, we are able to propose a detailed hypothesis for the ordered egress of the internal proteins, using two distinct sets of channels through the capsid, and suggest a structural link to the condensed RNA within the particle, which may be involved in triggering RNA release. PMID:23728514

  11. Capture barrier distributions: Some insights and details

    SciTech Connect

    Rowley, N.; Grar, N.; Trotta, M.

    2007-10-15

    The 'experimental barrier distribution' provides a parameter-free representation of experimental heavy-ion capture cross sections that highlights the effects of entrance-channel couplings. Its relation to the s-wave transmission is discussed, and in particular it is shown how the full capture cross section can be generated from an l=0 coupled-channels calculation. Furthermore, it is shown how this transmission can be simply exploited in calculations of quasifission and evaporation-residue cross sections. The system {sup 48}Ca+{sup 154}Sm is studied in detail. A calculation of the compound-nucleus spin distribution reveals a possible energy dependence of barrier weights due to polarization arising from target and projectile quadrupole phonon states; this effect also gives rise to an entrance-channel 'extra-push'.

  12. Hubble Captures Detailed Image of Uranus' Atmosphere

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Hubble Space Telescope has peered deep into Uranus' atmosphere to see clear and hazy layers created by a mixture of gases. Using infrared filters, Hubble captured detailed features of three layers of Uranus' atmosphere.

    Hubble's images are different from the ones taken by the Voyager 2 spacecraft, which flew by Uranus 10 years ago. Those images - not taken in infrared light - showed a greenish-blue disk with very little detail.

    The infrared image allows astronomers to probe the structure of Uranus' atmosphere, which consists of mostly hydrogen with traces of methane. The red around the planet's edge represents a very thin haze at a high altitude. The haze is so thin that it can only be seen by looking at the edges of the disk, and is similar to looking at the edge of a soap bubble. The yellow near the bottom of Uranus is another hazy layer. The deepest layer, the blue near the top of Uranus, shows a clearer atmosphere.

    Image processing has been used to brighten the rings around Uranus so that astronomers can study their structure. In reality, the rings are as dark as black lava or charcoal.

    This false color picture was assembled from several exposures taken July 3, 1995 by the Wide Field Planetary Camera-2.

    The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science.

    This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/

  13. Detailed evaluation of the analytical resolution function

    NASA Astrophysics Data System (ADS)

    Wittmaack, K.

    2003-01-01

    The analytical resolution or response function (ARF) suggested by Dowsett et al. for describing measured secondary ion mass spectrometry (SIMS) depth profiles of delta doping distributions in solids was analysed with the aim of identifying the relevance and the physical meaning of the upslope length λu and the Gaussian broadening parameter σ. It was found that it is difficult to determine the upslope length safely as long as λu/σ < 0.3. For an accurate determination of λu it will usually be necessary to measure the profile of (ideal) delta markers over four orders of magnitude or more. Measured delta profiles with very sharp leading edges as well as delta profiles calculated on the basis of the diffusion approximation of atomic mixing were compared with the ARF. Irrespective of the true value of λu, the peak height of the ARF was found to be too high by up to 12% and the width too small. The results suggest directions for improving the ARF.
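
    The shape under discussion, an exponential leading edge of length λu and an exponential trailing edge broadened by a Gaussian of width σ, can be evaluated numerically. This sketch follows the general double-exponential-plus-Gaussian form of the Dowsett response function, but it is a schematic numerical convolution with illustrative parameters, not the exact analytic ARF:

```python
import math

def arf(z, lam_up, lam_down, sigma, dz=0.02, span=10.0):
    """Schematic response at depth z: a double exponential (rise length
    lam_up, decay length lam_down) convolved numerically with a normalized
    Gaussian of width sigma. Normalization of the result is schematic."""
    total = 0.0
    steps = int(2.0 * span / dz)
    for i in range(steps + 1):
        x = -span + i * dz
        expo = math.exp(x / lam_up) if x < 0.0 else math.exp(-x / lam_down)
        gauss = math.exp(-((z - x) ** 2) / (2.0 * sigma ** 2))
        total += expo * gauss * dz
    return total / (math.sqrt(2.0 * math.pi) * sigma)
```

    Evaluating such a profile over several decades of depth is what makes the λu/σ interplay visible: for small λu/σ the Gaussian dominates the leading edge and the upslope length becomes hard to recover, consistent with the difficulty described above.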

  14. Retinal connectomics: towards complete, accurate networks.

    PubMed

    Marc, Robert E; Jones, Bryan W; Watt, Carl B; Anderson, James R; Sigulinsky, Crystal; Lauritzen, Scott

    2013-11-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10(12)-10(15) byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  15. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10(12)-10(15) byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  16. Expert systems should be more accurate than human experts - Evaluation procedures from human judgment and decisionmaking

    NASA Technical Reports Server (NTRS)

    Levi, Keith

    1989-01-01

    Two procedures for the evaluation of the performance of expert systems are illustrated: one procedure evaluates predictive accuracy; the other procedure is complementary in that it uncovers the factors that contribute to predictive accuracy. Using these procedures, it is argued that expert systems should be more accurate than human experts in two senses. One sense is that expert systems must be more accurate to be cost-effective. Previous research is reviewed and original results are presented which show that simple statistical models typically perform better than human experts for the task of combining evidence from a given set of information sources. The results also suggest the second sense in which expert systems should be more accurate than human experts. They reveal that expert systems should share factors that contribute to human accuracy, but not factors that detract from human accuracy. Thus the thesis is that one should both require and expect systems to be more accurate than humans.

  17. Dialing Up Telecommunications Information.

    ERIC Educational Resources Information Center

    Bates, Mary Ellen

    1993-01-01

    Describes how to find accurate, current information about telecommunications industries, products and services, rates and tariffs, and regulatory information using electronic information resources available from the private and public sectors. A sidebar article provides contact information for producers and service providers. (KRN)

  18. Accurate, fully-automated NMR spectral profiling for metabolomics.

    PubMed

    Ravanbakhsh, Siamak; Liu, Philip; Bjorndahl, Trent C; Mandal, Rupasri; Grant, Jason R; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S

    2015-01-01

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving >50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of NMR in

  19. A data-management system for detailed areal interpretive data

    USGS Publications Warehouse

    Ferrigno, C.F.

    1986-01-01

    A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
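
    Abstracting from the description above, the block addressing could be sketched as follows. Only the 6-arc-second finest block size comes from the abstract; the function names and the aggregation factor are illustrative assumptions:

```python
def block_index(lat_deg, lon_deg, block_sec=6):
    """Map a coordinate to the 6-arc-second latitude-longitude block
    containing it (~0.01 sq mi per block, per the abstract).
    Returns integer (row, col) indices of the block."""
    sec_per_deg = 3600
    row = int((lat_deg * sec_per_deg) // block_sec)
    col = int((lon_deg * sec_per_deg) // block_sec)
    return row, col

def parent_block(row, col, factor=10):
    """Aggregate fine blocks into a coarser parent block by integer
    division, giving the coarser levels of detail of the hierarchy."""
    return row // factor, col // factor
```

    Storing data keyed by such (row, col) pairs at each level is one simple way to realize the "related latitude-longitude blocks" the abstract describes.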

  20. Towards a detailed soot model for internal combustion engines

    SciTech Connect

    Mosbach, Sebastian; Celnik, Matthew S.; Raj, Abhijeet; Kraft, Markus; Zhang, Hongzhi R.; Kubo, Shuichi; Kim, Kyoung-Oh

    2009-06-15

    In this work, we present a detailed model for the formation of soot in internal combustion engines describing not only bulk quantities such as soot mass, number density, volume fraction, and surface area but also the morphology and chemical composition of soot aggregates. The new model is based on the Stochastic Reactor Model (SRM) engine code, which uses detailed chemistry and takes into account convective heat transfer and turbulent mixing, and the soot formation is accounted for by SWEEP, a population balance solver based on a Monte Carlo method. In order to couple the gas-phase to the particulate phase, a detailed chemical kinetic mechanism describing the combustion of Primary Reference Fuels (PRFs) is extended to include small Polycyclic Aromatic Hydrocarbons (PAHs) such as pyrene, which function as soot precursor species for particle inception in the soot model. Apart from providing averaged quantities as functions of crank angle like soot mass, volume fraction, aggregate diameter, and the number of primary particles per aggregate for example, the integrated model also gives detailed information such as aggregate and primary particle size distribution functions. In addition, specifics about aggregate structure and composition, including C/H ratio and PAH ring count distributions, and images similar to those produced with Transmission Electron Microscopes (TEMs), can be obtained. The new model is applied to simulate an n-heptane fuelled Homogeneous Charge Compression Ignition (HCCI) engine which is operated at an equivalence ratio of 1.93. In-cylinder pressure and heat release predictions show satisfactory agreement with measurements. Furthermore, simulated aggregate size distributions as well as their time evolution are found to qualitatively agree with those obtained experimentally through snatch sampling. It is also observed both in the experiment as well as in the simulation that aggregates in the trapped residual gases play a vital role in the soot

  1. Onboard Autonomous Corrections for Accurate IRF Pointing.

    NASA Astrophysics Data System (ADS)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    Over the past decade, the Noise Equivalent Angle (NEA) of onboard attitude reference instruments has decreased from tens-of-arcseconds to the sub-arcsecond level. This improved performance is partly due to improved sensor technology with enhanced signal to noise ratios, partly due to improved processing electronics which allow for more sophisticated and faster signal processing. However, the main reason for the increased precision is the application of onboard autonomy, which apart from simple outlier rejection also allows for removal of "false positive" answers, and other "unexpected" noise sources, that otherwise would degrade the quality of the measurements (e.g. discrimination between signals caused by starlight and ionizing radiation). The utilization of autonomous signal processing has also provided the means for another onboard processing step, namely the autonomous recovery from lost in space, where the attitude instrument, without a priori knowledge, derives the absolute attitude, i.e. in IRF coordinates, within fractions of a second. Combined with precise orbital state or position data, the absolute attitude information opens up multiple ways to improve the mission performance, either by reducing operations costs, by increasing pointing accuracy, by reducing mission expendables, or by providing backup decision information in case of anomalies. The Advanced Stellar Compass (ASC) is a miniature, high-accuracy attitude instrument which features fully autonomous operations. The autonomy encompasses all direct steps from automatic health checkout at power-on, over fully automatic SEU and SEL handling and proton induced sparkle removal, to recovery from "lost in space", and optical disturbance detection and handling. But apart from these more obvious autonomy functions, the ASC also features functions to handle and remove the aforementioned residuals. These functions encompass diverse operators such as a full orbital state vector model with automatic cloud

  2. Accurate, low-cost 3D-models of gullies

    NASA Astrophysics Data System (ADS)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. The most severe form is gully erosion. Gullies often cut into agricultural farmland and can make an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D-models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we made varying series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, and the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values, which is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper the image of similar objects. MATLAB subdivides the video into image intervals. From each interval, the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single images. The program then inspects the first 20 images, saves the sharpest, and moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. Then, MeshLab was used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
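
    The frame-selection step described above (keep the sharpest frame of each interval, ranked by the sum of image derivatives) can be sketched in Python. The original script was MATLAB; the grayscale input, interval logic, and function names here are assumptions:

```python
import numpy as np

def sharpness(gray):
    """Sum of absolute image derivatives; higher means sharper."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx).sum() + np.abs(gy).sum()

def select_sharpest(frames, n_keep):
    """Split the frame list into n_keep contiguous intervals and
    keep the sharpest frame of each interval."""
    interval = max(1, len(frames) // n_keep)
    keep = []
    for start in range(0, len(frames), interval):
        chunk = frames[start:start + interval]
        keep.append(max(chunk, key=sharpness))
    return keep[:n_keep]
```

    A real pipeline would read frames with a video library (e.g. OpenCV) and pass the selected images on to the SfM reconstruction.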

  3. The Devil Is in the Details

    PubMed Central

    Jenkins, Ian; Vinetz, Joseph

    2009-01-01

    The approach to clinical conundrums by an expert clinician is revealed through presentation of an actual patient's case in an approach typical of morning report. Similar to patient care, sequential pieces of information are provided to the clinician who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant. PMID:19670363

  4. Description of Axial Detail for ROK Fuel

    SciTech Connect

    Trellue, Holly R; Galloway, Jack D

    2012-04-20

    For the purpose of NDA simulations of the ROK fuel assemblies, we have developed an axial burnup distribution to represent the pins themselves based on gamma scans of rods in the G23 assembly. For the purpose of modeling the G23 assembly (both at ORNL and LANL), the pin-by-pin burnup map as simulated by ROK is being assumed to represent the radial burnup distribution. However, both DA and NDA results indicate that this simulated estimate is not 100% correct. In particular, the burnup obtained from the axial gamma scan of 7 pins does not represent exactly the same 'average' pin burnup as the ROK simulation. Correction for this discrepancy is a goal of the well-characterized assembly task but will take time. For now, I have come up with a correlation for 26 axial points of the burnup as obtained by gamma scans of 7 different rods (C13, G01, G02, J11, K10, L02, and M04, neglecting K02 at this time) to the average burnup given by the simulation for each of the rods individually. The resulting fraction in each axial zone is then averaged over the 7 different rods so that it can represent every fuel pin in the assembly. The burnup in each of the 26 axial zones of rods in all ROK assemblies will then be directly adjusted using this fraction, which is given in Table 1. Note that the gamma scan data given by ROK for assembly G23 included a length of ~3686 mm, so the first 12 mm and the last 14 mm were ignored to give an actual rod length of ~366 cm. To represent assembly F02, in which no pin-by-pin burnup distribution is given by ROK, we must model it using infinitely-reflected geometry but can look at the effects of measuring in different axial zones by using intermediate burnup files (i.e. smaller burnups than 28 GWd/MTU) and determining which axial zone(s) each burnup represents. Details for assembly F02 are then given in Tables 2 and 3; this model has 44 total axial zones to represent the top meter in explicit detail in addition to the
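
    The zone-wise fraction described above (each rod's axial profile normalized to its own rod average, then averaged over the rods) might be computed as in this illustrative sketch; the function name and array layout are assumptions:

```python
import numpy as np

def axial_fractions(scans):
    """scans: (n_rods, n_zones) array of per-zone burnup values from
    the gamma scans. For each rod, express every axial zone as a
    fraction of that rod's average burnup, then average the fractions
    over all rods to get one profile for the whole assembly."""
    scans = np.asarray(scans, dtype=float)
    per_rod = scans / scans.mean(axis=1, keepdims=True)
    return per_rod.mean(axis=0)
```

    By construction the returned fractions average to 1, so multiplying a pin's average burnup by the fraction for a zone redistributes burnup axially without changing the pin total.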

  5. Are National HFC Inventory Reports Accurate?

    NASA Astrophysics Data System (ADS)

    Lunt, M. F.; Rigby, M. L.; Ganesan, A.; Manning, A.; O'Doherty, S.; Prinn, R. G.; Saito, T.; Harth, C. M.; Muhle, J.; Weiss, R. F.; Salameh, P.; Arnold, T.; Yokouchi, Y.; Krummel, P. B.; Steele, P.; Fraser, P. J.; Li, S.; Park, S.; Kim, J.; Reimann, S.; Vollmer, M. K.; Lunder, C. R.; Hermansen, O.; Schmidbauer, N.; Young, D.; Simmonds, P. G.

    2014-12-01

    Hydrofluorocarbons (HFCs) were introduced as replacements for ozone depleting chlorinated gases due to their negligible ozone depletion potential. As a result, these potent greenhouse gases are now rapidly increasing in atmospheric mole fraction. However, at present, less than 50% of HFC emissions, as inferred from models combined with atmospheric measurements (top-down methods), can be accounted for by the annual national reports to the United Nations Framework Convention on Climate Change (UNFCCC). There are at least two possible reasons for the discrepancy. Firstly, significant emissions could be originating from countries not required to report to the UNFCCC ("non-Annex 1" countries). Secondly, emissions reports themselves may be subject to inaccuracies. For example the HFC emission factors used in the 'bottom-up' calculation of emissions tend to be technology-specific (refrigeration, air conditioning etc.), but not tuned to the properties of individual HFCs. To provide a new top-down perspective, we inferred emissions using high frequency HFC measurements from the Advanced Global Atmospheric Gases Experiment (AGAGE) and the National Institute for Environmental Studies (NIES) networks. Global and regional emissions information was inferred from these measurements using a coupled Eulerian and Lagrangian system, based on NCAR's MOZART model and the UK Met Office NAME model. Uncertainties in this measurement and modelling framework were investigated using a hierarchical Bayesian inverse method. Global and regional emissions estimates for five of the major HFCs (HFC-134a, HFC-125, HFC-143a, HFC-32, HFC-152a) from 2004-2012 are presented. It was found that, when aggregated, the top-down estimates from Annex 1 countries agreed remarkably well with the reported emissions, suggesting the non-Annex 1 emissions make up the difference with the top-down global estimate. However, when these HFC species are viewed individually we find that emissions of HFC-134a are over

  6. How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?

    SciTech Connect

    Jones, A. Kyle; Ensor, Joe E.; Pasciak, Alexander S.

    2014-07-15

    Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone
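
    As a rough, hedged illustration of the kind of indirect "standard method" estimate compared in such studies, one might scale reference air kerma by an inverse-square distance correction and typical multiplicative factors. The default factor values and names below are generic textbook assumptions, not the values used in this study:

```python
def estimate_psd(ref_air_kerma_gy, d_ref_cm=15.0, d_skin_cm=15.0,
                 backscatter=1.3, table_factor=0.85, f_factor=1.06):
    """Illustrative indirect PSD estimate (Gy): reference air kerma,
    corrected by inverse-square from the reference point (d_ref_cm
    from the focal spot side) to the actual skin distance, times
    backscatter, table/pad attenuation, and an air-kerma-to-tissue
    conversion factor. All defaults are typical example values."""
    inverse_square = (d_ref_cm / d_skin_cm) ** 2
    return (ref_air_kerma_gy * inverse_square
            * backscatter * table_factor * f_factor)
```

    The single-distance simplification mentioned in the abstract corresponds to fixing d_skin_cm for all cases rather than using per-procedure geometry.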

  7. The effect of post-identification feedback, delay, and suspicion on accurate eyewitnesses.

    PubMed

    Quinlivan, Deah S; Neuschatz, Jeffrey S; Douglass, Amy Bradfield; Wells, Gary L; Wetmore, Stacy A

    2012-06-01

    We examined whether post-identification feedback and suspicion affect accurate eyewitnesses. Participants viewed a video event and then made a lineup decision from a target-present photo lineup. Regardless of accuracy, the experimenter either informed participants that they made a correct lineup decision or gave no information regarding their lineup decision. Immediately following the lineup decision or after a 1-week delay, a second experimenter gave some of the participants who received confirming feedback reason to be suspicious of the confirming feedback. Immediately after the confirming feedback, accurate witnesses did not demonstrate certainty inflation. However, after a delay accurate witnesses did demonstrate the certainty inflation typically associated with confirming feedback. The suspicion manipulation only affected participants' certainty when the confirming feedback created certainty inflation. The results lend support to the accessibility interpretation of the post-identification feedback effect and the erasure interpretation of the suspicion effect. PMID:22667810

  8. Measuring accurate body parameters of dressed humans with large-scale motion using a Kinect sensor.

    PubMed

    Xu, Huanghao; Yu, Yao; Zhou, Yu; Li, Yang; Du, Sidan

    2013-01-01

    Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods. PMID:24064597

  9. Detailed surveys of offshore Peru margin

    SciTech Connect

    Hussong, D.M.; Taylor, B.; Kulm, L.D.; Hilde, T.W.C.

    1986-07-01

    The complex and highly variable structure of the submarine continental margin of central Peru is revealed by geophysical surveys and geologic sampling completed in 1985. The surveys were conducted in preparation for deep scientific drilling to be undertaken by the Ocean Drilling Program in November-December 1986. More than 11,000 km² of sea floor were mapped using the SeaMARC II side-scan sonar and bathymetry system; 1500 km of multichannel seismic reflection profiles and 4000 km of single-channel seismic data were shot; and many coring, dredging, and heat-flow stations were obtained. The data permit construction of detailed three-dimensional geologic maps of the region. These maps show that the ancient metamorphic rocks of South America extend close to the trench axis and apparently have undergone a history of truncation and subsidence related to the subduction of the Nazca oceanic plate. Adjacent segments of the Peru forearc have dramatically different structure and appear to have had differing tectonic histories. The margin is disrupted by extensive (primarily tensional) faulting; the larger faults extend perpendicular to the strike of the trench and often serve as conduits for diapirs and mud volcanoes. Living chemosynthetic clams were dredged from 3800-m depth along one of these fault trends, suggesting that active fluid venting occurs at depth on the continental wall of the Peru Trench.

  10. Inverse sequential simulation: Performance and implementation details

    NASA Astrophysics Data System (ADS)

    Xu, Teng; Gómez-Hernández, J. Jaime

    2015-12-01

    For good groundwater flow and solute transport numerical modeling, it is important to characterize the formation properties. In this paper, we analyze the performance and important implementation details of a new approach for stochastic inverse modeling called inverse sequential simulation (iSS). This approach is capable of characterizing conductivity fields with heterogeneity patterns difficult to capture by standard multiGaussian-based inverse approaches. The method is based on the multivariate sequential simulation principle, but the covariances and cross-covariances used to compute the local conditional probability distributions are computed by simple co-kriging and are derived from an ensemble of conductivity and piezometric head fields, in a similar manner to the way experimental covariances are computed in ensemble Kalman filtering. A sensitivity analysis is performed on a synthetic aquifer regarding the number of members of the ensemble of realizations, the number of conditioning data, the number of piezometers at which piezometric heads are observed, and the number of nodes retained within the search neighborhood at the moment of computing the local conditional probabilities. The results show the importance of having a sufficiently large number of all of the mentioned parameters for the algorithm to properly characterize hydraulic conductivity fields with clear non-multiGaussian features.
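
    The ensemble-derived covariances mentioned above can be illustrated with a minimal sketch, assuming log-conductivity and simulated-head ensembles stored as one-row-per-realization arrays (the names and shapes are assumptions):

```python
import numpy as np

def ensemble_cross_cov(K, H):
    """K: (n_ens, n_cells) log-conductivity realizations.
       H: (n_ens, n_obs) simulated piezometric heads, one row per
       realization. Returns the (n_cells, n_obs) sample
       cross-covariance, the kind of quantity fed into the simple
       co-kriging step (analogous to EnKF sample covariances)."""
    Kp = K - K.mean(axis=0)   # remove ensemble means
    Hp = H - H.mean(axis=0)
    return Kp.T @ Hp / (K.shape[0] - 1)
```

    In the full method these covariances localize the conditional distributions used during the sequential simulation pass.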

  11. Detailed Chromospheric Activity Nature of KIC 9641031

    NASA Astrophysics Data System (ADS)

    Yoldaş, Ezgi; Dal, Hasan Ali

    2016-04-01

    This study concerns the KIC 9641031 eclipsing binary with a chromospherically active component. There are three types of variation in the light curves: geometrical variations due to eclipses, sinusoidal variations due to rotational modulation, and flares. Taking into account results obtained from observations in the Kepler Mission Database, we discuss the details of the chromospheric activity. The sinusoidal light variations due to rotational modulation and the flare events were modelled. 92 different data subsets separated using the analytic models were modelled separately to obtain the cool spot configuration. According to the model, there are two active regions separated by about 180° longitudinally between the latitudes of +50° and +100°. 240 flares were detected and their parameters computed. Using these parameters, the OPEA model was derived, in which the Plateau value was found to be 1.232±0.069 s, and the half-life parameter was found to be 2291.7 s. The flare frequency N1 was found to be 0.41632 h⁻¹, while the flare frequency N2 was found to be 0.00027. Considering these parameters together with the orbital period variations demonstrates that the period variations depend on chromospheric activity. Compared with its analogues, the activity level of KIC 9641031 is remarkably lower.

  12. Some articulatory details of emotional speech

    NASA Astrophysics Data System (ADS)

    Lee, Sungbok; Yildirim, Serdar; Bulut, Murtaza; Kazemzadeh, Abe; Narayanan, Shrikanth

    2005-09-01

    Differences in speech articulation among four emotion types, neutral, anger, sadness, and happiness, are investigated by analyzing tongue tip, jaw, and lip movement data collected from one male and one female speaker of American English. The data were collected using an electromagnetic articulography (EMA) system while subjects produced simulated emotional speech. Pitch, root-mean-square (rms) energy and the first three formants were estimated for vowel segments. For both speakers, angry speech exhibited the largest rms energy and largest articulatory activity in terms of displacement range and movement speed. Happy speech is characterized by the largest pitch variability. It has higher rms energy than neutral speech but articulatory activity is rather comparable to, or less than, neutral speech. That is, happy speech is more prominent in voicing activity than in articulation. Sad speech exhibits the longest sentence duration and lower rms energy. However, its articulatory activity is no less than neutral speech. Interestingly, for the male speaker, articulation for vowels in sad speech is consistently more peripheral (i.e., more forwarded displacements) when compared to other emotions. However, this does not hold for the female subject. These and other results will be discussed in detail with associated acoustics and perceived emotional qualities. [Work supported by NIH.]

  13. Detailed Chemical Kinetic Modeling of Cyclohexane Oxidation

    SciTech Connect

    Silke, E J; Pitz, W J; Westbrook, C K; Ribaucour, M

    2006-11-10

    A detailed chemical kinetic mechanism has been developed and used to study the oxidation of cyclohexane at both low and high temperatures. Reaction rate constant rules are developed for the low temperature combustion of cyclohexane. These rules can be used in chemical kinetic mechanisms for other cycloalkanes. Since cyclohexane produces only one type of cyclohexyl radical, much of the low temperature chemistry of cyclohexane is described in terms of one potential energy diagram showing the reaction of cyclohexyl radical + O₂ through five-, six- and seven-membered ring transition states. The direct elimination of cyclohexene and HO₂ from RO₂ is included in the treatment using a modified rate constant of Cavallotti et al. Published and unpublished data from the Lille rapid compression machine, as well as jet-stirred reactor data, are used to validate the mechanism. The effect of heat loss is included in the simulations, an improvement on previous studies on cyclohexane. Calculations indicated that the production of 1,2-epoxycyclohexane observed in the experiments cannot be simulated based on the current understanding of low temperature chemistry. Possible 'alternative' H-atom isomerizations leading to different products from the parent O₂QOOH radical were included in the low temperature chemical kinetic mechanism and were found to play a significant role.

  14. Details of tetrahedral anisotropic mesh adaptation

    NASA Astrophysics Data System (ADS)

    Jensen, Kristian Ejlebjerg; Gorman, Gerard

    2016-04-01

    We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality, 2. face-to-edge swapping is unnecessary, and 3. optimising for the Vassilevski functional tends to give a slightly higher value for the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.

  15. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon, and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor has been found to have an uncertainty of ±9-14% at high temperature and ±9% near room temperature.
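
    As a small worked example of the propagation step implied above, the power factor PF = S²/ρ inherits relative uncertainty from the Seebeck coefficient (S) and resistivity (ρ) measurements. This first-order sketch assumes independent errors and is not the paper's full ZEM-3 error budget:

```python
import math

def power_factor_uncertainty(S, u_S, rho, u_rho):
    """First-order propagation of independent uncertainties into
    PF = S**2 / rho: the relative error combines twice the relative
    Seebeck error (since S is squared) with the relative resistivity
    error, in quadrature. Returns (PF, absolute uncertainty)."""
    pf = S**2 / rho
    rel = math.sqrt((2.0 * u_S / S)**2 + (u_rho / rho)**2)
    return pf, pf * rel
```

    For example, a 1% Seebeck error contributes 2% to the power factor, which is why Seebeck accuracy dominates the power factor budget.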

  16. Detailed map of a cis-regulatory input function

    NASA Astrophysics Data System (ADS)

    Setty, Y.; Mayo, A. E.; Surette, M. G.; Alon, U.

    2003-06-01

    Most genes are regulated by multiple transcription factors that bind specific sites in DNA regulatory regions. These cis-regulatory regions perform a computation: the rate of transcription is a function of the active concentrations of each of the input transcription factors. Here, we used accurate gene expression measurements from living cell cultures, bearing GFP reporters, to map in detail the input function of the classic lacZYA operon of Escherichia coli, as a function of about a hundred combinations of its two inducers, cAMP and isopropyl β-D-thiogalactoside (IPTG). We found an unexpectedly intricate function with four plateau levels and four thresholds. This result compares well with a mathematical model of the binding of the regulatory proteins cAMP receptor protein (CRP) and LacI to the lac regulatory region. The model is also used to demonstrate that with few mutations, the same region could encode much purer AND-like or even OR-like functions. This possibility means that the wild-type region is selected to perform an elaborate computation in setting the transcription rate. The present approach can be generally used to map the input functions of other genes.
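Two-input cis-regulatory functions of this kind are often approximated as products of Hill curves. A toy AND-like sketch (hypothetical parameters, not the fitted lac model from the paper) illustrates how a high output can require both inducers:

```python
def hill(x, k, n):
    """Hill activation curve: near 0 for x << k, near 1 for x >> k."""
    return x ** n / (k ** n + x ** n)

def lac_input_function(camp, iptg, k_c=1.0, k_i=1.0, n=2.0, basal=0.05):
    """Toy AND-like two-input function (illustrative parameters only):
    CRP activation rises with cAMP; LacI repression is relieved by IPTG."""
    return basal + (1.0 - basal) * hill(camp, k_c, n) * hill(iptg, k_i, n)

low, high = 0.01, 100.0
on_on  = lac_input_function(high, high)   # both inducers present: high output
off_on = lac_input_function(low, high)    # cAMP absent: near-basal output
on_off = lac_input_function(high, low)    # IPTG absent: near-basal output
```

Replacing the product by a sum (suitably normalized) gives an OR-like function, which is the kind of rewiring the paper's mutational argument refers to.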

  17. Urban scale air quality modelling using detailed traffic emissions estimates

    NASA Astrophysics Data System (ADS)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
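The "within a factor of two" statistic (FAC2) used in this evaluation is a standard air-quality acceptance metric and is straightforward to compute. A sketch with hypothetical concentration pairs, not the paper's data:

```python
import numpy as np

def fac2(model, obs):
    """Fraction of model/observation pairs agreeing within a factor of
    two, i.e. 0.5 <= model/obs <= 2.0."""
    ratio = np.asarray(model, float) / np.asarray(obs, float)
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

# Hypothetical hourly NOx concentrations in ug/m3 (not the paper's data)
obs   = [40.0, 55.0, 30.0, 80.0, 20.0]
model = [35.0, 60.0, 70.0, 75.0, 12.0]
score = fac2(model, obs)   # 4 of 5 pairs fall within a factor of two
```

Averaging to daily means smooths short-term excursions, which is why the paper's FAC2 rises from 78% for hourly values to 94% for daily averages.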

  18. Detailed Evaluation of MODIS Fire Radiative Power Measurements

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles

    2010-01-01

    Satellite remote sensing is providing us tremendous opportunities to measure the fire radiative energy (FRE) release rate or power (FRP) from open biomass burning, which affects many vegetated regions of the world on a seasonal basis. Knowledge of the biomass burning characteristics and emission source strengths of different (particulate and gaseous) smoke constituents is one of the principal ingredients upon which the assessment, modeling, and forecasting of their distribution and impacts depend. This knowledge can be gained through accurate measurement of FRP, which has been shown to have a direct relationship with the rates of biomass consumption and emissions of major smoke constituents. Over the last decade or so, FRP has been routinely measured from space by both the MODIS sensors aboard the polar orbiting Terra and Aqua satellites, and the SEVIRI sensor aboard the Meteosat Second Generation (MSG) geostationary satellite. During the last few years, FRP has been gaining recognition as an important parameter for facilitating the development of various scientific studies relating to the quantitative characterization of biomass burning and their emissions. Therefore, we are conducting a detailed analysis of the FRP products from MODIS to characterize the uncertainties associated with them, such as those due to the MODIS bow-tie effects and other factors, in order to establish their error budget for use in scientific research and applications. In this presentation, we will show preliminary results of the MODIS FRP data analysis, including comparisons with airborne measurements.

  19. Light in Tropical Forest Models: What Detail Matters?

    NASA Astrophysics Data System (ADS)

    Shenkin, A.; Bentley, L. P.; Asner, G. P.; Malhi, Y.

    2014-12-01

    Representations of light in models of tropical forests are typically unconstrained by field data and rife with assumptions, and for good reason: forest light environments are highly variable, difficult and onerous to predict, and the value of improved prediction is unclear. Still, the question remains: how detailed must our models be to be accurate enough, yet simple enough to be able to scale them from plots to landscapes? Here we use field data to constrain 1-D, 2-D, and 3-D light models and integrate them with simple forest models to predict net primary production (NPP) across an Andes-to-Amazon elevation transect in Peru. Field data consist of novel vertical light profile measurements coupled with airborne LiDAR (light detection and ranging) data from the Carnegie Airborne Observatory. Preliminary results indicate that while 1-D models may be "good-enough" and highly-scalable where forest structure is relatively homogenous, more complex models become important as forest structure becomes more heterogeneous. We discuss the implications our results hold for prediction of NPP under a changing climate, and suggest paths forward for useful proxies of light availability in forests to improve and scale up forest models.

  20. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    SciTech Connect

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and discuss
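The multi-taper estimator mentioned above trades spectral resolution for lower realization noise by averaging periodograms taken with orthogonal tapers. A minimal sketch using sine tapers (a simpler alternative to the Slepian/DPSS tapers typically used; illustrative only, not the MDI pipeline):

```python
import numpy as np

def sine_tapers(n, k):
    """First k orthonormal sine tapers (Riedel-Sidorenko family), a
    simple alternative to Slepian/DPSS windows."""
    t = np.arange(1, n + 1)
    return np.array([np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))
                     for j in range(1, k + 1)])

def multitaper_psd(x, k=7):
    """Average the periodograms of x windowed by k orthogonal tapers.
    Averaging reduces realization noise at the cost of resolution,
    which is acceptable once individual modes blend into ridges."""
    tapers = sine_tapers(len(x), k)                  # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

# Synthetic test signal: a sinusoid at 0.1 cycles/sample in white noise
rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 1.0, t.size)
psd = multitaper_psd(x)   # peak near bin 0.1 * 1024
```

More tapers means a smoother, broader spectral estimate, which is why the authors could afford a large taper count at high degree.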

  1. Accurate patient dosimetry of kilovoltage cone-beam CT in radiation therapy

    SciTech Connect

    Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.

    2008-03-15

    The increased utilization of x-ray imaging in image-guided radiotherapy has dramatically improved the radiation treatment and the lives of cancer patients. Daily imaging procedures, such as cone-beam computed tomography (CBCT), for patient setup may significantly increase the dose to the patient's normal tissues. This study investigates the dosimetry from a kilovoltage (kV) CBCT for real patient geometries. Monte Carlo simulations were used to study the kV beams from a Varian on-board imager integrated into the Trilogy accelerator. The Monte Carlo calculated results were benchmarked against measurements and good agreement was obtained. The authors developed a novel method to calibrate Monte Carlo simulated beams with measurements using an ionization chamber in which the air-kerma calibration factors are obtained from an Accredited Dosimetry Calibration Laboratory. The authors have introduced a new Monte Carlo calibration factor, f_MCcal, which is determined from the calibration procedure. The accuracy of the new method was validated by experiment. When a Monte Carlo simulated beam has been calibrated, the simulated beam can be used to accurately predict absolute dose distributions in the irradiated media. Using this method the authors calculated dose distributions to patient anatomies from a typical CBCT acquisition for different treatment sites, such as head and neck, lung, and pelvis. Their results have shown that, from a typical head and neck CBCT, doses to soft tissues, such as eye, spinal cord, and brain can be up to 8, 6, and 5 cGy, respectively. The dose to the bone, due to the photoelectric effect, can be as much as 25 cGy, about three times the dose to the soft tissue. The study provides detailed information on the additional doses to the normal tissues of a patient from a typical kV CBCT acquisition. The methodology of the Monte Carlo beam calibration developed and introduced in this study allows the user to calculate both relative and absolute

  2. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variation of experimental data sets, such as UH60-A data, DNW test data and HART II test data.

  3. Detailed transcriptome atlas of the pancreatic beta cell

    PubMed Central

    Kutlu, Burak; Burdick, David; Baxter, David; Rasschaert, Joanne; Flamez, Daisy; Eizirik, Decio L; Welsh, Nils; Goodman, Nathan; Hood, Leroy

    2009-01-01

    Background Gene expression patterns provide a detailed view of cellular functions. Comparison of profiles in disease vs normal conditions provides insights into the processes underlying disease progression. However, availability and integration of public gene expression datasets remains a major challenge. The aim of the present study was to explore the transcriptome of pancreatic islets and, based on this information, to prepare a comprehensive and open access inventory of insulin-producing beta cell gene expression, the Beta Cell Gene Atlas (BCGA). Methods We performed Massively Parallel Signature Sequencing (MPSS) analysis of human pancreatic islet samples and microarray analyses of purified rat beta cells, alpha cells and INS-1 cells, and compared the information with available array data in the literature. Results MPSS analysis detected around 7600 mRNA transcripts, of which around a third were of low abundance. We identified 2000 and 1400 transcripts that are enriched/depleted in beta cells compared to alpha cells and INS-1 cells, respectively. Microarray analysis identified around 200 transcription factors that are differentially expressed in either beta or alpha cells. We reanalyzed publicly available gene expression data and integrated these results with the new data from this study to build the BCGA. The BCGA contains basal (untreated conditions) gene expression level estimates in beta cells as well as in different cell types in human, rat and mouse pancreas. Hierarchical clustering of expression profile estimates classify cell types based on species while beta cells were clustered together. Conclusion Our gene atlas is a valuable source for detailed information on the gene expression distribution in beta cells and pancreatic islets along with insulin producing cell lines. The BCGA tool, as well as the data and code used to generate the Atlas are available at the T1Dbase website (T1DBase.org). PMID:19146692

  4. Accurate single-molecule FRET studies using multiparameter fluorescence detection.

    PubMed

    Sisamakis, Evangelos; Valeri, Alessandro; Kalinin, Stanislav; Rothwell, Paul J; Seidel, Claus A M

    2010-01-01

    In the recent decade, single-molecule (sm) spectroscopy has come of age and is providing important insight into how biological molecules function. So far our view of protein function has been formed, to a significant extent, by traditional structure determination showing many beautiful static protein structures. Recent experiments by single-molecule and other techniques have questioned the idea that proteins and other biomolecules are static structures. In particular, Förster resonance energy transfer (FRET) studies of single molecules have shown that biomolecules may adopt many conformations as they perform their function. Despite the success of sm-studies, interpretation of smFRET data is challenging, since the data can be complicated by many artifacts arising from the complex photophysical behavior, dynamics, and motion of the fluorophores, as well as from small amounts of contaminants. We demonstrate that the simultaneous acquisition of a maximum of fluorescence parameters by multiparameter fluorescence detection (MFD) allows for a robust assessment of all possible artifacts arising in smFRET and offers unsurpassed capabilities for the identification and analysis of individual species present in a population of molecules. After a short introduction, the data analysis procedure is described in detail together with some experimental considerations. The merits of MFD are highlighted further with the presentation of some applications to proteins and nucleic acids, including accurate structure determination based on FRET. A toolbox is introduced to demonstrate how complications originating from the orientation, mobility, and position of fluorophores have to be taken into account when determining FRET-related distances with high accuracy. Furthermore, the broad time resolution (picoseconds to hours) of MFD allows for kinetic studies that resolve interconversion events between various subpopulations as a biomolecule of interest explores its
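The FRET-based distance determination mentioned above rests on the Förster relation between transfer efficiency and donor-acceptor distance. A minimal sketch (the Förster radius value is illustrative; real values depend on the dye pair and its environment):

```python
def fret_efficiency(r, r0):
    """Förster relation: transfer efficiency for donor-acceptor
    distance r and Förster radius r0 (typically a few nanometers)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(e, r0):
    """Invert E = 1 / (1 + (r/r0)^6) to recover the distance."""
    return r0 * (1.0 / e - 1.0) ** (1.0 / 6.0)

# At r = r0 the efficiency is exactly one half
e_half = fret_efficiency(5.0, 5.0)
```

The sixth-power dependence is what makes smFRET so sensitive near r0, and also why artifacts affecting the apparent efficiency (photophysics, dye orientation entering through R0) translate directly into distance errors, motivating the MFD cross-checks described in the abstract.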

  5. Can College Students Accurately Assess What Affects Their Learning and Development?

    ERIC Educational Resources Information Center

    Bowman, Nicholas A.; Seifert, Tricia A.

    2011-01-01

    Informal (and sometimes formal) assessments in higher education often ask students how their skills or attitudes have changed as the result of engaging in a particular course or program; however, it is unclear to what extent these self-reports are accurate. Using a longitudinal sample of over 3,000 college students, we found that students were…

  6. Spurious Consensus and Opinion Revision: Why Might People Be More Confident in Their Less Accurate Judgments?

    ERIC Educational Resources Information Center

    Yaniv, Ilan; Choshen-Hillel, Shoham; Milyavsky, Maxim

    2009-01-01

    In the interest of improving their decision making, individuals revise their opinions on the basis of samples of opinions obtained from others. However, such a revision process may lead decision makers to experience greater confidence in their less accurate judgments. The authors theorize that people tend to underestimate the informative value of…

  7. MAGNIFICENT DETAILS IN A DUSTY SPIRAL GALAXY

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1995, the majestic spiral galaxy NGC 4414 was imaged by the Hubble Space Telescope as part of the HST Key Project on the Extragalactic Distance Scale. An international team of astronomers, led by Dr. Wendy Freedman of the Observatories of the Carnegie Institution of Washington, observed this galaxy on 13 different occasions over the course of two months. Images were obtained with Hubble's Wide Field Planetary Camera 2 (WFPC2) through three different color filters. Based on their discovery and careful brightness measurements of variable stars in NGC 4414, the Key Project astronomers were able to make an accurate determination of the distance to the galaxy. The resulting distance to NGC 4414, 19.1 megaparsecs or about 60 million light-years, along with similarly determined distances to other nearby galaxies, contributes to astronomers' overall knowledge of the rate of expansion of the universe. The Hubble constant (H0) is the ratio of how fast galaxies are moving away from us to their distance from us. This astronomical value is used to determine distances, sizes, and the intrinsic luminosities for many objects in our universe, and the age of the universe itself. Due to the large size of the galaxy compared to the WFPC2 detectors, only half of the galaxy observed was visible in the datasets collected by the Key Project astronomers in 1995. In 1999, the Hubble Heritage Team revisited NGC 4414 and completed its portrait by observing the other half with the same filters as were used in 1995. The end result is a stunning full-color look at the entire dusty spiral galaxy. The new Hubble picture shows that the central regions of this galaxy, as is typical of most spirals, contain primarily older, yellow and red stars. The outer spiral arms are considerably bluer due to ongoing formation of young, blue stars, the brightest of which can be seen individually at the high resolution provided by the Hubble camera. The arms are also very rich in clouds of interstellar dust

  8. Need for "counter-detailing" antibiotics.

    PubMed

    Hendeles, L

    1976-09-01

    Selected antibiotic advertisements in medical journals are discussed to illustrate the misleading information that is often disseminated to physicians by the pharmaceutical industry. Laboratory and clinical data are presented to question the validity of selected advertisements which (1) encourage the use of Keflex for severe respiratory infections in children, (2) recommend the use of Keflex for the treatment of bacterial bronchitis, (3) suggest that high tissue penetration is a unique property of Vibramycin, (4) present pooled susceptibility data which do not reflect microbial resistance patterns in the patient's hospital, (5) recommend twice-daily administration of Ancef for urinary tract infections but do not clearly state the potential danger of this regimen for other infections, (6) suggest that gentamicin should be given to adults in only two dosage sizes for the treatment of serious Gram-negative infections, and (7) lead the reader to assume that only women need to be treated for Trichomonas infections. It is suggested that as antibiotics are marketed, hospital therapeutics committees should evaluate their advantages and permit formulary additions for only those agents demonstrating increased efficacy, decreased toxicity or decreased cost. Pharmacists who monitor drug therapy can provide information to the physician which will increase his awareness of optimal antibiotic therapy. PMID:1086598

  9. Ancillary-service details: Dynamic scheduling

    SciTech Connect

    Hirst, E.; Kirby, B.

    1997-01-01

    Dynamic scheduling (DS) is the electronic transfer from one control area to another of the time-varying electricity consumption associated with a load or the time-varying electricity production associated with a generator. Although electric utilities have been using this technique for at least two decades, its use is growing in popularity and importance. This growth is a consequence of the major changes under way in US bulk-power markets, in particular efforts to unbundle generation from transmission and to increase competition among generation providers. DS can promote competition and increase choices. It allows consumers to purchase certain services from entities outside their physical-host area and it allows generators to sell certain services to entities other than their physical host. These services include regulation (following minute-to-minute variations in load) and operating reserves, among others. Such an increase in the number of possible suppliers and customers should encourage innovation and reduce the costs and prices of providing electricity services. The purpose of the project reported here was to collect and analyze data on utility experiences with DS. Chapter 2 provides additional details and examples of the definitions of DS. Chapter 3 explains why DS might be an attractive service that customers and generators, as well as transmission providers, might want to use. Chapter 4 presents some of the many current DS examples the authors uncovered in their interviews. Chapter 5 discusses the costs and cost-effectiveness of DS. Chapter 6 explains what they believe can and cannot be electronically moved from one control area to another, primarily in terms of the six ancillary services that FERC defined in Order 888. Chapter 7 discusses the need for additional research on DS.

  10. Rhinoscleroma: a detailed histopathological diagnostic insight

    PubMed Central

    Ahmed, Ahmed RH; El-badawy, Zeinab H; Mohamed, Ibrahim R; Abdelhameed, Waleed AM

    2015-01-01

    Rhinoscleroma (RS) is a chronic specific disease of the nose and upper respiratory passages caused by Klebsiella rhinoscleromatis bacilli. It is endemic in Egypt and in sporadic areas worldwide. Diagnosis of RS depends on identification of the pathognomonic Mikulicz cells (MCs), which are most prominent during the granulomatous phase but sparse or absent during the catarrhal or sclerotic phases of the disease. This study aimed to identify the potential diagnostic features of nasal RS when MCs are absent. Nasal biopsies from 125 patients complaining of chronic nasal symptoms were retrieved for this study, including 72 chronic nonspecific inflammatory lesions and 53 RS diagnosed by PAS and Giemsa stains. The detailed histological differences between the two groups were assessed statistically. RS was frequently a bilateral disease (P < 0.05) of young age (P < 0.001) with a female predominance (P < 0.05), and was usually associated with nasal crusting (P < 0.001). Five strong histological indicators of RS were identified by univariate binary logistic regression analyses: squamous metaplasia (OR 27.2, P < 0.0001), dominance of plasma cells (OR 12.75, P < 0.0001), Russell bodies (OR 8.83, P < 0.0001), neutrophils (OR 3.7, P < 0.001) and absence of eosinophils (OR 12.0, P < 0.0001). According to multivariate analysis, the diagnostic features of RS in the absence of MCs can be classified into major criteria, including dominance of plasma cell infiltration and absence of eosinophils, and minor criteria, including young age, female gender, bilateral nasal involvement, nasal crusting, squamous metaplasia, Russell bodies, and neutrophils. The diagnostic model using the two major criteria confirmed or excluded RS in 84.3% of the investigated cases. PMID:26339415

  11. Chord, Horizontal Tie Bar & Crossbracing Joint Details; Crossbracing Center ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Chord, Horizontal Tie Bar & Crossbracing Joint Details; Crossbracing Center Joint Detail; Chord, Panel Posts, Braces & Counterbrace Joint Detail - Brownsville Covered Bridge, Spanning East Fork Whitewater River (moved to Eagle Creek Park, Indianapolis), Brownsville, Union County, IN

  12. 5 CFR 352.305 - Eligibility for detail.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RIGHTS Detail and Transfer of Federal Employees to International Organizations § 352.305 Eligibility for detail. An employee is eligible for detail to an international organization with the rights provided...

  13. Detailed investigations on radiative opacity and emissivity of tin plasmas in the extreme-ultraviolet region.

    PubMed

    Zeng, Jiaolong; Gao, Cheng; Yuan, Jianmin

    2010-08-01

    The radiative opacity and emissivity of tin plasmas at an average ionization degree of about 10 were investigated in detail using a fully relativistic detailed level accounting approach, in which the main physical effects on the opacity were carefully taken into account. Among these effects, configuration interaction, in particular core-valence electron correlation, plays an important role in determining the accurate atomic data required for the opacity calculation. It results in a strong narrowing of lines from all transition arrays, and strong absorption is concentrated in a narrow wavelength region of 12.5-14 nm for Sn plasmas. Using a complete set of accurate atomic data, we investigated the opacity of Sn plasmas under a variety of physical conditions. Among the ions Sn6+-Sn15+, Sn10+ has the largest absorption cross section at 13.5 nm, although the physical conditions favorable for maximal absorption at 13.5 nm do not imply that Sn10+ has the largest fraction. Comparison with other theoretical results showed that a complete set of consistent, accurate atomic data, which is still largely lacking, is essential to predict accurate opacity. Our atomic model is useful and can be applied to interpret opacity experiments. Further benchmark experiments are urgently needed to clarify the physical effects on the opacity of Sn plasmas. PMID:20866928

  14. Detailed Kinetic Modeling of Gasoline Surrogate Mixtures

    SciTech Connect

    Mehl, M; Curran, H J; Pitz, W J; Westbrook, C K

    2009-03-09

    Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. It is generally agreed that their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. In this work, a recently revised version of the kinetic model by the authors is used to analyze the combustion behavior of several components relevant to gasoline surrogate formulation. Particular attention is devoted to linear and branched saturated hydrocarbons (PRF mixtures), olefins (1-hexene) and aromatics (toluene). Model predictions for pure components, binary mixtures and multi-component gasoline surrogates are compared with recent experimental information collected in rapid compression machine, shock tube and jet stirred reactors covering a wide range of conditions pertinent to internal combustion engines. Simulation results are discussed focusing attention on the mixing effects of the fuel components.

  15. 11. Exterior detail view of northeast corner, showing stucco finish ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Exterior detail view of northeast corner, showing stucco finish and woodwork details - American Railway Express Company Freight Building, 1060 Northeast Division Street, Bend, Deschutes County, OR

  16. Detailed Burnup Calculations for Testing Nuclear Data

    NASA Astrophysics Data System (ADS)

    Leszczynski, F.

    2005-05-01

    A general method (MCQ) has been developed by introducing a microscopic burnup scheme that uses the Monte Carlo calculated fluxes and microscopic reaction rates of a complex system, together with a depletion code, as a basis for solving the nuclide material balance equations for each spatial region into which the system is divided. Continuous energy-dependent cross-section libraries and the full 3D geometry of the system can be input to the calculations. The resulting predictions for the system at successive burnup time steps are thus based on a calculation route where both geometry and cross sections are accurately represented, without geometry simplifications and with continuous energy data, providing an independent approach for benchmarking other methods and nuclear data of actinides, fission products, and other burnable absorbers. The main advantage of this method over the classical deterministic methods currently used is that the MCQ system is a direct 3D method, free of the limitations and errors introduced by the homogenization of geometry and the condensation of energy in deterministic methods. The Monte Carlo and burnup codes adopted until now are the widely used MCNP and ORIGEN codes, but other codes can be used as well. Using this method requires a well-validated set of nuclear data for the isotopes involved in the burnup chains, including burnable poisons, fission products, and actinides. To fix the data to be included in this set, a study of the present status of nuclear data is performed as part of the development of the MCQ method. This study begins with a review of the available cross-section data for isotopes involved in burnup chains for power and research nuclear reactors. The main data needs for burnup calculations are neutron cross sections, decay constants, branching ratios, fission energy, and yields. The present work includes results of selected experimental benchmarks and conclusions about the sensitivity of different sets of cross
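The nuclide material balance equations at the heart of such a burnup scheme form a linear system dN/dt = A·N per spatial region, where A collects flux-weighted reaction rates and decay constants. A toy depletion step for a two-nuclide chain (hypothetical rates; production codes such as ORIGEN use specialized solvers):

```python
import numpy as np

def depletion_step(n0, a, dt, terms=40):
    """Advance nuclide densities over one burnup step by solving
    dN/dt = A N with a truncated Taylor series for expm(A*dt).
    Adequate for a small, well-scaled A*dt; illustrative only."""
    m = a * dt
    result = np.eye(len(n0))
    term = np.eye(len(n0))
    for k in range(1, terms):
        term = term @ m / k          # m^k / k!
        result = result + term
    return result @ n0

# Toy two-nuclide chain with hypothetical rates: the parent (lam1)
# feeds the daughter, which itself decays (lam2); in a real scheme
# these entries would fold in flux * cross-section reaction rates.
lam1, lam2 = 0.5, 0.1
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])
n = depletion_step(np.array([1.0, 0.0]), A, 1.0)
```

The result matches the analytic Bateman solution for this chain; in the MCQ scheme the matrix A would be rebuilt from freshly computed Monte Carlo fluxes at each time step.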

  17. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    PubMed

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

    We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions, based on an adaptive division of RNA reads into small segments (or seeds) which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA on RNA reads of 100-400 nucleotides, which excels in execution time and sensitivity relative to state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR. PMID:26451814
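The suffix-array lookup that lets an aligner place exact reads quickly can be sketched in a few lines. This is a naive construction over a toy reference (production aligners use optimized linear-time or compressed suffix-array builds), but the query logic is the same:

```python
def build_suffix_array(text):
    """Naive O(n^2 log n) construction: sort all suffix start positions
    lexicographically. Fine for a toy string, not a genome."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def locate(read, text, sa):
    """All exact occurrences of `read`: binary-search the suffix array
    for the first suffix whose prefix is >= read, then scan forward
    over the contiguous range of suffixes that start with `read`."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(read)] < read:
            lo = mid + 1
        else:
            hi = mid
    end = lo
    while end < len(sa) and text[sa[end]:sa[end] + len(read)] == read:
        end += 1
    return sorted(sa[lo:end])

genome = "ACGTACGTGACG"   # toy reference, not a real transcriptome
sa = build_suffix_array(genome)
```

Reads that fail this exact lookup (mismatches, splice junctions) are the "conflictive" cases handed to the seed-based Smith-Waterman stage described in the abstract.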

  18. HOW ACCURATE IS OUR KNOWLEDGE OF THE GALAXY BIAS?

    SciTech Connect

    More, Surhud

    2011-11-01

    Observations of the clustering of galaxies can provide useful information about the distribution of dark matter in the universe. In order to extract accurate cosmological parameters from galaxy surveys, it is important to understand how the distribution of galaxies is biased with respect to the matter distribution. The large-scale bias of galaxies can be quantified either by directly measuring the large-scale (λ ≳ 60 h⁻¹ Mpc) power spectrum of galaxies or by modeling the halo occupation distribution of galaxies using their clustering on small scales (λ ≲ 30 h⁻¹ Mpc). We compare the luminosity dependence of the galaxy bias (both the shape and the normalization) obtained by these methods and check for consistency. Our comparison reveals that the bias of galaxies obtained by the small-scale clustering measurements is systematically larger than that obtained from the large-scale power spectrum methods. We also find systematic discrepancies in the shape of the galaxy-bias-luminosity relation. We comment on the origin and possible consequences of these discrepancies, which had remained unnoticed thus far.
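On large scales, the bias the abstract refers to is commonly estimated as the square root of the ratio of the galaxy and matter power spectra; the band powers below are made up for illustration:

```python
def galaxy_bias(p_gal, p_matter):
    """Scale-dependent bias estimate b(k) = sqrt(P_gg(k) / P_mm(k))."""
    return [(pg / pm) ** 0.5 for pg, pm in zip(p_gal, p_matter)]

# Illustrative band powers in which galaxies cluster 4x more strongly
# than matter at every sampled k, i.e. a constant bias of 2.
bias = galaxy_bias([4.0e4, 1.6e4, 4.0e3], [1.0e4, 4.0e3, 1.0e3])
```

The paper's point is that this large-scale estimate and the one inferred from small-scale halo-occupation modeling need not agree, which is exactly the discrepancy being reported.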

  19. Slim hole MWD tool accurately measures downhole annular pressure

    SciTech Connect

    Burban, B.; Delahaye, T.

    1994-02-14

    Measurement-while-drilling of downhole pressure accurately determines annular pressure losses from circulation and drillstring rotation and helps monitor swab and surge pressures during tripping. In early 1993, two slim-hole wells (3.4 in. and 3 in. diameter) were drilled with continuous real-time electromagnetic wave transmission of downhole temperature and annular pressure. The data were obtained during all stages of the drilling operation and proved useful for operations personnel. The use of real-time measurements demonstrated the characteristic hydraulic effects of pressure surges induced by drillstring rotation in the small slim-hole annulus under field conditions. The interest in this information is not restricted to the slim-hole geometry. Monitoring or estimating downhole pressure is a key element for drilling operations. Except in special cases, no real-time measurements of downhole annular pressure during drilling and tripping have been used on an operational basis. The hydraulic effects are significant in conventional-geometry wells (3 1/2-in. drill pipe in a 6-in. hole). This paper describes the tool and the results from the field test.

  20. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher,2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al.,2008] and Dual Disperser (CASSI-DD) [Gehm et al.,2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
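The spatial-TV flavor of such reconstructions can be sketched in one dimension with a smoothed total-variation denoiser; the toy signal, the weights, and the plain gradient descent are stand-ins for the paper's Split Bregman solver and its joint spatial-spectral model:

```python
def total_variation(x):
    """Sum of absolute first differences of a 1-D signal."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_denoise_1d(y, lam=0.2, step=0.05, iters=500, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]        # data-fidelity gradient
        for i in range(n - 1):                     # smoothed-TV gradient
            d = x[i + 1] - x[i]
            w = lam * d / (d * d + eps) ** 0.5
            g[i] -= w
            g[i + 1] += w
        x = [x[i] - step * g[i] for i in range(n)]
    return x

noisy_step = [0.0, 0.3, -0.2, 0.1, 1.0, 0.8, 1.2, 0.9]   # toy noisy edge
smoothed = tv_denoise_1d(noisy_step)
```

TV regularization flattens the noise within each half of the signal while largely preserving the jump between them, which is why it suits piecewise-smooth imagery such as HSI spatial slices.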

  1. Evolving generalized Voronoi diagrams for accurate cellular image segmentation.

    PubMed

    Yu, Weimiao; Lee, Hwee Kuan; Hariharan, Srivats; Bu, Wenyu; Ahmed, Sohail

    2010-04-01

    Analyzing cellular morphologies on a cell-by-cell basis is vital for drug discovery, cell biology, and many other biological studies. Interactions between cells in their culture environments cause cells to touch each other in acquired microscopy images. Because of this phenomenon, cell segmentation is a challenging task, especially when the cells are of similar brightness and of highly variable shapes. The concept of topological dependence and the maximum common boundary (MCB) algorithm are presented in our previous work (Yu et al., Cytometry Part A 2009;75A:289-297). However, the MCB algorithm suffers a few shortcomings, such as low computational efficiency and difficulties in generalizing to higher dimensions. To overcome these limitations, we present the evolving generalized Voronoi diagram (EGVD) algorithm. Utilizing image intensity and geometric information, EGVD preserves topological dependence easily in both 2D and 3D images, such that touching cells can be segmented satisfactorily. A systematic comparison with other methods demonstrates that EGVD is accurate and much more efficient. PMID:20169588
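A plain generalized-Voronoi partition, the geometric starting point that EGVD evolves further using image intensity, can be sketched as a nearest-seed labeling; the grid size and seed coordinates are arbitrary:

```python
def voronoi_labels(shape, seeds):
    """Assign each pixel the index of its nearest seed (squared Euclidean
    distance), producing a discrete Voronoi partition of the image grid."""
    h, w = shape
    labels = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            labels[r][c] = min(range(len(seeds)),
                               key=lambda k: (seeds[k][0] - r) ** 2 +
                                             (seeds[k][1] - c) ** 2)
    return labels

# Two hypothetical cell markers (e.g. detected nuclei) on a 6x6 grid.
labels = voronoi_labels((6, 6), [(1, 1), (4, 5)])
```

EGVD replaces this purely geometric distance with a metric that also incorporates image intensity, so touching cells of similar brightness are still split along meaningful boundaries.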

  2. Novel Cortical Thickness Pattern for Accurate Detection of Alzheimer's Disease.

    PubMed

    Zheng, Weihao; Yao, Zhijun; Hu, Bin; Gao, Xiang; Cai, Hanshu; Moore, Philip

    2015-01-01

    The brain network occupies an important position in representing abnormalities in Alzheimer's disease (AD) and mild cognitive impairment (MCI). To date, most studies have focused only on morphological features of regions of interest, without exploring the interregional alterations. In order to investigate the potential discriminative power of a morphological network in AD diagnosis, and to provide supportive evidence on the feasibility of an individual structural network study, we propose a novel approach for extracting correlative features from magnetic resonance imaging, which consists of a two-step procedure for constructing an individual thickness network with low computational complexity. First, a multi-distance combination is utilized for accurate evaluation of between-region dissimilarity; the dissimilarity is then transformed to connectivity via a correlation function. An evaluation of the proposed approach has been conducted with 189 normal controls, 198 MCI subjects, and 163 AD patients using machine learning techniques. Results show that the correlative feature yields a significant improvement in classification performance compared with cortical thickness, with an accuracy of 89.88% and an area under the receiver operating characteristic curve of 0.9588. We further improved the performance by integrating both thickness and apolipoprotein E ɛ4 allele information with the correlative features. The accuracies achieved are 92.11% and 79.37% in separating AD from normal controls and AD converters from non-converters, respectively. Differences between using diverse distance measurements and various correlation transformation functions are also discussed, to explore an optimal way of establishing the network. PMID:26444768
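The dissimilarity-to-connectivity step can be illustrated with a Gaussian transform; the kernel choice and the toy matrix are assumptions standing in for the paper's correlation function and its multi-distance dissimilarities:

```python
import math

def connectivity(dissimilarity, sigma=1.0):
    """Map a symmetric between-region dissimilarity matrix to edge weights
    in [0, 1] via exp(-d^2 / (2 sigma^2)): identical regions get weight 1,
    increasingly dissimilar regions get weights that decay toward 0."""
    n = len(dissimilarity)
    return [[math.exp(-dissimilarity[i][j] ** 2 / (2.0 * sigma ** 2))
             for j in range(n)] for i in range(n)]

# Toy 3-region cortical-thickness dissimilarities (arbitrary values).
D = [[0.0, 0.5, 2.0],
     [0.5, 0.0, 1.0],
     [2.0, 1.0, 0.0]]
W = connectivity(D)
```

The resulting weight matrix W is the individual "thickness network" whose entries would then be fed to a classifier as correlative features.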

  3. Extremely Accurate On-Orbit Position Accuracy using TDRSS

    NASA Technical Reports Server (NTRS)

    Stocklin, Frank; Toral, Marco; Bar-Sever, Yoaz; Rush, John

    2006-01-01

    NASA is planning to launch a new service for Earth satellites providing them with precise GPS differential corrections and other ancillary information, enabling decimeter-level orbit determination accuracy and nanosecond time-transfer accuracy, onboard, in real time. The TDRSS Augmentation Service for Satellites (TASS) will broadcast its message on the S-band multiple-access forward channel of NASA's Tracking and Data Relay Satellite System (TDRSS). The satellite's phased-array antenna has been configured to provide a wide beam, extending coverage up to 1000 km altitude over the poles. Global coverage will be ensured with broadcasts from three or more TDRSS satellites. The GPS differential corrections are provided by the NASA Global Differential GPS (GDGPS) System, developed and operated by JPL. The GDGPS System employs a global ground network of more than 70 GPS receivers to monitor the GPS constellation in real time. The system provides real-time estimates of the GPS satellite states, as well as many other real-time products such as differential corrections, global ionospheric maps, and integrity monitoring. The unique multiply redundant architecture of the GDGPS System ensures very high reliability, with 99.999% availability demonstrated since the inception of the system in early 2000. The real-time GPS orbit and clock states provided by the GDGPS System are accurate to better than 20 cm 3D RMS and have been demonstrated to support sub-decimeter real-time positioning and orbit determination for a variety of terrestrial, airborne, and spaceborne applications. In addition to the GPS differential corrections, TASS will provide real-time Earth orientation and solar flux information that enables precise onboard knowledge of the Earth-fixed position of the spacecraft, as well as precise orbit prediction and planning capabilities. TASS will also provide 5-second alarms for GPS integrity failures, based on the unique GPS integrity monitoring service of the GDGPS System.

  4. [Approach to academic detailing as a hospital pharmacist].

    PubMed

    Nishikori, Atsumi

    2014-01-01

    In 2012, a new medical fee system was introduced for the clinical activities of hospital pharmacists responsible for in-patient pharmacotherapy monitoring in medical institutions in Japan. The new medical system demands greater efforts to provide the most suitable and safest medicine for each patient. By applying the concept of academic detailing to clinical pharmacists' roles in hospitals, I present drug use evaluation in three disease states (peptic ulcer, insomnia, and osteoporosis). To analyze these from multiple aspects, we not only need knowledge of drug monographs (clinical and adverse drug effects), but also the ability to evaluate a patient's adherence and cost-effectiveness. If we combine the idea of academic detailing with a clinical pharmacist's role, it is necessary to strengthen drug information skills, such as guideline or literature search skills and journal evaluation. Simultaneously, it is important to introduce new pharmaceutical education curricula covering evidence-based medicine (EBM), pharmacoeconomics, and professional communication in order to explore pharmacists' roles in the future. PMID:24584015

  5. Detailed HI kinematics of Tully-Fisher calibrator galaxies

    NASA Astrophysics Data System (ADS)

    Ponomareva, Anastasia A.; Verheijen, Marc A. W.; Bosma, Albert

    2016-09-01

    We present spatially resolved HI kinematics of 32 spiral galaxies which have Cepheid and/or Tip of the Red Giant Branch distances, and which define a calibrator sample for the Tully-Fisher relation. The interferometric HI data for this sample were collected from available archives and supplemented with new GMRT observations. This paper describes a uniform analysis of the HI kinematics of this inhomogeneous data set. Our main result is an atlas for our calibrator sample that presents global HI profiles, integrated HI column-density maps, HI surface density profiles and, most importantly, detailed kinematic information in the form of high-quality rotation curves derived from highly resolved, two-dimensional velocity fields and position-velocity diagrams.

  6. A detailed spectroscopic study of an Italian fresco

    SciTech Connect

    Barilaro, Donatella; Crupi, Vincenza; Majolino, Domenico; Barone, Germana; Ponterio, Rosina

    2005-02-15

    In the present work we characterized samples of plasters and pictorial layers taken from a fresco in the Acireale Cathedral. The fresco represents the Coronation of Saint Venera, patron saint of this Ionian town. By performing a detailed spectroscopic analysis of the plaster preparation layer by Fourier-transform infrared (FTIR) spectroscopy and x-ray diffraction (XRD), and of the painting layer by FTIR, confocal Raman microspectroscopy, scanning electron microscopy with energy-dispersive x-ray spectroscopy, and XRD, we were able to identify the pigments and the binders present. In particular, the Raman investigation was crucial to the characterization of the pigments, thanks to the high resolution of the confocal apparatus used. It is worth stressing that the simultaneous use of complementary techniques was able to provide more complete information for the conservation of the artifact we studied.

  7. Evaluation of Sensitivity and Robustness of Geothermal Resource Parameters Using Detailed and Approximate Stratigraphy

    NASA Astrophysics Data System (ADS)

    Whealton, C.; Jordan, T. E.; Frone, Z. S.; Smith, J. D.; Horowitz, F. G.; Stedinger, J. R.

    2015-12-01

    Accurate assessment of the spatial variation of geothermal heat is key to distinguishing among locations for geothermal project development. Resource assessment over large areas can be accelerated by using existing subsurface data collected for other purposes, such as petroleum industry bottom-hole temperature (BHT) datasets. BHT data are notoriously noisy, but in many sedimentary basins their abundance offsets the potential low quality of an individual BHT measurement. Analysis requires a description of the conductivity stratigraphy, which is daunting for the thousands of wells with BHT values. For regional assessment, a streamlined method is to approximate the thickness and conductivity of each formation using a set of standard columns rescaled to the sediment thickness at a location. Surface heat flow and related geothermal resource metrics are estimated from these and additional parameters. This study uses Monte Carlo techniques to compare the accuracy and precision of thermal predictions at single locations by the streamlined approach against well-specific conductivity stratigraphy. For 77 wells distributed across the Appalachian Basin of NY, PA, and WV, local geological experts made available detailed information on unit thicknesses. For the streamlined method we used the Correlation of Stratigraphic Units of North America (COSUNA) columns. For both data sets, we described the thermal conductivity of the strata using generic values or values from the geologically similar Anadarko Basin. The well-specific surface heat flow and temperature-at-depth were evaluated using a one-dimensional conductive heat flow model. This research addresses the sensitivity of the estimated geothermal output to the model inputs (BHT, thermal conductivity) and the robustness of the approximate stratigraphic column assumptions when estimating the geothermal output. This research was conducted as part of the Dept. of Energy Geothermal Play Fairway Analysis program.
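The one-dimensional conductive model such a study evaluates amounts to accumulating q·Δz/k down the conductivity stratigraphy; the surface heat flow, layer thicknesses, and conductivities below are placeholders, and radiogenic heat production is neglected:

```python
def temperature_at_depth(t_surface, q_surface, layers):
    """Steady-state 1-D conduction: across each layer of thickness dz (m) and
    conductivity k (W/m/K), temperature rises by q*dz/k for heat flow q (W/m^2).
    Heat production inside the layers is neglected, so q is depth-constant."""
    t = t_surface
    for dz, k in layers:
        t += q_surface * dz / k
    return t

# Placeholder two-layer column: 1 km of shale-like rock over 1 km of
# carbonate-like rock, with 60 mW/m^2 surface heat flow and 10 C at surface.
t_bottom = temperature_at_depth(t_surface=10.0, q_surface=0.060,
                                layers=[(1000.0, 2.0), (1000.0, 3.0)])
```

In a Monte Carlo sensitivity analysis, the thicknesses and conductivities would be drawn from distributions and this calculation repeated, so the spread of t_bottom quantifies how BHT and conductivity uncertainty propagate into the resource estimate.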

  8. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-03-14

    The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.

  9. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.

  10. Information services and information processing

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Attempts made to design and extend space system capabilities are reported. Special attention was given to establishing user needs for information or services which might be provided by space systems. The data given do not attempt to detail the scientific, technical, or economic bases for the needs expressed by the users.

  11. Detail-Preserving and Content-Aware Variational Multi-View Stereo Reconstruction.

    PubMed

    Li, Zhaoxin; Wang, Kuanquan; Zuo, Wangmeng; Meng, Deyu; Zhang, Lei

    2016-02-01

    Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with little texture. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective in preserving fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware ℓp-minimization algorithm that adaptively estimates the p value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more effective in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy. PMID:26672037

  12. Detail-Preserving and Content-Aware Variational Multi-View Stereo Reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Zhaoxin; Wang, Kuanquan; Zuo, Wangmeng; Meng, Deyu; Zhang, Lei

    2016-02-01

    Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo reconstruction, most existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with few textures. To address these limitations, this paper presents a Detail-preserving and Content-aware Variational (DCV) multi-view stereo method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware ℓp-minimization algorithm by adaptively estimating the p value and regularization parameters based on the current input. It is much more promising in suppressing noise while preserving sharp features than conventional isotropic mesh smoothing. Experimental results on benchmark datasets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse ring datasets in terms of both completeness and accuracy.

  13. Panel Post & Diagonal Brace Joint Detail; Crossbracing Center Joint ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Panel Post & Diagonal Brace Joint Detail; Crossbracing Center Joint Detail; Chord, Panel Post, Tie Bar, & Diagonal Brace Joint Detail; Chord, Tie Bar, & Crossbracing Joint Detail - Medora Bridge, Spanning East Fork of White River at State Route 235, Medora, Jackson County, IN

  14. 5 CFR 2635.104 - Applicability to employees on detail.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Applicability to employees on detail... Applicability to employees on detail. (a) Details to other agencies. Except as provided in paragraph (d) of this section, an employee on detail, including a uniformed officer on assignment, from his employing agency...

  15. 5 CFR 930.106 - Details in the competitive service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Details in the competitive service. 930... Operators § 930.106 Details in the competitive service. An agency may detail an employee to an operator... details exceeding 30 days, the employee must meet all the requirements of § 930.105 and any applicable...

  16. Infrared image detail enhancement approach based on improved joint bilateral filter

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Chen, Xiaohong

    2016-07-01

    In this paper, we propose a new infrared image detail enhancement approach. This approach not only enhances the digital detail, but also brings the processed image much closer to the real scene. Inspired by the joint bilateral filter, two adjacent images are utilized to calculate the kernel functions in order to distinguish the detail information from the raw image. We also design a new kernel function to modify the joint bilateral filter and to eliminate the gradient-reversal artifacts caused by the non-linear filtering. The new kernel is based on an adaptive emerge coefficient used to determine the detail layer. The detail information is modified by the adaptive emerge coefficient, along with two key parameters, to realize the detail enhancement. Finally, we combine the processed detail layer with the base layer and remap the high-dynamic-range image into a monitor-suited low dynamic range to achieve a better visual effect. Numerical results show that this technique compares favorably with previous research in detail enhancement. Figures and data flowcharts are presented in the paper.
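The base/detail decomposition behind such enhancement can be sketched in one dimension; a plain box filter stands in for the improved joint bilateral filter, and the fixed gain stands in for the paper's adaptive coefficient:

```python
def enhance_detail(signal, radius=2, gain=2.0):
    """Split a 1-D signal into a base layer (local box average) and a detail
    layer (residual), then recombine with the detail amplified by `gain`."""
    n = len(signal)
    base = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        base.append(sum(signal[lo:hi]) / (hi - lo))
    return [b + gain * (s - b) for s, b in zip(signal, base)]

# A single hot pixel on a flat background gets its detail amplified.
enhanced = enhance_detail([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], radius=1)
```

An edge-preserving filter such as the joint bilateral filter is preferred over this box filter precisely because smoothing across strong edges is what produces the gradient-reversal artifacts the paper sets out to eliminate.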

  17. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  18. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  19. Detailed model of the thorax as a volume conductor based on the visible human man data.

    PubMed

    Kauppinen, P; Hyttinen, J; Heinonen, T; Malmivuo, J

    1998-01-01

    A large number of computerized conductivity models of the human thorax have been created to study bioelectric phenomena in human beings. The models devised have varied greatly in the level of anatomical detail incorporated, restricting the accuracy and validity of the simulations conducted. This paper introduces a highly detailed, anatomically accurate three-dimensional computer model of the conductive anatomy of the human thorax for calculating electric fields generated by equivalent bioelectric sources and different externally applied sources. The anatomy of the model is based on high-resolution colour cryosection images of the US National Library of Medicine's Visible Human Man data set, and the model incorporates more anatomical detail than prior computer models. The model is based on the finite difference method and is readily applicable to the analysis of a wide range of biomedical field problems, such as electrocardiography, impedance cardiography, tissue stimulation, and especially the development of measurement systems. PMID:9667039
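The finite-difference formulation such a volume-conductor model rests on can be sketched with a Jacobi solver for the Laplace potential on a tiny uniform 2-D grid; the real model adds heterogeneous tissue conductivities and a third dimension:

```python
def solve_laplace(grid, fixed, iters=2000):
    """Jacobi iteration for the 2-D discrete Laplace equation: every free node
    relaxes to the average of its four neighbors; `fixed` marks Dirichlet
    nodes (e.g. electrode or boundary potentials) held at their values."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                if not fixed[r][c]:
                    new[r][c] = 0.25 * (grid[r - 1][c] + grid[r + 1][c] +
                                        grid[r][c - 1] + grid[r][c + 1])
        grid = new
    return grid

# All boundary nodes held at 1 V; the interior must relax toward 1 V too.
n = 5
boundary = lambda r, c: r in (0, n - 1) or c in (0, n - 1)
potential = solve_laplace(
    [[1.0 if boundary(r, c) else 0.0 for c in range(n)] for r in range(n)],
    [[boundary(r, c) for c in range(n)] for r in range(n)])
```

With heterogeneous conductivities the update becomes a conductivity-weighted average of the neighbors rather than a plain mean, which is how tissue boundaries enter the model.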

  20. Development of Detailed Kinetic Models for Fischer-Tropsch Fuels

    SciTech Connect

    Westbrook, C K; Pitz, W J; Carstensen, H; Dean, A M

    2008-10-28

    Fischer-Tropsch (FT) fuels can be synthesized from a syngas stream generated by the gasification of biomass. As such they have the potential to be a renewable hydrocarbon fuel with many desirable properties. However, both the chemical and physical properties are somewhat different from the petroleum-based hydrocarbons that they might replace, and it is important to account for such differences when considering using them as replacements for conventional fuels in devices such as diesel engines and gas turbines. FT fuels generally contain iso-alkanes with one or two substituted methyl groups to meet the pour-point specifications. Although models have been developed for smaller branched alkanes such as isooctane, additional efforts are required to properly capture the kinetics of the larger branched alkanes. Recently, Westbrook et al. developed a chemical kinetic model that can be used to represent the entire series of n-alkanes from C₁ to C₁₆ (Figure 1). In the current work, the model is extended to treat 2,2,4,4,6,8,8-heptamethylnonane (HMN), a large iso-alkane. The same reaction rate rules used in the iso-octane mechanism were incorporated in the HMN mechanism. Both high- and low-temperature chemistry was included so that the chemical kinetic model would be applicable to advanced internal combustion engines using low-temperature combustion strategies. The chemical kinetic model consists of 1114 species and 4468 reactions. Concurrently with this effort, work is underway to improve the details of specific reaction classes in the mechanism, guided by high-level electronic structure calculations. Attention is focused on the development of accurate rate rules for abstraction of the tertiary hydrogens present in branched alkanes and on properly accounting for the pressure dependence of the β-scission, isomerization, and R + O₂ reactions.

  1. The impact of model detail on power grid resilience measures

    NASA Astrophysics Data System (ADS)

    Auer, S.; Kleis, K.; Schultz, P.; Kurths, J.; Hellmann, F.

    2016-05-01

    Extreme events are a challenge to natural as well as man-made systems. For critical infrastructure like power grids, we need to understand their resilience against large disturbances. Recently, new measures of the resilience of dynamical systems have been developed in the complex systems literature. Basin stability and survivability respectively assess the asymptotic and transient behavior of a system when subjected to arbitrary, localized but large perturbations in frequency and phase. To employ these methods to assess power grid resilience, we need to choose a certain level of model detail for the power grid. For the grid topology we considered the Scandinavian grid and an ensemble of power grids generated with a random growth model. So far the most popular model that has been studied is the classical swing equation model for the frequency response of generators and motors. In this paper we study a more sophisticated model of synchronous machines that also takes voltage dynamics into account, and compare it to the previously studied model. This model has been found to give an accurate picture of the long-term evolution of synchronous machines in the engineering literature for post-fault studies. We find evidence that some stable fixed points of the swing equation become unstable when we add voltage dynamics. If this occurs, the asymptotic behavior of the system can be dramatically altered, and basin stability estimates obtained with the swing equation can be dramatically wrong. We also find that the survivability does not change significantly when taking the voltage dynamics into account. Further, the limit-cycle-type asymptotic behaviour is strongly correlated with transient voltages that violate typical operational voltage bounds. Thus, transient voltage bounds are dominated by transient frequency bounds and play no large role for realistic parameters.
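The classical swing-equation model compared in the paper can be sketched for a single machine; the parameter values and the explicit-Euler integration are illustrative, and the voltage dynamics the authors add are deliberately omitted here:

```python
import math

def swing_trajectory(p_mech, damping=0.5, coupling=8.0, state=(0.0, 0.0),
                     dt=0.01, steps=5000):
    """Integrate the classical swing equation for one synchronous machine,
        phi'   = omega
        omega' = p_mech - damping * omega - coupling * sin(phi),
    by explicit Euler, returning the final (phase, frequency) state."""
    phi, omega = state
    for _ in range(steps):
        phi, omega = (phi + dt * omega,
                      omega + dt * (p_mech - damping * omega
                                    - coupling * math.sin(phi)))
    return phi, omega

phi_end, omega_end = swing_trajectory(p_mech=1.0)
```

Basin stability would be estimated by repeating this integration from many random initial (phi, omega) perturbations and counting the fraction of runs that return to the synchronous fixed point, where sin(phi) = p_mech / coupling and omega = 0.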

  2. 43 CFR 2.31 - What must a submitter include in a detailed Exemption 4 objection statement?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... include a specific and detailed discussion of why the information is a trade secret or, if the information is not a trade secret, the following three categories must be addressed (unless the bureau informs... the Government required the information to be submitted, and if so, how substantial competitive...

  3. 43 CFR 2.31 - What must a submitter include in a detailed Exemption 4 objection statement?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... include a specific and detailed discussion of why the information is a trade secret or, if the information is not a trade secret, the following three categories must be addressed (unless the bureau informs... the Government required the information to be submitted, and if so, how substantial competitive...

  4. Gigantic Cosmic Corkscrew Reveals New Details About Mysterious Microquasar

    NASA Astrophysics Data System (ADS)

    2004-10-01

    [Figure: SS 433, with a red-and-blue line showing the path of constant-speed jets; note the poor match of the path to the image. Credit: Blundell & Bowler, NRAO/AUI/NSF. Second panel: the same image, with colored beads representing particle ejections at different speeds; the particle path now matches. Credit: Blundell & Bowler, NRAO/AUI/NSF.] The new VLA image shows two full turns of the jets' corkscrew on both sides of the core. Analyzing the image showed that if material came from the core at a constant speed, the jet paths would not accurately match the details of the image. "By simulating ejections at varying speeds, we were able to produce an exact match to the observed structure," Blundell explained. The scientists first did their match to one of the jets. "We then were stunned to see that the varying speeds that matched the structure of one jet also exactly reproduced the other jet's path," Blundell said. Matching the speeds in the two jets reproduced the observed structure even allowing for the fact that, because one jet is moving more nearly away from us than the other, it takes light longer to reach us from it, she added. The astrophysicists speculate that the changes in ejection speed may be caused by changes in the rate at which material is transferred from the companion star onto the accretion disk. The detailed new VLA image also allowed the astrophysicists to determine that SS 433 is nearly 18,000 light-years distant from Earth. Earlier estimates had placed the object, in the constellation Aquila, as near as 10,000 light-years. An accurate distance, the scientists said, now allows them to better determine the age of the shell of debris blown out by the supernova explosion that created the dense, compact object in the microquasar. Knowing the distance accurately also allows them to measure the actual brightness of the microquasar's components, and this, they said, improves their understanding of the physical processes at work in the system.
The breakthrough image

  5. Analysis of information systems for hydropower operations: Executive summary

    NASA Technical Reports Server (NTRS)

    Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W.

    1976-01-01

    An analysis was performed of the operations of hydropower systems, with emphasis on water resource management, to determine how aerospace derived information system technologies can effectively increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined in detail to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results were used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept was outlined.

  6. What Data to Use for Forest Conservation Planning? A Comparison of Coarse Open and Detailed Proprietary Forest Inventory Data in Finland

    PubMed Central

    Lehtomäki, Joona; Tuominen, Sakari; Toivonen, Tuuli; Leinonen, Antti

    2015-01-01

    The boreal region is facing intensifying resource extraction pressure, but the lack of comprehensive biodiversity data makes operative forest conservation planning difficult. Many countries have implemented forest inventory schemes and are making extensive and up-to-date forest databases increasingly available. Some of the more detailed inventory databases, however, remain proprietary and unavailable for conservation planning. Here, we investigate how well different open and proprietary forest inventory data sets suit the purpose of conservation prioritization in Finland. We also explore how much priorities are affected by using the less accurate but open data. First, we construct a set of indices for forest conservation value based on quantitative information commonly found in forest inventories. These include the maturity of the trees, tree species composition, and site fertility. Secondly, using these data and accounting for connectivity between forest types, we investigate the patterns in conservation priority. For prioritization, we use Zonation, a method and software for spatial conservation prioritization. We then validate the prioritizations by comparing them to known areas of high conservation value. We show that the overall priority patterns are relatively consistent across different data sources and analysis options. However, the coarse data cannot be used to accurately identify the high-priority areas as it misses much of the fine-scale variation in forest structures. We conclude that, while inventory data collected for forestry purposes may be useful for forest conservation purposes, it needs to be detailed enough to be able to account for more fine-scaled features of high conservation value. These results underline the importance of making detailed inventory data publicly available. Finally, we discuss how the prioritization methodology we used could be integrated into operative forest management, especially in countries in the boreal zone. PMID
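The index-construction step described in the abstract can be illustrated as a weighted combination of min-max-normalized inventory attributes. This is a toy sketch only: the attribute names, weights, and normalization scheme are assumptions for illustration, not the indices actually used in the study.

```python
import numpy as np

def conservation_index(maturity, richness, fertility, weights=(0.5, 0.3, 0.2)):
    """Toy stand-level conservation-value index.

    Each inventory attribute (e.g. stand age as a maturity proxy, tree
    species richness, site fertility class) is min-max normalized across
    stands and combined as a weighted sum in [0, 1]. Attribute choice and
    weights are illustrative assumptions.
    """
    def norm(a):
        a = np.asarray(a, dtype=float)
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

    w1, w2, w3 = weights
    return w1 * norm(maturity) + w2 * norm(richness) + w3 * norm(fertility)
```

Per-stand scores of this kind could then be fed, together with connectivity terms, into a prioritization tool such as Zonation.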

  7. What Data to Use for Forest Conservation Planning? A Comparison of Coarse Open and Detailed Proprietary Forest Inventory Data in Finland.

    PubMed

    Lehtomäki, Joona; Tuominen, Sakari; Toivonen, Tuuli; Leinonen, Antti

    2015-01-01

    The boreal region is facing intensifying resource extraction pressure, but the lack of comprehensive biodiversity data makes operative forest conservation planning difficult. Many countries have implemented forest inventory schemes and are making extensive and up-to-date forest databases increasingly available. Some of the more detailed inventory databases, however, remain proprietary and unavailable for conservation planning. Here, we investigate how well different open and proprietary forest inventory data sets suit the purpose of conservation prioritization in Finland. We also explore how much priorities are affected by using the less accurate but open data. First, we construct a set of indices for forest conservation value based on quantitative information commonly found in forest inventories. These include the maturity of the trees, tree species composition, and site fertility. Secondly, using these data and accounting for connectivity between forest types, we investigate the patterns in conservation priority. For prioritization, we use Zonation, a method and software for spatial conservation prioritization. We then validate the prioritizations by comparing them to known areas of high conservation value. We show that the overall priority patterns are relatively consistent across different data sources and analysis options. However, the coarse data cannot be used to accurately identify the high-priority areas as it misses much of the fine-scale variation in forest structures. We conclude that, while inventory data collected for forestry purposes may be useful for forest conservation purposes, it needs to be detailed enough to be able to account for more fine-scaled features of high conservation value. These results underline the importance of making detailed inventory data publicly available. Finally, we discuss how the prioritization methodology we used could be integrated into operative forest management, especially in countries in the boreal zone. PMID

  8. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    Computer generated hologram (CGH) should be obtained with high accuracy and high speed in 3D holographic display, and most research focuses on the high speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look-up table method and the compressed look-up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of information are required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future. PMID:26831987
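The look-up-table idea behind such methods can be sketched as follows: under the paraxial Fresnel approximation, the phase pattern of a point source depends only on its depth and on the lateral offset from the point, so one zone plate per depth layer can be precomputed and re-used, shifted, for every object point at that depth. This is a generic LUT sketch under simplified assumptions (paraxial phase, square pixel grid, no compression), not the AC-LUT algorithm itself; all names and parameters are illustrative.

```python
import numpy as np

def fresnel_zone_plate(n, pitch, z, wavelength):
    # Look-up table entry: Fresnel phase pattern of a point source at
    # depth z, sampled on an n x n grid of the given pixel pitch.
    c = np.arange(n) - n // 2
    X, Y = np.meshgrid(c * pitch, c * pitch)
    return np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))

def hologram_from_points(points, n, pitch, wavelength):
    # points: iterable of (ix, iy, z, amplitude) object points with
    # integer pixel positions 0 <= ix, iy < n. One oversized zone plate
    # is computed per depth layer, then shifted and accumulated per point.
    table = {}
    H = np.zeros((n, n), dtype=complex)
    for ix, iy, z, a in points:
        if z not in table:
            table[z] = fresnel_zone_plate(2 * n, pitch, z, wavelength)
        zp = table[z]
        # extract the n x n window centred on the point's position
        H += a * zp[n - iy : 2 * n - iy, n - ix : 2 * n - ix]
    return H
```

The memory saving of table-based methods comes from this reuse: each depth layer stores a single pattern instead of one per object point.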

  9. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that aperture reduction differs from place to place owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes in hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
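The spectral-analysis step described above can be sketched with a one-sided FFT of a height profile sampled along a scan line (at a uniform spacing such as the 2.5 μm mentioned); dominance of low-frequency components then shows up as the largest amplitudes at the low end of the spectrum. A minimal sketch, not the authors' code:

```python
import numpy as np

def roughness_spectrum(profile, spacing):
    # One-sided amplitude spectrum of a surface-height profile sampled
    # at a uniform spacing (e.g. 2.5e-6 m along a CLSM scan line).
    profile = np.asarray(profile, dtype=float)
    profile = profile - profile.mean()          # remove mean height (DC)
    amp = np.abs(np.fft.rfft(profile))          # one-sided amplitudes
    freq = np.fft.rfftfreq(len(profile), d=spacing)  # cycles per metre
    return freq, amp
```

For a synthetic profile whose large-scale waviness dwarfs a small high-frequency ripple, the spectral peak falls at the low frequency, mirroring the FFT result reported in the abstract.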

  10. Features versus Context: An approach for precise and detailed detection and delineation of faces and facial features

    PubMed Central

    Ding, Liya; Martinez, Aleix M.

    2013-01-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive
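The feature-versus-context idea above can be sketched as a detection score that rewards similarity to the feature template while penalizing similarity to the context template, so the detector gravitates to locations that look like the feature but not like its surroundings. A toy sketch using normalized cross-correlation; the templates and the simple subtraction rule are simplifications of the paper's statistical models, not its actual classifiers.

```python
import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation between a patch and a template,
    # both given as 2-D arrays of the same shape.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def feature_vs_context_score(patch, feature_tpl, context_tpl):
    # High when the patch resembles the feature AND differs from its
    # context; a context-like patch scores low or negative.
    return ncc(patch, feature_tpl) - ncc(patch, context_tpl)
```

Scanning this score over an image and taking its maximum approximates the "resemble the feature, differ from the context" search the abstract describes.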

  11. Detailed glycan structural characterization by electronic excitation dissociation.

    PubMed

    Yu, Xiang; Jiang, Yan; Chen, Yajie; Huang, Yiqun; Costello, Catherine E; Lin, Cheng

    2013-11-01

    The structural complexity and diversity of glycans parallel their multilateral functions in living systems. To better understand the vital roles glycans play in biological processes, it is imperative to develop analytical tools that can provide detailed glycan structural information. This was conventionally achieved by multistage tandem mass spectrometry (MS(n)) analysis using collision-induced dissociation (CID) as the fragmentation method. However, the MS(n) approach lacks the sensitivity and throughput needed to analyze complex glycan mixtures from biological sources, often available in limited quantities. We define herein the critical parameters for a recently developed fragmentation technique, electronic excitation dissociation (EED), which can yield rich structurally informative fragment ions during liquid chromatographic (LC)-MS/MS analysis of glycans. We further demonstrate that permethylation, reducing end labeling and judicious selection of the metal charge carrier, can greatly facilitate spectral interpretation. With its high sensitivity, throughput, and compatibility with online chromatographic separation techniques, EED appears to hold great promise for large-scale glycomics studies. PMID:24080071

  12. History and progress on accurate measurements of the Planck constant

    NASA Astrophysics Data System (ADS)

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 108 from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved

  13. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10(-34) J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N(A). As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10(8) from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  14. Shocking Detail of Superstar's Activity Revealed

    NASA Astrophysics Data System (ADS)

    1999-10-01

    polarization. Frequency is on 3880.0 megahertz, with audio on 6.8 megahertz. High resolution digital versions of the X-ray image (JPG, 300 dpi TIFF) and other information associated with this release are available on the Internet at: http://chandra.harvard.edu/photo/0099/index.html or via links in: http://chandra.harvard.edu

  15. Evaluation of freely available ancillary data used for detailed soil mapping in Brazil

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Anjos, Lúcia; Vasques, Gustavo; Heuvelink, Gerard

    2014-05-01

    Brazil is one of the world's largest food producers, and is home to both the largest rainforest and the largest supply of renewable fresh water on Earth. However, it lacks detailed soil information for extensive areas of the country. The best soil map covering the entire country was published at a scale of 1:5,000,000. Termination of governmental support for systematic soil mapping in the 1980s made detailed soil mapping of the whole country a very difficult task to accomplish. Nowadays, due to new user-driven demands (e.g. precision agriculture), most detailed soil maps are produced for small areas. Many of them rely on freely available ancillary data used as is, although its accuracy is usually unreported or unknown. Results from a validation exercise that we performed using ground control points from a small hilly catchment (20 km²) in Southern Brazil (-53.7995ºE, -29.6355ºN) indicate that most freely available ancillary data need some type of correction before use. Georeferenced and orthorectified RapidEye imagery (recently acquired by the Brazilian government) has a horizontal accuracy (root-mean-square error, RMSE) of 37 m, which is worse than the value published in the metadata (32 m). Like any remote sensing imagery, RapidEye imagery needs to be correctly registered before its use for soil mapping. Topographic maps produced by the Brazilian Army and derived geological maps (scale of 1:25,000) have a horizontal accuracy of 65 m, which is more than four times the maximum value allowed by Brazilian legislation (15 m). Worse results were found for geological maps derived from 1:50,000 topographic maps (RMSE = 147 m), for which the maximum allowed value is 30 m. In most cases positional errors are of systematic origin and can be easily corrected (e.g., by an affine transformation). ASTER GDEM has many holes and is very noisy, making it of little use in the studied area. TOPODATA, which is SRTM kriged from originally 3 to 1 arc-second by the Brazilian National
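The two corrections discussed above, measuring horizontal error at ground control points as an RMSE and removing systematic offsets with an affine transformation, reduce to short computations. A sketch under stated assumptions (the function names and the plain least-squares formulation are illustrative, not the authors' workflow):

```python
import numpy as np

def horizontal_rmse(mapped, reference):
    # Root-mean-square of the 2-D positional error at ground control
    # points; both arguments are (k, 2) arrays of map coordinates.
    d = np.asarray(mapped) - np.asarray(reference)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

def fit_affine(src, dst):
    # Least-squares 2-D affine transform (6 parameters) taking src -> dst,
    # the kind of correction suggested for systematic positional errors.
    src = np.asarray(src, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(dst, dtype=float), rcond=None)
    return coef  # 3x2 matrix; apply with np.hstack([pts, ones]) @ coef
```

After fitting, the residual RMSE at the control points indicates how much of the original error was systematic (and thus removable) rather than random.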

  16. Accurate determination of relative metatarsal protrusion with a small intermetatarsal angle: a novel simplified method.

    PubMed

    Osher, Lawrence; Blazer, Marie Mantini; Buck, Stacie; Biernacki, Tomasz

    2014-01-01

    Several published studies have explained in detail how to measure relative metatarsal protrusion on the plain film anteroposterior pedal radiograph. These studies have demonstrated the utility of relative metatarsal protrusion measurement in that it correlates with distal forefoot deformity or pathologic features. The method currently preferred by practitioners in podiatric medicine and surgery often presents one with the daunting challenge of obtaining an accurate measurement when the intermetatarsal 1-2 angle is small. The present study illustrates a novel mathematical solution to this problem that is simple to master, relatively quick to perform, and yields accurate results. Our method was tested and proven by 4 trained observers with varying degrees of clinical skill who independently measured the same 10 radiographs. PMID:24933656

  17. The Devil is in the Details: Using X-Ray Computed Tomography to Develop Accurate 3D Grain Characteristics and Bed Structure Metrics for Gravel Bed Rivers

    NASA Astrophysics Data System (ADS)

    Voepel, H.; Hodge, R. A.; Leyland, J.; Sear, D. A.; Ahmed, S. I.

    2014-12-01

    Uncertainty for bedload estimates in gravel bed rivers is largely driven by our inability to characterize the arrangement and orientation of the sediment grains within the bed. The characteristics of the surface structure are produced by the water working of grains, which leads to structural differences in bedforms through differential patterns of grain sorting, packing, imbrication, mortaring and degree of bed armoring. Until recently the technical and logistical difficulties of characterizing the arrangement of sediment in 3D have prohibited a full understanding of how grains interact with stream flow and the feedback mechanisms that exist. Micro-focus X-ray CT has been used for non-destructive 3D imaging of grains within a series of intact sections of river bed taken from key morphological units (see Figure 1). Volume, center of mass, points of contact, protrusion and spatial orientation of individual surface grains are derived from these 3D images, which in turn, facilitates estimates of 3D static force properties at the grain-scale such as pivoting angles, buoyancy and gravity forces, and grain exposure. By aggregating representative samples of grain-scale properties of localized interacting sediment into overall metrics, we can compare and contrast bed stability at a macro-scale with respect to stream bed morphology. Understanding differences in bed stability through representative metrics derived at the grain-scale will ultimately lead to improved bedload estimates with reduced uncertainty and increased understanding of interactions between grain-scale properties on channel morphology. Figure 1. CT-Scans of a water worked gravel-filled pot. a. 3D rendered scan showing the outer mesh, and b. the same pot with the mesh removed. c. vertical change in porosity of the gravels sampled in 5mm volumes. Values are typical of those measured in the field and lab. d. 2-D slices through the gravels at 20% depth from surface (porosity = 0.35), and e. 
75% depth from surface (porosity = 0.24), showing the presence of fine sediments 'mortaring' the larger gravels. f. shows a longitudinal slice from which pivot angle measurements can be determined for contact points between particles. g. Example of two-particle extraction from the CT scan, showing how particle contact areas can be measured (dark area).
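One grain-scale metric named above, the pivoting angle, can be sketched from two quantities the CT reconstructions provide: a grain's centre of mass and its downstream contact point. A common simplification treats the pivot angle as the angle between the vertical through the contact point and the line from that contact to the centre of mass; this is an illustrative geometric definition, not necessarily the authors' exact computation.

```python
import math

def pivot_angle_deg(com, contact):
    """Angle (degrees) between the vertical axis and the line from the
    downstream contact point to the grain's centre of mass, both given
    as (horizontal, vertical) coordinates in the CT frame."""
    dx = com[0] - contact[0]
    dz = com[1] - contact[1]
    return math.degrees(math.atan2(abs(dx), dz))
```

A grain resting directly above its contact pivots at 0 degrees; the further its centre of mass sits downstream of the contact, the larger the angle and the easier the grain is to entrain.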

  18. What Price Information.

    ERIC Educational Resources Information Center

    Hunter, Janne A.

    1984-01-01

    This essay considers problems with perceptions of the value of academic and public library information and thus with its marketing and pricing. Public perceptions of information, awareness of information services, value and cost of information, pricing details, and cooperation between libraries and providers of services are discussed. Seven…

  19. Characteristics of physicians targeted by the pharmaceutical industry to participate in e-detailing.

    PubMed

    Alkhateeb, Fadi M; Khanfar, Nile M; Doucette, William R; Loudon, David

    2009-01-01

    Electronic detailing (e-detailing) has been introduced in the last few years by the pharmaceutical industry as a new communication channel through which to promote pharmaceutical products to physicians. E-detailing involves the use of digital technology, such as the Internet, video conferencing, and interactive voice response, by which drug companies target their marketing efforts toward specific physicians with pinpoint accuracy. A mail survey of 671 Iowa physicians was used to gather information about the physician characteristics and practice-setting characteristics of those who are usually targeted by pharmaceutical companies to participate in e-detailing. A model is developed and tested to explain firms' strategies for targeting physicians for e-detailing. PMID:19408179

  20. Leg mass characteristics of accurate and inaccurate kickers--an Australian football perspective.

    PubMed

    Hart, Nicolas H; Nimphius, Sophia; Cochrane, Jodie L; Newton, Robert U

    2013-01-01

    Athletic profiling provides valuable information to sport scientists, assisting in the optimal design of strength and conditioning programmes. Understanding the influence these physical characteristics may have on the generation of kicking accuracy is advantageous. The aim of this study was to profile and compare the lower limb mass characteristics of accurate and inaccurate Australian footballers. Thirty-one players were recruited from the Western Australian Football League to perform ten drop punt kicks over 20 metres to a player target. Players were separated into accurate (n = 15) and inaccurate (n = 16) groups, with leg mass characteristics assessed using whole body dual energy x-ray absorptiometry (DXA) scans. Accurate kickers demonstrated significantly greater relative lean mass (P ≤ 0.004) and significantly lower relative fat mass (P ≤ 0.024) across all segments of the kicking and support limbs, while also exhibiting significantly higher intra-limb lean-to-fat mass ratios for all segments across both limbs (P ≤ 0.009). Inaccurate kickers also produced significantly larger asymmetries between limbs than accurate kickers (P ≤ 0.028), showing considerably lower lean mass in their support leg. These results illustrate a difference in leg mass characteristics between accurate and inaccurate kickers, highlighting the potential influence these may have on technical proficiency of the drop punt. PMID:23687978
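The two mass metrics compared in the abstract, the intra-limb lean-to-fat ratio and the between-limb asymmetry, reduce to simple arithmetic on DXA segment masses. A sketch using a common percent-asymmetry index (an assumption for illustration; the study's exact formula may differ):

```python
def lean_to_fat_ratio(lean_kg, fat_kg):
    # Intra-limb (or intra-segment) lean-to-fat mass ratio from DXA output.
    return lean_kg / fat_kg

def percent_asymmetry(kicking_lean_kg, support_lean_kg):
    # Common asymmetry index: the between-limb difference expressed as a
    # percentage of the larger limb's mass.
    larger = max(kicking_lean_kg, support_lean_kg)
    return 100.0 * abs(kicking_lean_kg - support_lean_kg) / larger
```

Under the abstract's findings, accurate kickers would show higher ratios and smaller asymmetry values than inaccurate kickers.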