Sample records for accurately characterizing OpenMP

  1. Characterizing Task-Based OpenMP Programs

    PubMed Central

    Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats

    2015-01-01

    Programmers struggle to understand performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than with existing tools by using our method to characterize task-based performance. PMID:25860023
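
    A minimal sketch of the task-based style these benchmarks expose (a BOTS-like recursive kernel in C; the kernel and numbers are illustrative, not the paper's instrumentation method):

      #include <stdio.h>

      /* Recursive Fibonacci: each call spawns child tasks, exposing the kind
       * of task parallelism the paper characterizes. */
      static long fib(int n) {
          if (n < 2) return n;
          long x, y;
          #pragma omp task shared(x)
          x = fib(n - 1);
          #pragma omp task shared(y)
          y = fib(n - 2);
          #pragma omp taskwait   /* join child tasks before combining results */
          return x + y;
      }

      int main(void) {
          long r;
          #pragma omp parallel
          #pragma omp single     /* one thread seeds the task tree; the team runs it */
          r = fib(30);
          printf("fib(30) = %ld\n", r);
          return 0;
      }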

  2. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of CAPO, an OpenMP parallelization support tool built on CAPtools (the Computer Aided Parallelization Toolkit), to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  3. OpenMP 4.5 Validation and Verification Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pophale, Swaroop S; Bernholdt, David E; Hernandez, Oscar R

    2017-12-15

    OpenMP, a directive-based programming API, introduces directives for accelerator devices that programmers are starting to use more frequently in production codes. To make sure OpenMP directives work correctly across architectures, it is critical to have a mechanism that tests an implementation's conformance to the OpenMP standard. This testing process can uncover ambiguities in the OpenMP specification, which helps compiler developers and users make better use of the standard. We fill this gap with our validation and verification test suite, which focuses on the offload directives available in OpenMP 4.5.
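
    A hedged sketch of the kind of conformance check such a suite performs on the OpenMP 4.5 offload directives (an illustrative test, not one taken from the suite): offload a loop to the device, then verify the mapped-back result on the host.

      #include <stdio.h>

      int main(void) {
          enum { N = 1024 };
          int a[N], errors = 0;
          for (int i = 0; i < N; ++i) a[i] = i;

          /* Offload to the default device; map the array to and from it. */
          #pragma omp target map(tofrom: a[0:N])
          #pragma omp teams distribute parallel for
          for (int i = 0; i < N; ++i)
              a[i] += 1;

          /* A conforming implementation must return a[i] == i + 1. */
          for (int i = 0; i < N; ++i)
              if (a[i] != i + 1) ++errors;
          printf("%s\n", errors ? "FAIL" : "PASS");
          return errors;
      }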

  4. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.

  5. Toward Enhancing OpenMP's Work-Sharing Directives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, B M; Huang, L; Jin, H

    2006-05-17

    OpenMP provides a portable programming interface for shared memory parallel computers (SMPs). Although this interface has proven successful for small SMPs, it requires greater flexibility in light of the steadily growing size of individual SMPs and the recent advent of multithreaded chips. In this paper, we describe two application development experiences that exposed these expressivity problems in the current OpenMP specification. We then propose mechanisms to overcome these limitations, including thread subteams and thread topologies. Thus, we identify language features that improve OpenMP application performance on emerging and large-scale platforms while preserving ease of programming.
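
    For reference, a minimal example of the baseline work-sharing model whose flexibility the paper seeks to extend: a parallel for always distributes iterations across the whole thread team, with no standard way (at the time) to restrict the work to a subteam.

      /* Illustrative kernel: all threads in the team share the iteration space. */
      void scale(double *x, int n, double alpha) {
          #pragma omp parallel for schedule(static)
          for (int i = 0; i < n; ++i)
              x[i] *= alpha;
      }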

  6. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries, and it simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.

  7. Support of Multidimensional Parallelism in the OpenMP Programming Model

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele

    2003-01-01

    OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test with benchmark codes and a cloud modeling code.
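
    The paper proposes new constructs; for comparison, the collapse clause standardized later in OpenMP 3.0 covers one of the use cases described, distributing work in multiple dimensions (the kernel below is illustrative):

      /* collapse(2) merges the i and j loops into one iteration space, so
       * threads are distributed over both dimensions at once. */
      void smooth(int nx, int ny, double a[nx][ny], double b[nx][ny]) {
          #pragma omp parallel for collapse(2)
          for (int i = 1; i < nx - 1; ++i)
              for (int j = 1; j < ny - 1; ++j)
                  b[i][j] = 0.25 * (a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1]);
      }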

  8. Early Experiences Writing Performance Portable OpenMP 4 Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Hernandez, Oscar R

    In this paper, we evaluate the recently available directives in OpenMP 4 to parallelize a computational kernel using both the traditional shared memory approach and the newer accelerator targeting capabilities. In addition, we explore various transformations that attempt to increase application performance portability, and examine the expressiveness and performance implications of using these approaches. For example, we want to understand if the target map directives in OpenMP 4 improve data locality when mapped to a shared memory system, as opposed to the first touch policy approach in traditional OpenMP. To that end, we use recent Cray and Intel compilers to measure the performance variations of a simple application kernel when executed on the OLCF's Titan supercomputer with NVIDIA GPUs and the Beacon system with Intel Xeon Phi accelerators attached. To better understand these trade-offs, we compare our results from traditional OpenMP shared memory implementations to the newer accelerator programming model when it is used to target both the CPU and an attached heterogeneous device. We believe the results and lessons learned as presented in this paper will be useful to the larger user community by providing guidelines that can assist programmers in the development of performance portable code.

  9. Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland

    2003-01-01

    In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.

  10. Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.

  11. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for a unified memory space. In such systems, CPU and GPU can access each other's memory transparently; that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open-source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with large amounts of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.
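
    A hedged sketch of the programming-model consequence (the interface shown is the requires directive that OpenMP 5.0 later standardized; the paper's modified LLVM exposed comparable behavior): with unified shared memory, map clauses become unnecessary and the device dereferences host pointers directly.

      /* Request unified shared memory for the whole translation unit. */
      #pragma omp requires unified_shared_memory

      void saxpy(const float *x, float *y, float a, int n) {
          /* No map clauses: the system software migrates pages on demand,
           * including when GPU memory is oversubscribed. */
          #pragma omp target teams distribute parallel for
          for (int i = 0; i < n; ++i)
              y[i] = a * x[i] + y[i];
      }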

  12. Employing Nested OpenMP for the Parallelization of Multi-Zone Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele

    2004-01-01

    In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.

  13. The OpenMP Implementation of NAS Parallel Benchmarks and its Performance

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry

    1999-01-01

    As the new ccNUMA architecture became popular in recent years, parallel programming with compiler directives on these machines has evolved to accommodate new needs. In this study, we examine the effectiveness of OpenMP directives for parallelizing the NAS Parallel Benchmarks. Implementation details will be discussed and performance will be compared with the MPI implementation. We have demonstrated that OpenMP can achieve very good results for parallelization on a shared memory system, but effective use of memory and cache is very important.

  14. Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores

    NASA Astrophysics Data System (ADS)

    Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei

    We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.

  15. Effective Vectorization with OpenMP 4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham

    This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C/C++ and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
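
    A typical use of the OpenMP 4.5 SIMD directives the paper discusses: the programmer asserts iteration independence, declares the reduction, and promises 64-byte alignment (an assumption about the callers) that the compiler could not prove on its own.

      #include <stddef.h>

      double dot(const double *restrict a, const double *restrict b, size_t n) {
          double sum = 0.0;
          /* simd + reduction vectorizes the loop while keeping the sum correct. */
          #pragma omp simd reduction(+:sum) aligned(a, b: 64)
          for (size_t i = 0; i < n; ++i)
              sum += a[i] * b[i];
          return sum;
      }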

  16. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

    DOE PAGES

    Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...

    2016-09-21

    In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalues/eigenvectors of the graph Laplacian, and this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access patterns of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.

  17. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Hao-Qiang; an Mey, Dieter; Hatay, Ferhat F.

    2003-01-01

    Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.

  18. OpenMP performance for benchmark 2D shallow water equations using LBM

    NASA Astrophysics Data System (ADS)

    Sabri, Khairul; Rabbani, Hasbi; Gunawan, Putu Harry

    2018-03-01

    Shallow water equations, commonly referred to as Saint-Venant equations, are used to model fluid phenomena. These equations can be solved numerically using several methods, such as the Lattice Boltzmann method (LBM), SIMPLE-like methods, the finite difference method, Godunov-type methods, and the finite volume method. In this paper, the shallow water equations are approximated using LBM (known as LABSWE) and simulated in parallel using OpenMP. To evaluate the performance of the 2- and 4-thread parallel algorithms, ten different grid sizes Lx and Ly are considered. The results show that using the OpenMP platform, the computational time for solving LABSWE can be decreased. For instance, for grid size 1000 × 500, the computation times with 2 and 4 threads are observed to be 93.54 s and 333.243 s, respectively.
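
    A hedged sketch of how such per-thread-count timings can be collected (a stand-in grid sweep, not the authors' LABSWE code):

      #include <stdio.h>
      #include <omp.h>

      #define LX 1000
      #define LY 500
      static double f[LX][LY];

      int main(void) {
          int threads[] = {1, 2, 4};
          for (int t = 0; t < 3; ++t) {
              omp_set_num_threads(threads[t]);
              double t0 = omp_get_wtime();
              /* stand-in for one LBM collision-streaming sweep over the grid */
              #pragma omp parallel for collapse(2)
              for (int i = 0; i < LX; ++i)
                  for (int j = 0; j < LY; ++j)
                      f[i][j] = 0.5 * f[i][j] + 0.1;
              printf("%d threads: %.6f s\n", threads[t], omp_get_wtime() - t0);
          }
          return 0;
      }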

  19. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high-level languages and even automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data parallel model) and OpenMP (based on the shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and the pros and cons of different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, potentials of applying some of the techniques to realistic aerospace applications will be presented.

  20. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but this task can be simplified by high-level languages and even automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data parallel model) and OpenMP (based on the shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and the pros and cons of different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, potentials of applying some of the techniques to realistic aerospace applications will be presented.

  1. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.

  2. Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP

    NASA Technical Reports Server (NTRS)

    Norton, C.; Balsara, D.

    1999-01-01

    Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.

  3. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for numerical modeling of the hydrodynamic processes of tsunami waves were developed without taking advantage of modern computer facilities. There is an opportunity to accelerate the calculations considerably by using parallel algorithms. We discuss here a new approach to parallelizing tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Nowadays, multiprocessor systems are easily accessible to everyone, and the cost of using such systems is much lower than the cost of clusters. This also allows programmers to apply multithreaded algorithms on the desktop computers of researchers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing processes, which results in almost linear scalability of the program. In the new version of NAMI DANCE, OpenMP technology and a multithreaded algorithm provide an 80% gain in speed in comparison with the single-threaded version on a dual-processor unit; a 320% gain was attained on a four-core processor unit. Thus, it was possible to reduce the computation time on scientific workstations (desktops) considerably without a complete redesign of the program and user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with the increased computational speed can be used not only for research purposes but also in real-time tsunami warning systems.

  4. Issues Identified During September 2016 IBM OpenMP 4.5 Hackathon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, David F.

    In September 2016, IBM hosted an OpenMP 4.5 hackathon at the TJ Watson Research Center. Teams from LLNL, ORNL, SNL, LANL, and LBNL attended the event. As with the 2015 hackathon, IBM produced an extremely useful and successful event with unmatched support from the compiler team, applications staff, and facilities. Approximately 24 IBM staff supported the 4-day hackathon and spent significant time in the 4-6 weeks beforehand preparing the environment and becoming familiar with the apps. This hackathon was also the first event to feature the LLVM and XL C/C++ and Fortran compilers. This report records many of the issues encountered by the LLNL teams during the hackathon.

  5. Parallel processing implementation for the coupled transport of photons and electrons using OpenMP

    NASA Astrophysics Data System (ADS)

    Doerner, Edgardo

    2016-05-01

    In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
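
    Illustrative of the pattern involved (not EGSnrc code): parallel Monte Carlo transport in OpenMP typically gives each thread an independently seeded random number generator and combines scores with a reduction.

      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      int main(void) {
          const long histories = 10000000L;
          double tally = 0.0;

          #pragma omp parallel reduction(+:tally)
          {
              /* Per-thread seed keeps histories statistically independent;
               * rand_r (POSIX) stands in for a proper parallel RNG. */
              unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();
              #pragma omp for
              for (long h = 0; h < histories; ++h)
                  tally += (double)rand_r(&seed) / RAND_MAX;  /* fake "history" */
          }
          printf("mean score = %f\n", tally / histories);
          return 0;
      }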

  6. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Quantitative Phase Microscopy for Accurate Characterization of Microlens Arrays

    NASA Astrophysics Data System (ADS)

    Grilli, Simonetta; Miccio, Lisa; Merola, Francesco; Finizio, Andrea; Paturzo, Melania; Coppola, Sara; Vespini, Veronica; Ferraro, Pietro

    Microlens arrays are of fundamental importance in a wide variety of applications in optics and photonics. This chapter deals with an accurate digital holography-based characterization of both liquid and polymeric microlenses fabricated by an innovative pyro-electrowetting process. The actuation of liquid and polymeric films is obtained through the use of pyroelectric charges generated into polar dielectric lithium niobate crystals.

  8. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. This time-consuming problem limits applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km² watershed with 1 CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  9. GPU acceleration of a petascale application for turbulent mixing at high Schmidt number using OpenMP 4.5

    NASA Astrophysics Data System (ADS)

    Clay, M. P.; Buaria, D.; Yeung, P. K.; Gotoh, T.

    2018-07-01

    This paper reports on the successful implementation of a massively parallel GPU-accelerated algorithm for the direct numerical simulation of turbulent mixing at high Schmidt number. The work stems from a recent development (Comput. Phys. Commun., vol. 219, 2017, 313-328), in which a low-communication algorithm was shown to attain high degrees of scalability on the Cray XE6 architecture when overlapping communication and computation via dedicated communication threads. An even higher level of performance has now been achieved using OpenMP 4.5 on the Cray XK7 architecture, where on each node the 16 integer cores of an AMD Interlagos processor share a single Nvidia K20X GPU accelerator. In the new algorithm, data movements are minimized by performing virtually all of the intensive scalar field computations in the form of combined compact finite difference (CCD) operations on the GPUs. A memory layout in departure from usual practices is found to provide much better performance for a specific kernel required to apply the CCD scheme. Asynchronous execution enabled by adding the OpenMP 4.5 NOWAIT clause to TARGET constructs improves scalability when used to overlap computation on the GPUs with computation and communication on the CPUs. On the 27-petaflops supercomputer Titan at Oak Ridge National Laboratory, USA, a GPU-to-CPU speedup factor of approximately 5 is consistently observed at the largest problem size of 8192³ grid points for the scalar field computed with 8192 XK7 nodes.
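
    The overlap technique named in the abstract, in sketch form (names and sizes are illustrative): a TARGET construct with the NOWAIT clause becomes a deferred task, so host computation can proceed while the GPU works.

      void step(float *gpu_buf, float *cpu_buf, int n, int m) {
          #pragma omp parallel
          #pragma omp single
          {
              /* GPU part runs asynchronously as a deferred task ... */
              #pragma omp target teams distribute parallel for \
                      map(tofrom: gpu_buf[0:n]) nowait
              for (int i = 0; i < n; ++i)
                  gpu_buf[i] *= 2.0f;

              /* ... while the host computes (or communicates) concurrently. */
              for (int j = 0; j < m; ++j)
                  cpu_buf[j] += 1.0f;

              #pragma omp taskwait   /* join the deferred target task */
          }
      }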

  10. What Multilevel Parallel Programs do when you are not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application in order to employ a new programming paradigm is usually a time consuming and error prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse grained parallelization and OpenMP [9] for fine grained loop level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse grain process level parallelization and loop level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
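
    A minimal sketch of the hybrid MPI/OpenMP paradigm as described (illustrative, not from the study's codes): MPI supplies the coarse-grained process-level parallelism, OpenMP the fine-grained loop level.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv) {
          int provided, rank;
          /* FUNNELED: only the master thread makes MPI calls. */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          double local = 0.0, global = 0.0;
          #pragma omp parallel for reduction(+:local)   /* fine-grained level */
          for (int i = 0; i < 1000000; ++i)
              local += 1.0;

          /* Coarse-grained level: combine partial sums across processes. */
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0) printf("total = %.0f\n", global);
          MPI_Finalize();
          return 0;
      }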

  11. Analysis OpenMP performance of AMD and Intel architecture for breaking waves simulation using MPS

    NASA Astrophysics Data System (ADS)

    Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.

    2018-03-01

    A simulation of breaking waves using the Navier-Stokes equations via the moving particle semi-implicit method (MPS) over a closed domain is given. The results show that parallel computing on a multicore architecture using the OpenMP platform can reduce the computational time to almost half of the serial time. Here, a comparison of two computer architectures (AMD and Intel) is performed. The results show that the Intel architecture performs better than AMD in CPU time. However, in efficiency, the computer with the AMD architecture is slightly higher than the Intel one. For a simulation with 1512 particles, the CPU times using Intel and AMD are 12662.47 and 28282.30, respectively. Moreover, for the efficiency with a similar number of particles, AMD obtains 50.09% and Intel up to 49.42%.

  12. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    We present Open Multi-Processing (OpenMP) version of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and optimized for compiling with both commercially-licensed Intel Fortran and popular free open-source GNU Fortran compiler. The programs are easy to use and are elaborated with helpful comments for the users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as energy, chemical potential, root-mean-square sizes, densities, etc. We also present speedup test results for new versions of the programs. Program files doi:http://dx.doi.org/10.17632/y8zk3jgn84.2 Licensing provisions: Apache License 2.0 Programming language: OpenMP GNU and Intel Fortran 90. Computer: Any multi-core personal computer or workstation with the appropriate OpenMP-capable Fortran compiler installed. Number of processors used: All available CPU cores on the executing computer. Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1888; ibid. 204 (2016) 209. Does the new version supersede the previous version?: Not completely. It does supersede previous Fortran programs from both references above, but not OpenMP C programs from Comput. Phys. Commun. 204 (2016) 209. Nature of problem: The present Open Multi-Processing (OpenMP) Fortran programs, optimized for use with commercially-licensed Intel Fortran and free open-source GNU Fortran compilers, solve the time-dependent nonlinear partial differential (GP) equation for a trapped Bose-Einstein condensate in one (1d), two (2d), and three (3d) spatial dimensions for

  13. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and to achieve good performance that exceeds that of some commercial tools.

  14. Parameters analysis of a porous medium model for treatment with hyperthermia using OpenMP

    NASA Astrophysics Data System (ADS)

    Freitas Reis, Ruy; dos Santos Loureiro, Felipe; Lobosco, Marcelo

    2015-09-01

    Cancer is the second leading cause of death in the world, so treatments have been developed to address this global health problem. Hyperthermia is not a new technique, but its use in cancer treatment is still at an early stage of development. This treatment is based on overheating the target area to a threshold temperature that causes cancerous cell necrosis and apoptosis. To simulate this phenomenon using magnetic nanoparticles in an under-skin cancer treatment, a three-dimensional porous medium model was adopted. This study presents a sensitivity analysis of the model parameters, such as the porosity and blood velocity. To ensure a second-order solution approach, a 7-point centered finite difference method was used for space discretization while a predictor-corrector method was used for time evolution. Due to the massive computations required to find the solution of a three-dimensional model, this paper also presents a first attempt to improve performance using OpenMP, a parallel programming API.
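
    A hedged sketch of the OpenMP attempt mentioned (coefficients are illustrative, not the paper's porous-medium model): within one explicit time step every grid-point update is independent, so the 7-point-stencil sweep parallelizes as a collapsed loop nest.

      void stencil_step(int n, double u_new[n][n][n], const double u[n][n][n],
                        double dt, double alpha) {
          #pragma omp parallel for collapse(2)
          for (int i = 1; i < n - 1; ++i)
              for (int j = 1; j < n - 1; ++j)
                  for (int k = 1; k < n - 1; ++k)
                      u_new[i][j][k] = u[i][j][k] + dt * alpha *
                          (u[i+1][j][k] + u[i-1][j][k] + u[i][j+1][k] +
                           u[i][j-1][k] + u[i][j][k+1] + u[i][j][k-1] -
                           6.0 * u[i][j][k]);
      }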

  15. Develop Accurate Methods for Characterizing and Quantifying Cohesive Sediment Erosion Under Combined Current-Wave Conditions

    DTIC Science & Technology

    2017-09-01

    Report ERDC/CHL TR-17-15 (Strategic Environmental Research and Development Program). The proposed research goal is to develop laboratory methods for characterizing and quantifying cohesive sediment erosion in combined current-wave environments; this research will provide more accurate methods for assessing contaminated sediment stability for many DoD and environmental applications.

  16. SPEX: a highly accurate spectropolarimeter for atmospheric aerosol characterization

    NASA Astrophysics Data System (ADS)

    Rietjens, J. H. H.; Smit, J. M.; di Noia, A.; Hasekamp, O. P.; van Harten, G.; Snik, F.; Keller, C. U.

    2017-11-01

    Global characterization of atmospheric aerosol in terms of the microphysical properties of the particles is essential for understanding the role of aerosols in Earth's climate [1]. For more accurate predictions of future climate, the uncertainties of the net radiative forcing of aerosols in the Earth's atmosphere must be reduced [2]. Essential parameters that are needed as input in climate models are not only the aerosol optical thickness (AOT), but also particle-specific properties such as the aerosol mean size, the single scattering albedo (SSA) and the complex refractive index. The latter can be used to discriminate between absorbing and non-absorbing aerosol types, and between natural and anthropogenic aerosol. Classification of aerosol types is also very important for air-quality and health-related issues [3]. Remote sensing from an orbiting satellite platform is the only way to globally characterize atmospheric aerosol at a relevant timescale of 1 day [4]. One of the few methods that can be employed for measuring the microphysical properties of aerosols is to observe both the radiance and degree of linear polarization of sunlight scattered in the Earth's atmosphere under different viewing directions [5][6][7]. The requirement on the absolute accuracy of the degree of linear polarization PL is very stringent: the absolute error in PL must be smaller than 0.001 + 0.005·PL in order to retrieve aerosol parameters with sufficient accuracy to advance climate modelling and to enable discrimination of aerosol types based on their refractive index for air-quality studies [6][7]. In this paper we present the SPEX instrument, which is a multi-angle spectropolarimeter that can comply with the polarimetric accuracy needed for characterizing aerosols in the Earth's atmosphere. We describe the implementation of spectral polarization modulation in a prototype instrument of SPEX and show results of ground-based measurements from which aerosol microphysical properties are retrieved.

  17. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    DOE PAGES

    Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...

    2013-01-01

    Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.

  18. Develop Accurate Methods for Characterizing And Quantifying Cohesive Sediment Erosion Under Combined Current Wave Conditions: Project ER 1497

    DTIC Science & Technology

    2017-09-01

    Report ERDC/CHL TR-17-15 (Strategic Environmental Research and Development Program). The proposed research goal is to develop laboratory methods for characterizing and quantifying cohesive sediment erosion in combined current-wave environments; this research will provide more accurate methods for assessing contaminated sediment stability for many DoD and environmental applications.

  19. Data preprocessing for determining outer/inner parallelization in the nested loop problem using OpenMP

    NASA Astrophysics Data System (ADS)

    Handhika, T.; Bustamam, A.; Ernastuti; Kerami, D.

    2017-07-01

    Multi-threaded programming using OpenMP on a shared-memory architecture with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously, with each processor executing more than one thread over a given period of time. However, the speedup depends on the ability of the processor to execute a limited number of threads, which is a problem for sequential algorithms containing a nested loop in which the number of outer-loop iterations is greater than the maximum number of threads a processor can execute. The thread distribution technique found previously can only be applied by high-level programmers. This paper develops a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads that can be executed by a processor is smaller than the number of outer-loop iterations. Data preprocessing related to the numbers of outer-loop and inner-loop iterations, the computational time required to execute each iteration, and the maximum number of threads that can be executed by a processor is used as a strategy to determine which parallel region will produce the optimal speedup.
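
    The decision the preprocessing informs, sketched with an illustrative kernel: parallelizing the outer loop idles threads whenever its iteration count falls below the thread count, while parallelizing the inner loop keeps all threads busy at the cost of repeated region startup.

      void run_outer(int n_outer, int n_inner, double **a) {
          #pragma omp parallel for          /* good when n_outer >> num_threads */
          for (int i = 0; i < n_outer; ++i)
              for (int j = 0; j < n_inner; ++j)
                  a[i][j] += 1.0;
      }

      void run_inner(int n_outer, int n_inner, double **a) {
          for (int i = 0; i < n_outer; ++i)
              #pragma omp parallel for      /* better when n_outer < num_threads */
              for (int j = 0; j < n_inner; ++j)
                  a[i][j] += 1.0;
      }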

  20. Accurate mode characterization of two-mode optical fibers by in-fiber acousto-optics.

    PubMed

    Alcusa-Sáez, E; Díez, A; Andrés, M V

    2016-03-07

    Acousto-optic interaction in optical fibers is exploited for the accurate and broadband characterization of two-mode optical fibers. Coupling between the LP01 and LP1m modes is produced over a broadband wavelength range. Differences in effective indices, group indices, and chromatic dispersions between the guided modes are obtained from experimental measurements. Additionally, we show that the technique is suitable for investigating the fine mode structure of LP modes, and some other intriguing features related to the modes' cut-off.

  1. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.

    PubMed

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-01

    The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide a clinically feasible approach to characterizing a kV energy spectrum to be used for patient-specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Simon David

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  3. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization method developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability, and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results; however, it requires the most complex source code. The parallel SCE-UA has bright prospects for application in real-world problems.
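
    An illustrative sketch of the OpenMP variant's core idea (not the authors' code): objective-function evaluations across a population are independent and parallelize as a simple loop; the Griewank function is the benchmark named in the abstract.

      #include <math.h>

      /* Griewank benchmark: sum(x_i^2/4000) - prod(cos(x_i/sqrt(i))) + 1. */
      double griewank(const double *x, int d) {
          double s = 0.0, p = 1.0;
          for (int i = 0; i < d; ++i) {
              s += x[i] * x[i] / 4000.0;
              p *= cos(x[i] / sqrt((double)(i + 1)));
          }
          return s - p + 1.0;
      }

      /* Evaluate every candidate independently; this embarrassingly parallel
       * loop is where the multi-core speedup comes from. */
      void evaluate_population(double **pop, double *fitness, int n, int d) {
          #pragma omp parallel for
          for (int i = 0; i < n; ++i)
              fitness[i] = griewank(pop[i], d);
      }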

  4. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    NASA Astrophysics Data System (ADS)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while measuring the characteristics of DOI detectors directly is hindered by the impossibility of imposing the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of  ≈1.14 mm for the simulated PET detector and  ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.

  5. ACCURATE SPECTROSCOPIC CHARACTERIZATION OF PROTONATED OXIRANE: A POTENTIAL PREBIOTIC SPECIES IN TITAN'S ATMOSPHERE.

    PubMed

    Puzzarini, Cristina; Ali, Ashraf; Biczysko, Malgorzata; Barone, Vincenzo

    2014-09-10

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane, together with the corresponding experimental data available, were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  6. Highly accurate apparatus for electrochemical characterization of the felt electrodes used in redox flow batteries

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon

    2016-04-01

    Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.

  7. Accurate spectroscopic characterization of oxirane: A valuable route to its identification in Titan's atmosphere and the assignment of unidentified infrared bands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puzzarini, Cristina; Biczysko, Malgorzata; Bloino, Julien

    2014-04-20

    In an effort to provide an accurate spectroscopic characterization of oxirane, state-of-the-art computational methods and approaches have been employed to determine highly accurate fundamental vibrational frequencies and rotational parameters. Available experimental data were used to assess the reliability of our computations, and an accuracy on average of 10 cm⁻¹ for fundamental transitions as well as overtones and combination bands has been pointed out. Moving to rotational spectroscopy, relative discrepancies of 0.1%, 2%-3%, and 3%-4% were observed for rotational, quartic, and sextic centrifugal-distortion constants, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for identification of oxirane in Titan's atmosphere and the assignment of unidentified infrared bands. Since oxirane was already observed in the interstellar medium and some astronomical objects are characterized by very high D/H ratios, we also considered the accurate determination of the spectroscopic parameters for the mono-deuterated species, oxirane-d1. For the latter, an empirical scaling procedure allowed us to improve our computed data and to provide predictions for rotational transitions with a relative accuracy of about 0.02% (i.e., an uncertainty of about 40 MHz for a transition lying at 200 GHz).

  8. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    PubMed

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as ⁸²Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. ⁸²Rb or ¹³N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  9. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide.

    PubMed

    Ross, Charles W; Simonsick, William J; Bogusky, Michael J; Celikay, Recep W; Guare, James P; Newton, Randall C

    2016-06-28

    Ceramides are a central unit of all sphingolipids, which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated, displaying immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative ion sheath flow ESI provided data on starting materials and products all in one acquisition, as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules complementing the results obtained from MS analyses. The NMR data were able to differentiate straight-chain versus branched-chain alkyl groups, a distinction not easily obtained from mass spectrometry.

  10. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.

  11. Accurate structural and spectroscopic characterization of prebiotic molecules: The neutral and cationic acetyl cyanide and their related species.

    PubMed

    Bellili, A; Linguerri, R; Hochlaf, M; Puzzarini, C

    2015-11-14

    In an effort to provide an accurate structural and spectroscopic characterization of acetyl cyanide, its two enolic isomers and the corresponding cationic species, state-of-the-art computational methods and approaches have been employed. The coupled-cluster theory including single and double excitations together with a perturbative treatment of triples has been used as the starting point in composite schemes accounting for extrapolation to the complete basis-set limit as well as core-valence correlation effects to determine highly accurate molecular structures, fundamental vibrational frequencies, and rotational parameters. The available experimental data for acetyl cyanide allowed us to assess the reliability of our computations: structural, energetic, and spectroscopic properties have been obtained with an overall accuracy of about, or better than, 0.001 Å, 2 kcal/mol, 1-10 MHz, and 11 cm⁻¹ for bond distances, adiabatic ionization potentials, rotational constants, and fundamental vibrational frequencies, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for guiding future experimental investigations and/or astronomical observations.

  12. An X-band waveguide measurement technique for the accurate characterization of materials with low dielectric loss permittivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Kenneth W., E-mail: kenneth.allen@gtri.gatech.edu; Scott, Mark M.; Reid, David R.

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S₂₁) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S₂₁ measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10⁻³ for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. The method was validated by excellent agreement with a free-space focused-beam system measurement of a polypropylene sheet. This technique provides the material measurement community with the ability to accurately extract material properties of low-loss material specimens over the entire X-band range. This technique could easily be extended to other frequency bands.

  13. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.

  14. Phase rainbow refractometry for accurate droplet variation characterization.

    PubMed

    Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard

    2016-10-15

    We developed a one-dimensional phase rainbow refractometer for the accurate trans-dimensional measurements of droplet size on the micrometer scale as well as the tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on the evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure the tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as variations.

  15. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences noted. We acknowledge the support of NIH Grant 2R01DC005642-10A1.

  16. Combining angular differential imaging and accurate polarimetry with SPHERE/IRDIS to characterize young giant exoplanets

    NASA Astrophysics Data System (ADS)

    van Holstein, Rob G.; Snik, Frans; Girard, Julien H.; de Boer, Jozua; Ginski, C.; Keller, Christoph U.; Stam, Daphne M.; Beuzit, Jean-Luc; Mouillet, David; Kasper, Markus; Langlois, Maud; Zurlo, Alice; de Kok, Remco J.; Vigan, Arthur

    2017-09-01

    Young giant exoplanets emit infrared radiation that can be linearly polarized up to several percent. This linear polarization can trace: 1) the presence of atmospheric cloud and haze layers, 2) spatial structure, e.g. cloud bands and rotational flattening, 3) the spin axis orientation and 4) particle sizes and cloud top pressure. We introduce a novel high-contrast imaging scheme that combines angular differential imaging (ADI) and accurate near-infrared polarimetry to characterize self-luminous giant exoplanets. We implemented this technique at VLT/SPHERE-IRDIS and developed the corresponding observing strategies, the polarization calibration and the data-reduction approaches. The combination of ADI and polarimetry is challenging, because the field rotation required for ADI negatively affects the polarimetric performance. By combining ADI and polarimetry we can characterize planets that can be directly imaged with a very high signal-to-noise ratio. We use the IRDIS pupil-tracking mode and combine ADI and principal component analysis to reduce speckle noise. We take advantage of IRDIS' dual-beam polarimetric mode to eliminate differential effects that severely limit the polarimetric sensitivity (flat-fielding errors, differential aberrations and seeing), and thus further suppress speckle noise. To correct for instrumental polarization effects, we apply a detailed Mueller matrix model that describes the telescope and instrument and that has an absolute polarimetric accuracy <= 0.1%. Using this technique we have observed the planets of HR 8799 and the (sub-stellar) companion PZ Tel B. Unfortunately, we do not detect a polarization signal in a first analysis. We estimate preliminary 1σ upper limits on the degree of linear polarization of ˜ 1% and ˜ 0.1% for the planets of HR 8799 and PZ Tel B, respectively. The achieved sub-percent sensitivity and accuracy show that our technique has great promise for characterizing exoplanets through direct-imaging polarimetry.

  17. Comprehensive identification and structural characterization of target components from Gelsemium elegans by high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry based on accurate mass databases combined with MS/MS spectra.

    PubMed

    Liu, Yan-Chun; Xiao, Sa; Yang, Kun; Ling, Li; Sun, Zhi-Liang; Liu, Zhao-Ying

    2017-06-01

    This study reports an applicable analytical strategy for the comprehensive identification and structural characterization of target components from Gelsemium elegans by using high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (LC-QqTOF MS) based on the use of accurate mass databases combined with MS/MS spectra. The databases created included accurate masses and elemental compositions of 204 components from Gelsemium and their structural data. The accurate MS and MS/MS spectra were acquired through a data-dependent auto MS/MS mode, followed by extraction of the potential compounds from the LC-QqTOF MS raw data of the sample; these were then matched against the databases to search for target components in the sample. The structures of the detected components were tentatively characterized by manually interpreting the accurate MS/MS spectra for the first time. A total of 57 components were successfully detected and structurally characterized from the crude extracts of G. elegans, although some isomers could not be differentiated. This analytical strategy is generic and efficient, avoids isolation and purification procedures, enables a comprehensive structural characterization of target components of Gelsemium and should be widely applicable to complicated mixtures derived from Gelsemium preparations. Copyright © 2017 John Wiley & Sons, Ltd.

  18. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes.

  19. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
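
    As a concrete illustration of the directive insertion described above (a hedged sketch, not actual CAPO output; the function and variable names are invented), the classification of private, reduction, and shared variables for a simple loop looks like this:

        #include <cstdio>
        #include <vector>

        // Sketch of the kind of OpenMP directive a tool such as CAPO inserts:
        // the loop index and 'tmp' are private, 'sum' is a reduction variable,
        // and the array 'a' is shared across threads.
        double weighted_sum(const std::vector<double>& a, double w) {
            double sum = 0.0;
            #pragma omp parallel for default(shared) reduction(+ : sum)
            for (long i = 0; i < static_cast<long>(a.size()); ++i) {
                double tmp = w * a[i];  // declared in the loop body, hence private
                sum += tmp;             // per-thread partial sums combined at the join
            }
            return sum;
        }

        int main() {
            std::vector<double> a(1000, 1.0);
            std::printf("weighted sum = %f\n", weighted_sum(a, 2.0));
            return 0;
        }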

  20. Accurate tissue characterization in low-dose CT imaging with pure iterative reconstruction.

    PubMed

    Murphy, Kevin P; McLaughlin, Patrick D; Twomey, Maria; Chan, Vincent E; Moloney, Fiachra; Fung, Adrian J; Chan, Faimee E; Kao, Tafline; O'Neill, Siobhan B; Watson, Benjamin; O'Connor, Owen J; Maher, Michael M

    2017-04-01

    We assess the ability of low-dose hybrid iterative reconstruction (IR) and 'pure' model-based IR (MBIR) images to maintain accurate Hounsfield unit (HU)-determined tissue characterization. Standard-protocol (SP) and low-dose modified-protocol (MP) CTs were contemporaneously acquired in 34 Crohn's disease patients referred for CT. SP image reconstruction was via the manufacturer's recommendations (60% FBP, filtered back projection; 40% ASiR, Adaptive Statistical iterative Reconstruction; SP-ASiR40). MP data sets underwent four reconstructions (100% FBP; 40% ASiR; 70% ASiR; MBIR). Three observers measured tissue volumes using HU thresholds for fat, soft tissue and bone/contrast on each data set. Analysis was via SPSS. Inter-observer agreement was strong for 1530 datapoints (rs > 0.9). MP-MBIR tissue volume measurement was superior to other MP reconstructions and closely correlated with the reference SP-ASiR40 images for all tissue types. MP-MBIR superiority was most marked for fat volume calculation: close SP-ASiR40 and MP-MBIR Bland-Altman plot correlation was seen, with the lowest average difference (336 cm³) when compared with other MP reconstructions. Hounsfield unit-determined tissue volume calculations from MP-MBIR images resulted in values comparable to SP-ASiR40 calculations and superior to those from MP-ASiR images. The accuracy of estimating tissue volumes (e.g. fat) with segmentation software on low-dose CT images appears optimal when images are reconstructed with pure IR. © 2016 The Royal Australian and New Zealand College of Radiologists.

  1. BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations

    PubMed Central

    Ghaffarizadeh, Ahmadreza; Friedman, Samuel H.; Macklin, Paul

    2016-01-01

    Motivation: Computational models of multicellular systems require solving systems of PDEs for release, uptake, decay and diffusion of multiple substrates in 3D, particularly when incorporating the impact of drugs, growth substrates and signaling factors on cell receptors and subcellular systems biology. Results: We introduce BioFVM, a diffusive transport solver tailored to biological problems. BioFVM can simulate release and uptake of many substrates by cell and bulk sources, diffusion and decay in large 3D domains. It has been parallelized with OpenMP, allowing efficient simulations on desktop workstations or single supercomputer nodes. The code is stable even for large time steps, with linear computational cost scalings. Solutions are first-order accurate in time and second-order accurate in space. The code can be run by itself or as part of a larger simulator. Availability and implementation: BioFVM is written in C++ with parallelization in OpenMP. It is maintained and available for download at http://BioFVM.MathCancer.org and http://BioFVM.sf.net under the Apache License (v2.0). Contact: paul.macklin@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656933
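
    To make the parallelization strategy concrete, here is a minimal sketch (not BioFVM source; the names and the 1-D reduction are illustrative) of a forward-Euler diffusion step, first-order in time and second-order in space, with the spatial loop shared among OpenMP threads:

        #include <cstdio>
        #include <vector>

        // One explicit step of 1-D diffusion: first-order in time (forward
        // Euler), second-order in space (central difference), parallelized
        // with OpenMP in the spirit of the 3-D solver described above.
        void diffusion_step(std::vector<double>& u, double D, double dt, double dx) {
            const long n = static_cast<long>(u.size());
            std::vector<double> next(u);
            const double r = D * dt / (dx * dx);  // stable here since r <= 0.5
            #pragma omp parallel for
            for (long i = 1; i < n - 1; ++i) {
                next[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
            }
            u.swap(next);
        }

        int main() {
            std::vector<double> u(128, 0.0);
            u[64] = 1.0;  // point source
            for (int step = 0; step < 100; ++step) diffusion_step(u, 1.0, 0.1, 1.0);
            std::printf("u[64] = %g\n", u[64]);
            return 0;
        }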

  2. Accurate phylogenetic classification of DNA fragments based onsequence composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
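
    To make "sequence composition" concrete, the sketch below computes the kind of feature vector such classifiers take as input, a normalized 4-mer frequency profile of a DNA fragment (an illustrative implementation; the names are invented, not taken from PhyloPythia):

        #include <array>
        #include <cstdio>
        #include <string>

        // Normalized 4-mer frequency vector (4^4 = 256 entries) of a DNA
        // fragment; windows containing ambiguous bases are skipped.
        std::array<double, 256> tetramer_profile(const std::string& seq) {
            auto code = [](char c) -> int {
                switch (c) {
                    case 'A': return 0; case 'C': return 1;
                    case 'G': return 2; case 'T': return 3;
                    default:  return -1;  // ambiguous base (e.g. 'N')
                }
            };
            std::array<double, 256> freq{};
            double total = 0.0;
            for (std::size_t i = 0; i + 4 <= seq.size(); ++i) {
                int idx = 0;
                bool ok = true;
                for (std::size_t j = 0; j < 4; ++j) {
                    const int c = code(seq[i + j]);
                    if (c < 0) { ok = false; break; }
                    idx = idx * 4 + c;
                }
                if (ok) { freq[idx] += 1.0; total += 1.0; }
            }
            if (total > 0.0)
                for (double& f : freq) f /= total;  // counts -> frequencies
            return freq;
        }

        int main() {
            const auto p = tetramer_profile("ACGTACGTACGTNNACGT");
            std::printf("freq[ACGT] = %g\n", p[27]);  // index of "ACGT" is 27
            return 0;
        }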

  3. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    NASA Astrophysics Data System (ADS)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  4. SU-E-T-112: Experimental Characterization of a Novel Thermal Reservoir for Consistent and Accurate Annealing of High-Sensitivity TLDs.

    PubMed

    Donahue, W; Bongiorni, P; Hearn, R; Rodgers, J; Nath, R; Chen, Z

    2012-06-01

    To develop and characterize a novel thermal reservoir for consistent and accurate annealing of high-sensitivity thermoluminescence dosimeters (TLD-100H) for dosimetry of brachytherapy sources. The sensitivity of TLD-100H is about 18 times that of TLD-100, which offers clear advantages for interstitial brachytherapy sources. However, TLD-100H requires a short high-temperature annealing cycle (15 min.), and opening and closing the oven door causes significant temperature fluctuations, leading to unreliable measurements. A new thermal reservoir made of aluminum alloy was developed to provide a stable temperature environment in a standard hot-air oven. The thermal reservoir consisted of a 20 cm × 20 cm × 8 cm Al block with a machine-milled chamber in the middle to house the aluminum TLD holding tray. The thermal reservoir was placed inside the oven until it reached thermal equilibrium with the oven chamber. The temperatures of the oven chamber, heat reservoir, and TLD holding tray were monitored by two independent thermocouples interfaced digitally to a control computer. A LabVIEW interface was written for monitoring and recording the temperatures of the TLD holding tray, the thermal reservoir, and the oven chamber. The temperature profiles were measured as a function of oven-door open duration. The settings for oven chamber temperature and oven-door open-close duration were optimized to achieve a stable temperature of 240 °C in the TLD holding tray. Complete temperature profiles of the TLD annealing tray over the entire annealing process were obtained. The use of the thermal reservoir significantly reduced the temperature fluctuations caused by the opening of the oven door when inserting the TLD holding tray into the oven chamber, and enabled consistent annealing of high-sensitivity TLDs.

  5. Accurate Characterization of Benign and Cancerous Breast Tissues: Aspecific Patient Studies using Piezoresistive Microcantilevers

    PubMed Central

    PANDYA, HARDIK J.; ROY, RAJARSHI; CHEN, WENJIN; CHEKMAREVA, MARINA A.; FORAN, DAVID J.; DESAI, JAYDEV P.

    2014-01-01

    Breast cancer is the most frequently detected cancer among women in the US. In this work, our team reports on the development of piezoresistive microcantilevers (PMCs) to investigate their potential use in the accurate detection and characterization of benign and diseased breast tissues by performing indentations on micro-scale tissue specimens. The PMCs used in these experiments have been fabricated using a laboratory-made silicon-on-insulator (SOI) substrate, which significantly reduces the fabrication costs. The PMCs are 260 μm long, 35 μm wide and 2 μm thick, with a resistivity on the order of 1.316 × 10⁻³ Ω-cm obtained by using a boron diffusion technique. For indenting the tissue, we utilized an 8 μm thick cylindrical SU-8 tip. The PMC was calibrated against a known AFM probe. Breast tissue cores from seven different specimens were indented using PMCs to identify benign and cancerous tissue cores. Furthermore, field emission scanning electron microscopy (FE-SEM) of benign and cancerous specimens showed marked differences in tissue morphology, which further validates our observed experimental data with the PMCs. While these patient-aspecific feasibility studies clearly demonstrate the ability to discriminate between benign and cancerous breast tissues, further investigation is necessary to perform automated mechano-phenotyping (classification) of breast cancer: from onset to disease progression. PMID:25128621

  6. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  7. The John Charnley Award: an accurate and sensitive method to separate, display, and characterize wear debris: part 1: polyethylene particles.

    PubMed

    Billi, Fabrizio; Benya, Paul; Kavanaugh, Aaron; Adams, John; Ebramzadeh, Edward; McKellop, Harry

    2012-02-01

    Numerous studies indicate highly crosslinked polyethylenes reduce the wear debris volume generated by hip arthroplasty acetabular liners. This, in turn, requires new methods to isolate and characterize the particles. We describe a method for extracting polyethylene wear particles from the bovine serum typically used in wear tests and for characterizing their size, distribution, and morphology. Serum proteins were completely digested using an optimized enzymatic digestion method that prevented the loss of the smallest particles and minimized their clumping. Density-gradient ultracentrifugation was designed to remove contaminants and recover the particles without filtration, depositing them directly onto a silicon wafer. This provided uniform distribution of the particles and high contrast against the background, facilitating accurate, automated, morphometric image analysis. The accuracy and precision of the new protocol were assessed by recovering and characterizing particles from wear tests of three types of polyethylene acetabular cups (no crosslinking and 5 Mrads and 7.5 Mrads of gamma irradiation crosslinking). The new method demonstrated important differences in the particle size distributions and morphologic parameters among the three types of polyethylene that could not be detected using prior isolation methods. The new protocol overcomes a number of limitations, such as loss of nanometer-sized particles and artifactual clumping, among others. The analysis of polyethylene wear particles produced in joint simulator wear tests of prosthetic joints is a key tool to identify the wear mechanisms that produce the particles and predict and evaluate their effects on periprosthetic tissues.

  8. Accurate Characterization of the Pore Volume in Microporous Crystalline Materials

    PubMed Central

    2017-01-01

    Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. We show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms. PMID:28636815

  9. Accurate Characterization of the Pore Volume in Microporous Crystalline Materials

    DOE PAGES

    Ongari, Daniele; Boyd, Peter G.; Barthel, Senja; ...

    2017-06-21

    Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lastly, we show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.

  10. High-accurate optical fiber liquid level sensor

    NASA Astrophysics Data System (ADS)

    Sun, Dexing; Chen, Shouliu; Pan, Chao; Jin, Henghuan

    1991-08-01

    A highly accurate optical fiber liquid level sensor is presented. A single-chip microcomputer is used to process and control the signal. The sensor is intrinsically safe and explosion-proof, so it can be applied in any liquid-level detection setting, especially in the oil and chemical industries. The theory and experiments on improving the measurement accuracy are described. The relative error over the 10 m measurement range is within 0.01%.

  11. Determination of accurate vertical atmospheric profiles of extinction and turbulence

    NASA Astrophysics Data System (ADS)

    Hammel, Steve; Campbell, James; Hallenborg, Eric

    2017-09-01

    Our ability to generate an accurate vertical profile characterizing the atmosphere from the surface to a point above the boundary layer top is quite rudimentary. The region from a land or sea surface to an altitude of 3000 meters is dynamic and particularly important to the performance of many active optical systems. Accurate and agile instruments are necessary to provide measurements in various conditions, and models are needed to provide the framework and predictive capability necessary for system design and optimization. We introduce some of the path characterization instruments and describe the first work to calibrate and validate them. Along with a verification of measurement accuracy, the tests must also establish each instrument's performance envelope. Measurement of these profiles in the field is a problem, and we will present a discussion of recent field test activity to address this issue. The Comprehensive Atmospheric Boundary Layer Extinction/Turbulence Resolution Analysis eXperiment (CABLE/TRAX) was conducted in late June 2017. There were two distinct objectives for the experiment: 1) a comparison test of various scintillometers and transmissometers on a homogeneous horizontal path; and 2) a vertical profile experiment. In this paper we discuss only the vertical profiling effort, and we describe the instruments that generated data for vertical profiles of absorption, scattering, and turbulence. These three profiles are the core requirements for an accurate assessment of laser beam propagation.

  12. A CASE STUDY ILLUSTRATING THE IMPORTANCE OF ACCURATE SITE CHARACTERIZATION

    EPA Science Inventory

    Too frequently, researchers rely on incomplete site characterization data to determine the placement of the sampling wells. They forget that it is these sampling wells that will be used to evaluate the effectiveness of their research efforts. This case study illustrates the eff...

  13. Archer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atzeni, Simone; Ahn, Dong; Gopalakrishnan, Ganesh

    2017-01-12

    Archer is built on top of the LLVM/Clang compilers that support OpenMP. It applies static and dynamic analysis techniques to detect data races in OpenMP programs while incurring very low runtime and memory overhead. Static analyses identify data-race-free OpenMP regions and exclude them from runtime analysis, which is performed by the ThreadSanitizer included in LLVM/Clang.
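
    For context, the canonical bug such a tool reports is an unsynchronized update shared among OpenMP threads. The sketch below (illustrative only, not taken from Archer's documentation) races on 'counter'; with a recent LLVM/Clang one would typically compile with -fopenmp -fsanitize=thread so the race is flagged at runtime, though the exact invocation may vary across Archer versions:

        #include <cstdio>

        int main() {
            int counter = 0;
            #pragma omp parallel for
            for (int i = 0; i < 1000000; ++i) {
                // DATA RACE: unsynchronized read-modify-write on 'counter'.
                // Fix: '#pragma omp atomic update' or 'reduction(+ : counter)'.
                counter++;
            }
            std::printf("counter = %d (likely less than 1000000)\n", counter);
            return 0;
        }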

  14. Ultrasonic geometrical characterization of periodically corrugated surfaces.

    PubMed

    Liu, Jingfei; Declercq, Nico F

    2013-04-01

    Accurate characterization of the characteristic dimensions of a periodically corrugated surface using an ultrasonic imaging technique is investigated both theoretically and experimentally. The possibility of accurately characterizing the characteristic dimensions is discussed. The condition for accurate characterization and the quantitative relationship between the accuracy and its determining parameters are given. Strategies to avoid diffraction effects instigated by the periodic nature of a corrugated surface are also discussed. Major causes of erroneous measurements are theoretically discussed and experimentally illustrated. A comparison is made between the presented results and optical measurements, revealing acceptable agreement. This work realistically exposes the capability of the proposed ultrasonic technique to accurately characterize the lateral and vertical characteristic dimensions of corrugated surfaces. Both the general principles developed theoretically and the proposed practical techniques may serve as useful guidelines. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. mm_par2.0: An object-oriented molecular dynamics simulation program parallelized using a hierarchical scheme with MPI and OPENMP

    NASA Astrophysics Data System (ADS)

    Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo

    2012-02-01

    …decomposition is not popular due to its poor scalability. The domain decomposition scheme, on the other hand, scales better, but it is still limited in utilizing a large number of cores on recent petascale computers by the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OpenMP [8]. Summary of revisions: (1) object-oriented programming has been used; (2) a hierarchical parallelization scheme has been adopted; (3) the SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussions on programming and debugging. Running time: depends on system size and methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å × 62.23 Å × 62.23 Å, benchmark results are given in Fig. 1 (timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OpenMP threads). The potential cutoff distance was set to 12 Å, with a switching function applied from 10 Å for the real-space force calculation. For the SPME [12] calculation, the three grid dimensions were each set to 64 and the interpolation order was set to 4; the Intel MKL library was used for the fast Fourier transforms. All bonds involving hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 (timing results for 1000 MD steps from a double-precision simulation on CPU) shows performance gains from the CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz with a GeForce GTX 580; although mm_par2.0 has not yet been ported to GPU, these data indicate the performance to be expected there.
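
    A minimal sketch of such a hierarchical MPI+OpenMP scheme (not mm_par code; the per-particle work is a placeholder): MPI ranks own spatial domains, while OpenMP threads share the loop over each domain's particles.

        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            int provided = 0;
            // FUNNELED is enough here: only the master thread makes MPI calls.
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            int rank = 0, nranks = 1;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            std::vector<double> force(100000, 0.0);  // this rank's particles
            double local_energy = 0.0;

            // Second parallelization level: OpenMP threads within the rank.
            #pragma omp parallel for reduction(+ : local_energy)
            for (long i = 0; i < static_cast<long>(force.size()); ++i) {
                force[i] = 1.0;       // placeholder pair-force evaluation
                local_energy += 0.5;  // placeholder energy contribution
            }

            double total_energy = 0.0;
            MPI_Reduce(&local_energy, &total_energy, 1, MPI_DOUBLE, MPI_SUM,
                       0, MPI_COMM_WORLD);
            if (rank == 0)
                std::printf("E = %g on %d ranks\n", total_energy, nranks);
            MPI_Finalize();
            return 0;
        }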

  16. Waste Characterization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle

    2016-02-02

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.

  17. On a model of three-dimensional bursting and its parallel implementation

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Garzón, E. M.; Ramos, J. I.

    2008-04-01

    A mathematical model for the simulation of three-dimensional bursting phenomena and its parallel implementation are presented. The model consists of four nonlinearly coupled partial differential equations that include fast and slow variables, and exhibits bursting in the absence of diffusion. The differential equations have been discretized by means of a linearly-implicit finite difference method, second-order accurate in both space and time, on equally spaced grids. The resulting system of linear algebraic equations at each time level has been solved by means of the Preconditioned Conjugate Gradient (PCG) method. Three different parallel implementations of the proposed mathematical model have been developed; two of these implementations, i.e., the MPI and the PETSc codes, are based on a message passing paradigm, while the third one, i.e., the OpenMP code, is based on a shared address space paradigm. These three implementations are evaluated on two current high performance parallel architectures, i.e., a dual-processor cluster and a Shared Distributed Memory (SDM) system. A novel representation of the results, emphasizing the most relevant factors that affect the performance of the parallel implementations, is proposed. The comparative analysis of the computational results shows that the MPI and the OpenMP implementations are about twice as efficient as the PETSc code on the SDM system. It is also shown that, for the conditions reported here, the nonlinear dynamics of the three-dimensional bursting phenomena exhibits three stages characterized by asynchronous, synchronous and then asynchronous oscillations, before a quiescent state is reached. It is also shown that the fast system reaches steady state in much less time than the slow variables.
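
    As a concrete fragment of the shared-memory variant (a sketch of a standard PCG building block, not the authors' code), the inner products that dominate each PCG iteration map directly onto an OpenMP reduction:

        #include <cstdio>
        #include <vector>

        // Dot product, a kernel called several times per PCG iteration;
        // each thread accumulates a partial sum combined at the join.
        double dot(const std::vector<double>& x, const std::vector<double>& y) {
            double s = 0.0;
            #pragma omp parallel for reduction(+ : s)
            for (long i = 0; i < static_cast<long>(x.size()); ++i)
                s += x[i] * y[i];
            return s;
        }

        int main() {
            std::vector<double> x(1 << 20, 0.5), y(1 << 20, 2.0);
            std::printf("dot = %g\n", dot(x, y));  // expect 1048576
            return 0;
        }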

  18. CLOMP v1.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyllenhaal, J.

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading. For simplicity, it does not use MPI by default but it is expected to be run on the resources a threaded MPI task would use (e.g., a portion of a shared memory compute node). Compiling with -DWITH_MPI allows packing one or more nodes with CLOMP tasks and having CLOMP report OpenMP performance for the slowest MPI task. On current systems, the strong scaling performance results for 4, 8, or 16 threads are of the most interest. Suggested weak scaling inputs are provided for evaluating future systems. Since MPI is often used to place at least one MPI task per coherence or NUMA domain, it is recommended to focus OpenMP runtime measurements on a subset of node hardware where it is most possible to have low OpenMP overheads (e.g., within one coherence domain or NUMA domain).
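
    The overheads CLOMP measures can be approximated by hand (an illustrative sketch, not CLOMP itself): time many nearly empty parallel regions and divide by the trial count to estimate the fork/join cost.

        #include <cstdio>
        #include <omp.h>

        int main() {
            const int trials = 10000;
            long sink = 0;  // written by thread 0 only, so no data race
            const double t0 = omp_get_wtime();
            for (int t = 0; t < trials; ++t) {
                #pragma omp parallel
                {
                    if (omp_get_thread_num() == 0) ++sink;  // trivial body
                }
            }
            const double t1 = omp_get_wtime();
            std::printf("approx fork/join cost: %.3f us per region (sink=%ld)\n",
                        (t1 - t0) / trials * 1e6, sink);
            return 0;
        }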

  19. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide the means for a more accurate method to score embryo viability.

  20. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, either are too specific to applications or architectures or are not as powerful or flexible. In this paper, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by end users or high-level programming models. We describe the design, implementation, and performance characterization of Argobots and present integrations with three high-level models: OpenMP, MPI, and colocated I/O services. Evaluations show that (1) Argobots, while providing richer capabilities, is competitive with existing simpler generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency-hiding capabilities; and (4) I/O services with Argobots reduce interference with colocated applications while achieving performance competitive with that of a Pthreads approach.
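
    The fine-grained concurrency such a runtime targets can be pictured with plain OpenMP tasks (shown here as a stand-in rather than the Argobots API itself): many short-lived recursive tasks whose scheduling cost, not their work, dominates, which is exactly where a lightweight user-level runtime pays off.

        #include <cstdio>

        // Naive task-parallel Fibonacci: a stress test of task creation and
        // scheduling overhead rather than a sensible way to compute fib(n).
        long fib(long n) {
            if (n < 2) return n;
            long a = 0, b = 0;
            #pragma omp task shared(a)
            a = fib(n - 1);
            #pragma omp task shared(b)
            b = fib(n - 2);
            #pragma omp taskwait
            return a + b;
        }

        int main() {
            long r = 0;
            #pragma omp parallel
            #pragma omp single
            r = fib(20);
            std::printf("fib(20) = %ld\n", r);  // 6765
            return 0;
        }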

  1. Accurate means of detecting and characterizing abnormal patterns of ventricular activation by phase image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botvinick, E.H.; Frais, M.A.; Shosa, D.W.

    1982-08-01

    The ability of scintigraphic phase image analysis to characterize patterns of abnormal ventricular activation was investigated. The pattern of phase distribution and sequential phase changes over both right and left ventricular regions of interest were evaluated in 16 patients with normal electrical activation and wall motion and compared with those in 8 patients with an artificial pacemaker and 4 patients with sinus rhythm with the Wolff-Parkinson-White syndrome and delta waves. Normally, the site of earliest phase angle was seen at the base of the interventricular septum, with sequential change affecting the body of the septum and the cardiac apex and then spreading laterally to involve the body of both ventricles. The site of earliest phase angle was located at the apex of the right ventricle in seven patients with a right ventricular endocardial pacemaker and on the lateral left ventricular wall in one patient with a left ventricular epicardial pacemaker. In each case the site corresponded exactly to the position of the pacing electrode as seen on posteroanterior and left lateral chest X-ray films, and sequential phase changes spread from the initial focus to affect both ventricles. In each of the patients with the Wolff-Parkinson-White syndrome, the site of earliest ventricular phase angle was located, and it corresponded exactly to the site of the bypass tract as determined by endocardial mapping. In this way, four bypass pathways, two posterior left paraseptal, one left lateral and one right lateral, were correctly localized scintigraphically. On the basis of the sequence of mechanical contraction, phase image analysis provides an accurate noninvasive method of detecting abnormal foci of ventricular activation.

  2. Funnel metadynamics as accurate binding free-energy method

    PubMed Central

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  3. An infrastructure for accurate characterization of single-event transients in digital circuits.

    PubMed

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-11-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.

  4. An infrastructure for accurate characterization of single-event transients in digital circuits

    PubMed Central

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-01-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbara Chapman

    OpenMP was not well recognized at the beginning of the project, around 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standards organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed-memory systems by exploring different programming models, language extensions, compiler optimizations, and runtime library support.

  6. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyllenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other performance impacts due to threading (such as NUMA memory layouts, memory contention, and cache effects) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than many of our applications require for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable, allowing a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.
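
    A minimal illustration of the kind of measurement such a benchmark performs (an illustrative sketch, not CLOMP's actual code): timing many nearly empty parallel regions so that the mean cost is dominated by OpenMP fork/join and barrier overhead.

        /* Sketch: estimate the per-parallel-region overhead of an OpenMP
         * runtime. Compile with e.g. "cc -O2 -fopenmp overhead.c". */
        #include <omp.h>
        #include <stdio.h>

        #define TRIALS 10000

        int main(void) {
            int sink = 0;                  /* keeps the region from being
                                              optimized away */
            double t0 = omp_get_wtime();
            for (int i = 0; i < TRIALS; i++) {
                #pragma omp parallel
                {
                    #pragma omp atomic
                    sink++;                /* trivial body: measured time is
                                              dominated by fork/join cost */
                }
            }
            double t1 = omp_get_wtime();
            printf("mean parallel-region cost: %.3f us (sink=%d)\n",
                   1e6 * (t1 - t0) / TRIALS, sink);
            return 0;
        }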

  7. Accurate modeling and evaluation of microstructures in complex materials

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on successively calculating a conditional probability by which the initial stochastic models are produced. A graph formulation is then utilized for removing unrealistic structures. For Is with highly connected microstructures and long-range features, a distance transform function is considered, which results in a new, more informative I. Reproduction of the I is also addressed through a histogram-matching approach in an iterative framework; such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on a pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.

  8. A multi-threaded version of MCFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, John M.; Ellis, R. Keith; Giele, Walter T.

    We report on our experience modifying MCFM to implement multi-threading using OpenMP. With OpenMP, the modified MCFM executes on any processor, automatically adjusting to the number of available threads. We modified the integration routine VEGAS to distribute the event evaluation over the threads, combining all events at the end of every iteration to optimize the numerical integration. Furthermore, we took special care that the results of the Monte Carlo integration were independent of the number of threads used, to facilitate validation of the OpenMP version of MCFM.
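
    The thread-independence requirement is the interesting design point: if each event's random numbers are derived from the event index rather than from the executing thread, the set of sampled points does not depend on the thread count. A hedged C/OpenMP sketch of this pattern, using a toy integrand rather than MCFM or VEGAS itself:

        /* Sketch: thread-count-independent Monte Carlo integration of
         * f(x) = 4/(1+x^2) on [0,1] (exact value: pi). Each event seeds
         * its own RNG stream from the event index, so every event's
         * sample is the same no matter which thread computes it. */
        #include <omp.h>
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        static double f(double x) { return 4.0 / (1.0 + x * x); }

        int main(void) {
            const long n = 10000000L;
            double sum = 0.0, sum2 = 0.0;

            #pragma omp parallel for reduction(+:sum,sum2) schedule(static)
            for (long i = 0; i < n; i++) {
                unsigned seed = (unsigned)(i + 1);   /* per-event stream */
                double x = (rand_r(&seed) + 0.5) / ((double)RAND_MAX + 1.0);
                double w = f(x);
                sum  += w;
                sum2 += w * w;
            }
            double mean = sum / n;
            double err  = sqrt((sum2 / n - mean * mean) / n);
            printf("I = %.6f +- %.6f (expected 3.141593)\n", mean, err);
            return 0;
        }

    Only the floating-point summation order of the reduction can still vary with the thread count; bitwise-identical totals would additionally require a fixed combination order, e.g. serially merging per-chunk partial sums.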

  9. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other means to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve performance equal to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors, and show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  10. Tip Characterization Method using Multi-feature Characterizer for CD-AFM

    PubMed Central

    Orji, Ndubuisi G.; Itoh, Hiroshi; Wang, Chumei; Dixson, Ronald G.; Walecki, Peter S.; Schmidt, Sebastian W.; Irmer, Bernd

    2016-01-01

    In atomic force microscopy (AFM) metrology, the tip is a key source of uncertainty. Images taken with an AFM show a change in feature width and shape that depends on tip geometry. This geometric dilation is more pronounced when measuring features with high aspect ratios, and makes it difficult to obtain absolute dimensions. In order to accurately measure nanoscale features using an AFM, the tip dimensions should be known with a high degree of precision. We evaluate a new AFM tip characterizer and apply it to critical dimension AFM (CD-AFM) tips used for high-aspect-ratio features. The characterizer is made up of comb-shaped lines and spaces, and includes a series of gratings that could be used as an integrated nanoscale length reference. We also demonstrate a simulation method that could be used to specify what range of tip sizes and shapes the characterizer can measure. Our experiments show that for non-re-entrant features, the results obtained with this characterizer are consistent to 1 nm with the results obtained using widely accepted but slower methods that are common practice in CD-AFM metrology. A validation of the integrated length standard using displacement interferometry indicates a uniformity of better than 0.75%, suggesting that the sample could be used as a highly accurate, SI-traceable lateral scale for the whole evaluation process. PMID:26720439

  11. Generating Accurate Urban Area Maps from Nighttime Satellite (DMSP/OLS) Data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc; Lawrence, William; Elvidge, Christopher

    2000-01-01

    There has been increasing interest by the international research community in using the nighttime-acquired "city-lights" data sets collected by the US Defense Meteorological Satellite Program's Operational Linescan System to study issues related to urbanization. Many researchers are interested in using these data to estimate human demographic parameters over large areas and then characterize the interactions between urban development, natural ecosystems, and other aspects of the human enterprise. Many of these attempts rely on an ability to accurately identify urbanized area. However, beyond the simple determination of the loci of human activity, using these data to generate accurate estimates of urbanized area can be problematic. Sensor blooming and registration error can cause large overestimates of urban land based on a simple measure of lit area from the raw data. We discuss these issues, show results of an attempt to build a historical urban growth model for Egypt, and then describe a few basic processing techniques that use geo-spatial analysis to threshold the DMSP data to accurately estimate urbanized areas. Algorithm results are shown for the United States, and an application of the data to estimate the impact of urban sprawl on sustainable agriculture in the US and China is described.

  12. Characterization of condenser microphones under different environmental conditions for accurate speed of sound measurements with acoustic resonators.

    PubMed

    Guianvarc'h, Cécile; Gavioso, Roberto M; Benedetto, Giuliana; Pitre, Laurent; Bruneau, Michel

    2009-07-01

    Condenser microphones are commonly used and have been extensively modeled and characterized in air at ambient temperature and static pressure. However, several applications of interest for metrology and physical acoustics require the use of these transducers in significantly different environmental conditions. In particular, the extremely accurate determination of the speed of sound in monoatomic gases, which is pursued for a determination of the Boltzmann constant k by an acoustic method, entails the use of condenser microphones mounted within a spherical cavity, over a wide range of static pressures, at the temperature of the triple point of water (273.16 K). To further increase the accuracy achievable in this application, the microphone frequency response and its acoustic input impedance need to be precisely determined over the same static pressure and temperature range. Few previous works examined the influence of static pressure, temperature, and gas composition on the microphone's sensitivity. In this work, the results of relative calibrations of 1/4 in. condenser microphones obtained using an electrostatic actuator technique are presented. The calibrations are performed in pure helium and argon gas at temperatures near 273 K and in the pressure range between 10 and 600 kPa. These experimental results are compared with the predictions of a realistic model available in the literature, finding remarkably good agreement. The model provides an estimate of the acoustic impedance of 1/4 in. condenser microphones as a function of frequency and static pressure and is used to calculate the corresponding frequency perturbations induced on the normal modes of a spherical cavity when it is filled with helium or argon gas.

  13. Fast and Accurate Support Vector Machines on Large Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry

    Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary --- also known as a hyperplane --- which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed-memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively --- potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm --- the de facto sequential SVM software --- which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets, such as the UCI HIGGS Boson dataset and the Offending URL dataset.

  14. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  15. Accurate characterization and understanding of interface trap density trends between atomic layer deposited dielectrics and AlGaN/GaN with bonding constraint theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanan, Narayanan; Lee, Bongmook; Misra, Veena, E-mail: vmisra@ncsu.edu

    2015-06-15

    Many dielectrics have been proposed for the gate stack or passivation of AlGaN/GaN based metal oxide semiconductor heterojunction field effect transistors, to reduce gate leakage and current collapse, both for power and RF applications. Atomic Layer Deposition (ALD) is preferred for dielectric deposition as it provides uniform, conformal, and high quality films with precise monolayer control of film thickness. Identification of the optimum ALD dielectric for the gate stack or passivation requires a critical investigation of traps created at the dielectric/AlGaN interface. In this work, a pulsed-IV traps characterization method has been used for accurate characterization of interface traps with a variety of ALD dielectrics. High-k dielectrics (HfO₂, HfAlO, and Al₂O₃) are found to host a high density of interface traps with AlGaN. In contrast, ALD SiO₂ shows the lowest interface trap density (<2 × 10¹² cm⁻²) after annealing above 600 °C in N₂ for 60 s. The trend in observed trap densities is subsequently explained with bonding constraint theory, which predicts a high density of interface traps due to a higher coordination state and bond strain in high-k dielectrics.

  16. Accurate read-based metagenome characterization using a hierarchical suite of unique signatures

    PubMed Central

    Freitas, Tracey Allen K.; Li, Po-E; Scholz, Matthew B.; Chain, Patrick S. G.

    2015-01-01

    A major challenge in the field of shotgun metagenomics is the accurate identification of organisms present within a microbial community, based on classification of short sequence reads. Though existing microbial community profiling methods have attempted to rapidly classify the millions of reads output from modern sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, errors and biases in sequencing technologies, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here, we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling method with significantly and consistently smaller FDR than any other available method. Our algorithm circumvents false positives using a series of non-redundant signature databases and examines Genomic Origins Through Taxonomic CHAllenge (GOTTCHA). GOTTCHA was tested and validated on 20 synthetic and mock datasets ranging in community composition and complexity, was applied successfully to data generated from spiked environmental and clinical samples, and robustly demonstrates superior performance compared with other available tools. PMID:25765641

  17. The importance and attainment of accurate absolute radiometric calibration

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1984-01-01

    The importance of accurate absolute radiometric calibration is discussed by reference to the needs of those wishing to validate or use models describing the interaction of electromagnetic radiation with the atmosphere and earth surface features. The in-flight calibration methods used for the Landsat Thematic Mapper (TM) and the Systeme Probatoire d'Observation de la Terre, Haute Resolution visible (SPOT/HRV) systems are described and their limitations discussed. The questionable stability of in-flight absolute calibration methods suggests the use of a radiative transfer program to predict the apparent radiance, at the entrance pupil of the sensor, of a ground site of measured reflectance imaged through a well characterized atmosphere. The uncertainties of such a method are discussed.

  18. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  19. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  20. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    Army Research Laboratory technical report ARL-TR-6761, by Stephen C. Tratz, Computational and Information Sciences Directorate, January 2014. Approved for public release.

  1. Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma

    PubMed Central

    Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.

    2013-01-01

    Background: Sputum eosinophils (Eos) are a strong predictor of airway inflammation and exacerbations, and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO), or FEV1% predicted may predict airway Eos, while age, FEV1% predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives: To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and whether age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods: Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results: Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1% predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found that FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos, but accurately assigned only 41% of samples. Conclusion: Despite statistically significant associations, FeNO, IgE, blood Eos and Neu, FEV1% predicted, and age are poor surrogates, separately and combined, for accurately predicting sputum eosinophils and neutrophils. PMID:23706399

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  7. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  8. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets.

    PubMed

    Shrimankar, D D; Sathe, S R

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results hold across a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.

  9. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    PubMed Central

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results hold across a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868
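
    To make the tile idea concrete, here is a hedged sketch (not the authors' code; tile size, scoring constants, and the random sequences are illustrative) of a tile-based wavefront parallelization of a local-alignment score matrix in C with OpenMP. Tiles on the same anti-diagonal are mutually independent and can be scored in parallel; the implicit barrier after each diagonal is exactly the kind of OpenMP synchronization cost the study measures.

        /* Sketch: Smith-Waterman-style scoring with tile-level wavefront
         * parallelism. A tile depends only on its upper, left, and
         * upper-left neighbor tiles, i.e. on earlier anti-diagonals. */
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N 2048                 /* sequence length (multiple of T) */
        #define T 128                  /* tile edge */
        #define GAP 2
        #define MATCH 3
        #define MISMATCH (-1)

        static int H[N + 1][N + 1];    /* DP matrix, row/col 0 stay zero */
        static char a[N], b[N];

        static int max4(int w, int x, int y, int z) {
            int m = w > x ? w : x;
            m = m > y ? m : y;
            return m > z ? m : z;
        }

        static void score_tile(int ti, int tj) {
            for (int i = ti * T + 1; i <= (ti + 1) * T; i++)
                for (int j = tj * T + 1; j <= (tj + 1) * T; j++) {
                    int s = (a[i - 1] == b[j - 1]) ? MATCH : MISMATCH;
                    H[i][j] = max4(0, H[i - 1][j - 1] + s,
                                   H[i - 1][j] - GAP, H[i][j - 1] - GAP);
                }
        }

        int main(void) {
            for (int i = 0; i < N; i++) {
                a[i] = "ACGT"[rand() & 3];
                b[i] = "ACGT"[rand() & 3];
            }
            int nt = N / T;
            for (int d = 0; d < 2 * nt - 1; d++) {
                /* all tiles on anti-diagonal d are independent */
                #pragma omp parallel for schedule(dynamic)
                for (int ti = 0; ti < nt; ti++) {
                    int tj = d - ti;
                    if (tj >= 0 && tj < nt) score_tile(ti, tj);
                }
                /* implicit barrier here; its cost grows with the number
                 * of threads, as the abstract observes */
            }
            printf("checksum H[N][N] = %d\n", H[N][N]);
            return 0;
        }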

  10. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  11. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  12. An accurate and efficient experimental approach for characterization of the complex oral microbiota.

    PubMed

    Zheng, Wei; Tsompana, Maria; Ruscitto, Angela; Sharma, Ashu; Genco, Robert; Sun, Yijun; Buck, Michael J

    2015-10-05

    Currently, taxonomic interrogation of microbiota is based on amplification of 16S rRNA gene sequences in clinical and scientific settings. Accurate evaluation of the microbiota depends heavily on the primers used, and genus/species resolution bias can arise with amplification of non-representative genomic regions. The latest Illumina MiSeq sequencing chemistry has extended the read length to 300 bp, enabling deep profiling of a large number of samples in a single paired-end reaction at a fraction of the cost. An increasingly large number of researchers have adopted this technology for various microbiome studies targeting the 16S rRNA V3-V4 hypervariable region. To expand the applicability of this powerful platform for further descriptive and functional microbiome studies, we standardized and tested an efficient, reliable, and straightforward workflow for the amplification, library construction, and sequencing of the 16S V1-V3 hypervariable region using the new 2 × 300 MiSeq platform. Our analysis involved 11 subgingival plaque samples from diabetic and non-diabetic human subjects suffering from periodontitis. The efficiency and reliability of our experimental protocol were compared to 16S V3-V4 sequencing data from the same samples. Comparisons were based on measures of observed taxonomic richness and species evenness, along with Procrustes analyses using beta(β)-diversity distance metrics. As an experimental control, we also analyzed a total of eight technical replicates for the V1-V3 and V3-V4 regions from a synthetic community with known bacterial species operon counts. We show that our experimental protocol accurately measures true bacterial community composition. Procrustes analyses based on unweighted UniFrac β-diversity metrics depicted significant correlation between oral bacterial composition for the V1-V3 and V3-V4 regions. However, measures of phylotype richness were higher for the V1-V3 region, suggesting that V1-V3 offers a deeper assessment of

  13. Nano-Scale Characterization of Al-Mg Nanocrystalline Alloys

    NASA Astrophysics Data System (ADS)

    Harvey, Evan; Ladani, Leila

    Materials with nano-scale microstructure have become increasingly popular due to their substantially increased strength. The increase in strength resulting from decreasing grain size is described by the Hall-Petch equation. With increased interest in the miniaturization of components, methods for mechanical characterization of small volumes of material are necessary, because traditional means such as tensile testing become increasingly difficult with such small test specimens. This study seeks to characterize the elastic-plastic properties of nanocrystalline Al-5083 through nanoindentation and related data analysis techniques. Using nanoindentation, accurate predictions of the elastic modulus and hardness of the alloy were attained. The employed data analysis model also provided reasonable estimates of the plastic properties (strain-hardening exponent and yield stress), lending credibility to this procedure as an accurate, full mechanical characterization method.
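
    For reference, the Hall-Petch relation invoked above takes the standard textbook form (not specific to this study), where \sigma_y is the yield strength, \sigma_0 the lattice friction stress, k_y the strengthening coefficient, and d the average grain diameter:

        \[ \sigma_y \;=\; \sigma_0 + k_y\, d^{-1/2} \]

    so halving the grain size raises the grain-boundary strengthening term by a factor of \sqrt{2}, which is what makes nanocrystalline grain sizes attractive.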

  14. Characterization of Cloud Water-Content Distribution

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
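
    In generic form (a standard definition, not code from the tool), maximum likelihood estimation selects the parameters \theta of a chosen parametric density f that maximize the log-likelihood of the observed water-content samples x_1, ..., x_n:

        \[ \hat{\theta} \;=\; \arg\max_{\theta} \sum_{i=1}^{n} \ln f(x_i \mid \theta) \]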

  15. Characterization of Thin Film Materials using SCAN meta-GGA, an Accurate Nonempirical Density Functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buda, I. G.; Lane, C.; Barbiellini, B.

    We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of 'beyond graphene' compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.

  16. Characterization of Thin Film Materials using SCAN meta-GGA, an Accurate Nonempirical Density Functional

    DOE PAGES

    Buda, I. G.; Lane, C.; Barbiellini, B.; ...

    2017-03-23

    We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of 'beyond graphene' compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.

  17. Characterization of Alaskan HMA mixtures with the simple performance tester.

    DOT National Transportation Integrated Search

    2014-05-01

    Material characterization provides basic and essential information for pavement design and the evaluation of hot mix asphalt (HMA). : This study focused on the accurate characterization of an Alaskan HMA mixture using an asphalt mixture performance t...

  18. Monte Carlo based investigation of berry phase for depth resolved characterization of biomedical scattering samples

    NASA Astrophysics Data System (ADS)

    Baba, J. S.; Koju, V.; John, D.

    2015-03-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of a sufficient number (>10⁷) of photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter, but fails for the anisotropic scatter typical of many biomedical samples, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with photon penetration depth, suggesting the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
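
    The photon loop in such a Monte Carlo is embarrassingly parallel, which is why OpenMP maps onto it directly. Below is a heavily simplified, hedged sketch of the parallelization pattern (isotropic toy scattering in a half-space with a fixed albedo; not the authors' polarization- and trajectory-tracking code):

        /* Sketch: OpenMP-parallel photon random walk. Photons are
         * independent, so threads share only the reduction tallies;
         * per-photon RNG state keeps the loop thread-safe. */
        #include <omp.h>
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            const long n_photons = 1000000L;
            const double mu_s = 10.0;        /* scattering coeff. (1/cm) */
            const double albedo = 0.9;
            double depth_sum = 0.0;
            long terminated = 0;

            #pragma omp parallel for reduction(+:depth_sum,terminated)
            for (long p = 0; p < n_photons; p++) {
                unsigned seed = (unsigned)(p + 12345);  /* per-photon RNG */
                double z = 0.0, uz = 1.0, w = 1.0;
                while (w > 1e-4) {
                    double u = (rand_r(&seed) + 0.5) / ((double)RAND_MAX + 1.0);
                    z += uz * (-log(u) / mu_s);          /* free path */
                    if (z < 0.0) break;                  /* escaped surface */
                    w *= albedo;                         /* partial absorption */
                    uz = 2.0 * rand_r(&seed) / (double)RAND_MAX - 1.0;
                }
                if (z >= 0.0) { terminated++; depth_sum += z; }
            }
            printf("terminated fraction %.3f, mean depth %.3f cm\n",
                   (double)terminated / n_photons, depth_sum / terminated);
            return 0;
        }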

  19. Wringing the last drop of optically stimulated luminescence response for accurate dating of glacial sediments

    NASA Astrophysics Data System (ADS)

    Medialdea, Alicia; Bateman, Mark D.; Evans, David J.; Roberts, David H.; Chiverrell, Richard C.; Clark, Chris D.

    2017-04-01

    BRITICE-CHRONO is a NERC-funded consortium project of more than 40 researchers aiming to establish the retreat patterns of the last British and Irish Ice Sheet. For this purpose, optically stimulated luminescence (OSL) dating, among other dating techniques, has been used to establish an accurate chronology. More than 150 samples from glacial environments have been dated and provide key information for modelling the ice retreat. Nevertheless, luminescence dating of glacial sediments has proven challenging: first, glacial sediments are often affected by incomplete bleaching; secondly, quartz grains within the sampled sediments are often characterized by complex luminescence behaviour, with dim signals and low reproducibility. Specific statistical approaches have been used to overcome the former, enabling the estimated ages to be based on the grain populations most likely to have been well bleached. This latest work presents how issues surrounding complex luminescence behaviour were overcome in order to obtain accurate OSL ages. The study was performed on two samples of bedded sand originating from an ice-walled lake plain in Lincolnshire, UK. Quartz extracts from each sample were artificially bleached and irradiated to known doses. Dose recovery tests have been carried out under different conditions to study the effects of preheat temperature, thermal quenching, the contribution of slow components, a hot bleach after measurement cycles, and IR stimulation. Measurements have been performed on different luminescence readers to study the possible contribution of instrument reproducibility. These have shown that great variability can be observed not only among the studied samples but also within a specific site and even a specific sample. In order to determine an accurate chronology and realistic uncertainties for the estimated ages, this variability must be taken into account. Tight acceptance criteria to measured doses from natural, not

  20. Implementing the PM Programming Language using MPI and OpenMP - a New Tool for Programming Geophysical Models on Parallel Systems

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2015-04-01

    PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks < number of processors
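
    The two-tier mapping described above can be illustrated with a small hybrid MPI+OpenMP sketch (illustrative only, not PM itself): one MPI rank stands in for each abstract PM processor, OpenMP threads carry the tasks inside it, and, matching the paper's decision not to assume a thread-safe MPI, all MPI calls are funneled through the master thread outside parallel regions (MPI_THREAD_FUNNELED).

        /* Sketch: PM processor = MPI rank, PM tasks = OpenMP threads.
         * No MPI call appears inside a parallel region. */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank, size;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* tasks within one PM processor: shared-memory parallelism */
            double local = 0.0;
            #pragma omp parallel for reduction(+:local)
            for (long i = rank; i < 1000000L; i += size)
                local += 1.0 / (1.0 + (double)i);

            /* communication between PM processors: master thread only */
            double global = 0.0;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);
            if (rank == 0) printf("sum = %.6f\n", global);
            MPI_Finalize();
            return 0;
        }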

  1. High Spatial Resolution Commercial Satellite Imaging Product Characterization

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Pagnutti, Mary; Blonski, Slawomir; Ross, Kenton W.; Stanley, Thomas

    2005-01-01

    NASA Stennis Space Center's Remote Sensing group has been characterizing privately owned high spatial resolution multispectral imaging systems, such as IKONOS, QuickBird, and OrbView-3. Natural and man-made targets were used for spatial resolution, radiometric, and geopositional characterizations. Higher spatial resolution also introduces significant adjacency effects that must be accounted for to achieve accurate, reliable radiometry.

  2. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…

  3. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
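
    A minimal sketch of the boundedly rational choice rule described above (the travel-time values, the threshold, and the helper name choose_route are ours, for illustration): if the advertised difference between the two routes is within BR, the traveler is indifferent and picks at random; otherwise the better-looking route is taken.

        /* Sketch: boundedly rational two-route choice. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int choose_route(double t1, double t2, double BR,
                                unsigned *seed) {
            if (fabs(t1 - t2) < BR)
                return rand_r(seed) % 2;      /* indifferent: coin flip */
            return (t1 < t2) ? 0 : 1;         /* take the faster route */
        }

        int main(void) {
            unsigned seed = 42;
            /* delayed feedback: these times describe past, not current,
             * conditions, which is what makes accurate-looking data harmful */
            double t1 = 10.2, t2 = 10.5, BR = 0.5;
            int counts[2] = {0, 0};
            for (int i = 0; i < 1000; i++)
                counts[choose_route(t1, t2, BR, &seed)]++;
            printf("route 0: %d, route 1: %d\n", counts[0], counts[1]);
            return 0;
        }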

  4. How Accurate Are Transition States from Simulations of Enzymatic Reactions?

    PubMed Central

    2015-01-01

    The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275

  5. An Accurate Absorption-Based Net Primary Production Model for the Global Ocean

    NASA Astrophysics Data System (ADS)

    Silsbe, G.; Westberry, T. K.; Behrenfeld, M. J.; Halsey, K.; Milligan, A.

    2016-02-01

    As a vital living link in the global carbon cycle, understanding how net primary production (NPP) varies through space, time, and across climatic oscillations (e.g., ENSO) is a key objective in oceanographic research. The continual improvement of ocean-observing satellites and data analytics now presents greater opportunities for advanced understanding and characterization of the factors regulating NPP. In particular, the emergence of spectral inversion algorithms now permits accurate retrievals of the phytoplankton absorption coefficient (aΦ) from space. As NPP is the efficiency with which absorbed energy is converted into carbon biomass, aΦ measurements circumvent chlorophyll-based empirical approaches by permitting direct and accurate measurements of phytoplankton energy absorption. It has long been recognized, and perhaps underappreciated, that NPP and phytoplankton growth rates display muted variability when normalized to aΦ rather than chlorophyll. Here we present a novel absorption-based NPP model that parameterizes the underlying physiological mechanisms behind this muted variability, and we apply this physiological model to the global ocean. Through a comparison against field data from the Hawaii and Bermuda Ocean Time Series, we demonstrate how this approach yields more accurate NPP measurements than other published NPP models. By normalizing NPP to satellite estimates of phytoplankton carbon biomass, this presentation also explores the seasonality of phytoplankton growth rates across several oceanic regions. Finally, we discuss how future advances in remote sensing (e.g., hyperspectral satellites, LIDAR, autonomous profilers) can be exploited to further improve absorption-based NPP models.

  6. Accurate quantitation standards of glutathione via traceable sulfur measurement by inductively coupled plasma optical emission spectrometry and ion chromatography

    PubMed Central

    Rastogi, L.; Dash, K.; Arunachalam, J.

    2013-01-01

    The quantitative analysis of glutathione (GSH) is important in different fields such as medicine, biology, and biotechnology. Accurate quantitative measurements of this analyte have been hampered by the lack of well-characterized reference standards. The proposed procedure is intended to provide an accurate and definitive method for the quantitation of GSH for reference measurements. Measurement of the stoichiometrically existing sulfur content in purified GSH offers an approach for its quantitation, and calibration through an appropriately characterized reference material (CRM) for sulfur would provide a methodology for the certification of GSH quantity that is traceable to the SI (International System of Units). The inductively coupled plasma optical emission spectrometry (ICP-OES) approach negates the need for any sample digestion. The sulfur content of the purified GSH is quantitatively converted into sulfate ions by microwave-assisted UV digestion in the presence of hydrogen peroxide prior to ion chromatography (IC) measurements. The measurement of sulfur by ICP-OES and IC (as sulfate) using the "high performance" methodology could be useful for characterizing primary calibration standards and certified reference materials with low uncertainties. The relative expanded uncertainties (% U), expressed at the 95% confidence interval, for ICP-OES analyses varied from 0.1% to 0.3%, while in the case of IC they were between 0.2% and 1.2%. The described methods are more suitable for characterizing primary calibration standards and certifying reference materials of GSH than for routine measurements. PMID:29403814
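
    The stoichiometric step underlying this traceability chain can be made explicit with a standard textbook calculation (not taken from the paper): glutathione, C10H17N3O6S, contains exactly one sulfur atom per molecule, so a traceable sulfur mass m_S fixes the GSH mass via the molar-mass ratio:

        \[ m_{\mathrm{GSH}} \;=\; m_{\mathrm{S}} \cdot \frac{M_{\mathrm{GSH}}}{M_{\mathrm{S}}} \;\approx\; m_{\mathrm{S}} \cdot \frac{307.32}{32.06} \;\approx\; 9.59\, m_{\mathrm{S}} \]

    Equivalently, purified GSH is about 10.4% sulfur by mass, so the relative uncertainty of the sulfur measurement propagates directly to the certified GSH quantity.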

  7. Characterization of UMT2013 Performance on Advanced Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, Louis

    2014-12-31

    This paper presents part of a larger effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. The focus here is on UMT2013, a proxy implementation of deterministic transport for unstructured meshes. I present weak and strong MPI scaling results and studies of OpenMP efficiency on the Sequoia BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Preliminary tests that exploit NVRAM as extended memory on an Ivy Bridge machine designed for "Big Data" applications are also included.

  8. Gravity Field Characterization around Small Bodies

    NASA Astrophysics Data System (ADS)

    Takahashi, Yu

    A small-body rendezvous mission requires accurate gravity field characterization for safe, accurate navigation. However, the current techniques of gravity field modeling around small bodies have not reached a satisfactory level. This thesis addresses how the process of gravity field characterization can be made more robust for future small-body missions. First, we perform covariance analysis around small bodies via multiple slow flybys. Flyby characterization requires less laborious scheduling than its orbit counterpart, while simultaneously reducing the risk of impact with the asteroid's surface. It will be shown that the level of initial characterization achievable with this approach is no less than that of the orbit approach. Next, we apply the same technique of gravity field characterization to estimate the spin state of 4179 Toutatis, a near-Earth asteroid in close to 4:1 resonance with the Earth. The data accumulated from 1992 to 2008 are processed in a least-squares filter to predict Toutatis' orientation during the 2012 apparition. The center-of-mass offset and the moments of inertia estimated thereby can be used to constrain the internal density distribution within the body. The spin state estimation is then developed into a generalized method to estimate the internal density distribution within a small body. The density distribution is estimated from the orbit determination solution of the gravitational coefficients. It will be shown that the surface gravity field reconstructed from the estimated density distribution yields higher accuracy than conventional gravity field models. Finally, we investigate two types of relatively unknown gravity fields, namely the interior gravity field and the interior spherical Bessel gravity field, in order to investigate how accurately the surface gravity field can be mapped out for proximity operations purposes. It will be shown that these formulations compute the surface gravity field with

9. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.

    2004-12-01

We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.

  10. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  11. MICCA: a complete and accurate software for taxonomic profiling of metagenomic data.

    PubMed

    Albanese, Davide; Fontana, Paolo; De Filippo, Carlotta; Cavalieri, Duccio; Donati, Claudio

    2015-05-19

The introduction of high-throughput sequencing technologies has triggered an increase in the number of studies in which the microbiota of environmental and human samples is characterized through the sequencing of selected marker genes. While experimental protocols have undergone a process of standardization that makes them accessible to a large community of scientists, standard and robust data analysis pipelines are still lacking. Here we introduce MICCA, a software pipeline for the processing of amplicon metagenomic datasets that efficiently combines quality filtering, clustering of Operational Taxonomic Units (OTUs), taxonomy assignment and phylogenetic tree inference. MICCA provides accurate results, reaching a good compromise between modularity and usability. Moreover, we introduce a de novo clustering algorithm specifically designed for the inference of OTUs. Tests on real and synthetic datasets show that, thanks to the optimized read filtering process and the new clustering algorithm, MICCA provides estimates of the number of OTUs and of other common ecological indices that are more accurate and robust than currently available pipelines. Analysis of public metagenomic datasets shows that the higher consistency of results improves our understanding of the structure of environmental and human-associated microbial communities. MICCA is an open source project.

  12. Accurate Characterization of Rain Drop Size Distribution Using Meteorological Particle Spectrometer and 2D Video Disdrometer for Propagation and Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Thurai, Merhala; Bringi, Viswanathan; Kennedy, Patrick; Notaros, Branislav; Gatlin, Patrick

    2017-01-01

    Accurate measurements of rain drop size distributions (DSD), with particular emphasis on small and tiny drops, are presented. Measurements were conducted in two very different climate regions, namely Northern Colorado and Northern Alabama. Both datasets reveal a combination of (i) a drizzle mode for drop diameters less than 0.7 mm and (ii) a precipitation mode for larger diameters. Scattering calculations using the DSDs are performed at S and X bands and compared with radar observations for the first location. Our accurate DSDs will improve radar-based rain rate estimates as well as propagation predictions.

  13. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages.

  14. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; John, Dwayne O; Koju, Vijay

The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing the anisotropic scattering characteristics of samples at depths where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
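    As a rough illustration of the OpenMP parallelization pattern described above (and only that pattern: the polarization and Berry-phase physics is reduced to a toy isotropic walk, and every name below is illustrative rather than taken from the authors' code), a photon-level parallel loop with per-thread random streams and a final reduction might look like this:

```c
/* Illustrative sketch only: OpenMP-parallel photon random walk in the spirit of
   the abstract's Monte Carlo code. The physics is a toy isotropic walk; the
   real code additionally tracks polarization state and Berry phase. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>   /* rand_r() is POSIX */
#include <omp.h>

int main(void)
{
    const long n_photons = 10000000;   /* >10 million, as in the abstract */
    const double mu_t = 10.0;          /* toy extinction coefficient, 1/cm */
    const double absorb_prob = 0.1;    /* toy absorption probability */
    double mean_depth = 0.0;

    #pragma omp parallel reduction(+:mean_depth)
    {
        /* independent random stream per thread */
        unsigned int seed = 1234u + 7919u * (unsigned)omp_get_thread_num();

        #pragma omp for schedule(static)
        for (long i = 0; i < n_photons; ++i) {
            double z = 0.0, uz = 1.0;  /* depth and direction cosine */
            for (;;) {
                double step = -log((rand_r(&seed) + 1.0) / (RAND_MAX + 2.0)) / mu_t;
                z += uz * step;
                if (z < 0.0) break;    /* escaped back through the surface */
                if ((double)rand_r(&seed) / RAND_MAX < absorb_prob) break;
                uz = 2.0 * rand_r(&seed) / RAND_MAX - 1.0;  /* isotropic scatter */
            }
            mean_depth += (z > 0.0 ? z : 0.0);
        }
    }
    printf("mean terminal depth: %g cm\n", mean_depth / n_photons);
    return 0;
}
```

    Because photons are mutually independent, the loop parallelizes with no shared mutable state beyond the final reduction, which is what makes this class of simulation attractive for OpenMP.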

  15. Characterization of a signal recording system for accurate velocity estimation using a VISAR

    NASA Astrophysics Data System (ADS)

    Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.

    2018-02-01

The linearity of a signal recording system (SRS) in time as well as in amplitude is important for the accurate estimation of the free surface velocity history of a moving target during shock loading and unloading, when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the estimation of velocity history. In shock experiments, the small duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures for (i) minimizing the effect of signal-induced noise and (ii) determining the linear region of operation for the SRS. Here we report on a procedure for the optimization of SRS parameters such as photodetector gain, optical power, aperture, etc., so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been utilized successfully to estimate the temporal history of the free surface velocity of the moving target in shock experiments.

  16. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    PubMed

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.

  17. Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning

    DOE PAGES

    Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.; ...

    2017-06-05

Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy, we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high repetition rate, by training a model on a small set of fully diagnosed pulses. This opens the door to fully realizing the promise of next-generation high-repetition-rate X-ray lasers.

  18. Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.

Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy, we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high repetition rate, by training a model on a small set of fully diagnosed pulses. This opens the door to fully realizing the promise of next-generation high-repetition-rate X-ray lasers.

  19. Stability characterization of two multi-channel GPS receivers for accurate frequency transfer.

    NASA Astrophysics Data System (ADS)

    Taris, F.; Uhrich, P.; Thomas, C.; Petit, G.; Jiang, Z.

In recent years, widespread use of the GPS common-view technique has led to major improvements, making it possible to compare remote clocks at their full level of performance. For integration times of 1 to 3 days, their frequency differences are consistently measured to about one part in 10^14. Recent developments in atomic frequency standards suggest, however, that this performance may no longer be sufficient. The caesium fountain LPTF FO1, built at the BNM-LPTF, Paris, France, shows a short-term white frequency noise characterized by an Allan deviation σy(τ = 1 s) = 5×10^-14 and a type B uncertainty of 2×10^-15. To compare the frequencies of such highly stable standards would call for GPS common-view results to be averaged over times far exceeding the intervals of their optimal performance. Previous studies have shown the potential of carrier-phase and code measurements from geodetic GPS receivers for clock frequency comparisons. The experiment reported here is an attempt to determine the stability limit that could be reached using this technique.

  20. Medium Spatial Resolution Satellite Characterization

    NASA Technical Reports Server (NTRS)

    Stensaas, Greg

    2007-01-01

This project provides characterization and calibration of aerial and satellite systems in support of quality acquisition and understanding of remote sensing data, and verifies and validates the associated data products with respect to ground and atmospheric truth so that accurate value-added science can be performed. The project also provides assessment of new remote sensing technologies.

  1. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    NASA Astrophysics Data System (ADS)

    Hristov, Ivan; Goranov, Goran; Hristova, Radoslava

    2018-02-01

We consider a standing-wave simulation that is interesting from a computational point of view: solving coupled 2D perturbed sine-Gordon equations. We develop an OpenMP implementation that exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the Hybrilit cluster, and a Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
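    A minimal sketch of the thread-plus-SIMD pattern the abstract describes, applied here to a generic leapfrog stencil with a sine-Gordon source term (grid sizes, time stepping, and boundary handling are simplifying assumptions, not the authors' scheme):

```c
/* Illustrative sketch: OpenMP threads across rows, SIMD lanes across columns,
   on a toy leapfrog update of a sine-Gordon-type equation. Compile with
   -fopenmp -lm; this is not the paper's actual numerical scheme. */
#include <math.h>
#include <stdio.h>

#define NX 256
#define NY 256

static double u_old[NX][NY], u[NX][NY], u_new[NX][NY];

static void step(double dt, double h)
{
    const double c = (dt * dt) / (h * h);
    #pragma omp parallel for          /* thread-level parallelism across rows */
    for (int i = 1; i < NX - 1; ++i) {
        #pragma omp simd              /* SIMD-level parallelism across columns */
        for (int j = 1; j < NY - 1; ++j) {
            double lap = u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1]
                       - 4.0 * u[i][j];
            u_new[i][j] = 2.0 * u[i][j] - u_old[i][j]
                        + c * lap - dt * dt * sin(u[i][j]);  /* sine-Gordon term */
        }
    }
}

int main(void)
{
    for (int i = 0; i < NX; ++i)
        for (int j = 0; j < NY; ++j)   /* Gaussian initial bump */
            u[i][j] = u_old[i][j] = exp(-0.01 * ((i - NX/2)*(i - NX/2)
                                               + (j - NY/2)*(j - NY/2)));
    for (int t = 0; t < 100; ++t) {
        step(1e-3, 1e-2);
        for (int i = 0; i < NX; ++i)   /* rotate time levels */
            for (int j = 0; j < NY; ++j) {
                u_old[i][j] = u[i][j];
                u[i][j] = u_new[i][j];
            }
    }
    printf("u at center: %f\n", u[NX/2][NY/2]);
    return 0;
}
```

    The inner `omp simd` loop is what lets wide-vector hardware such as KNL's AVX-512 units pay off, while the outer `parallel for` feeds its many cores.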

  2. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinates and replaces them with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate-location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not need all the users to report their locations all the time, and can generate smaller ASRs. PMID:24605060
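    To make the grid-ID idea concrete, here is a hedged sketch (the expansion policy and all names are assumptions, not the paper's algorithms): the anonymizer sees only per-cell report counts, never coordinates, and grows a square block of cells around the querying user's cell until it covers at least K users.

```c
/* Illustrative sketch of grid-based K-anonymity cloaking: users report only
   grid-cell IDs, and the ASR is the smallest centered square block of cells
   covering at least k reports. Not the paper's actual algorithms. */
#include <stdio.h>

#define G 64   /* the anonymizer's grid is G x G cells */

/* Returns the half-width r of the smallest centered square with >= k reports;
   the ASR is then the (2r+1) x (2r+1) block of cells. */
int cloak(const int counts[G][G], int cx, int cy, int k)
{
    for (int r = 0; r < G; ++r) {
        int total = 0;
        for (int x = cx - r; x <= cx + r; ++x)
            for (int y = cy - r; y <= cy + r; ++y)
                if (x >= 0 && x < G && y >= 0 && y < G)
                    total += counts[x][y];
        if (total >= k)
            return r;
    }
    return -1;   /* fewer than k users on the whole grid */
}

int main(void)
{
    static int counts[G][G];
    counts[10][10] = 1;   /* the querying user's own cell */
    counts[12][9]  = 2;   /* other users' reported cells */
    counts[14][14] = 3;
    printf("half-width for 5-anonymity: %d\n", cloak(counts, 10, 10, 5));
    return 0;
}
```

    Because only cell IDs cross the trust boundary, no party ever learns an exact coordinate, which is the nonexposure property the abstract emphasizes.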

  3. MICCA: a complete and accurate software for taxonomic profiling of metagenomic data

    PubMed Central

    Albanese, Davide; Fontana, Paolo; De Filippo, Carlotta; Cavalieri, Duccio; Donati, Claudio

    2015-01-01

The introduction of high-throughput sequencing technologies has triggered an increase in the number of studies in which the microbiota of environmental and human samples is characterized through the sequencing of selected marker genes. While experimental protocols have undergone a process of standardization that makes them accessible to a large community of scientists, standard and robust data analysis pipelines are still lacking. Here we introduce MICCA, a software pipeline for the processing of amplicon metagenomic datasets that efficiently combines quality filtering, clustering of Operational Taxonomic Units (OTUs), taxonomy assignment and phylogenetic tree inference. MICCA provides accurate results, reaching a good compromise between modularity and usability. Moreover, we introduce a de novo clustering algorithm specifically designed for the inference of OTUs. Tests on real and synthetic datasets show that, thanks to the optimized read filtering process and the new clustering algorithm, MICCA provides estimates of the number of OTUs and of other common ecological indices that are more accurate and robust than currently available pipelines. Analysis of public metagenomic datasets shows that the higher consistency of results improves our understanding of the structure of environmental and human-associated microbial communities. MICCA is an open source project. PMID:25988396

  4. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

... In the Matter of Accurate NDE & Inspection, LLC, Broussard, Louisiana (Docket: 150-00017, General License)... an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28, 2011, the NRC and Accurate NDE...

  5. Multilevel Parallelization of AutoDock 4.2.

    PubMed

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
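    A skeletal illustration of the two-level scheme described above, with MPI distributing docking jobs across ranks and OpenMP multithreading the work inside each job; `run_one_docking` is a placeholder stand-in, not AutoDock's API:

```c
/* Illustrative hybrid MPI+OpenMP skeleton in the spirit of mpAD4. Compile with
   mpicc -fopenmp; the inner loop is a stand-in for a docking computation. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

static double run_one_docking(int job)
{
    double score = 0.0;
    /* node-level parallelism: multithread the work inside one docking job */
    #pragma omp parallel for reduction(+:score)
    for (int i = 0; i < 1000000; ++i)
        score += (double)((job + i) % 7) * 1e-6;
    return score;
}

int main(int argc, char **argv)
{
    int rank, size, n_jobs = 128;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* system-level parallelism: each rank takes a strided subset of jobs */
    for (int job = rank; job < n_jobs; job += size)
        printf("rank %d job %d score %f\n", rank, job, run_one_docking(job));

    MPI_Finalize();
    return 0;
}
```

    The user-tunable trade-off the abstract mentions corresponds here to choosing how many MPI ranks versus OpenMP threads per rank to launch for a given machine and screen size.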

  6. OSM-Classic : An optical imaging technique for accurately determining strain

    NASA Astrophysics Data System (ADS)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Strain measurement for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation in a test sample and hence strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly while providing active feedback during the processing.

  7. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing makes parallel computing come into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce, respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.

  8. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  9. SAXS Combined with UV-vis Spectroscopy and QELS: Accurate Characterization of Silver Sols Synthesized in Polymer Matrices.

    PubMed

    Bulavin, Leonid; Kutsevol, Nataliya; Chumachenko, Vasyl; Soloviov, Dmytro; Kuklin, Alexander; Marynin, Andrii

    2016-12-01

The present work demonstrates a validation of small-angle X-ray scattering (SAXS) combined with ultraviolet-visible (UV-vis) spectroscopy and quasi-elastic light scattering (QELS) analysis for the characterization of silver sols synthesized in polymer matrices. The internal structure and chemical nature of the polymer matrix controlled the size characteristics of the sols. It was shown that for precise analysis of the nanoparticle size distribution these techniques should be used simultaneously. All applied methods were in good agreement for the characterization of the size distribution of small particles (less than 60 nm) in the sols. Some deviations of the theoretical curves from the experimental ones were observed. The most probable cause is that the nanoparticles were not entirely spherical in form.

  10. Juneau Airport Doppler Lidar Deployment: Extraction of Accurate Turbulent Wind Statistics

    NASA Technical Reports Server (NTRS)

    Hannon, Stephen M.; Frehlich, Rod; Cornman, Larry; Goodrich, Robert; Norris, Douglas; Williams, John

    1999-01-01

A 2-micrometer pulsed Doppler lidar was deployed to the Juneau Airport in 1998 to measure turbulence and wind shear in and around the departure and arrival corridors. The primary objective of the measurement program was to demonstrate and evaluate the capability of a pulsed coherent lidar to remotely and unambiguously measure wind turbulence. Lidar measurements were coordinated with flights of an instrumented research aircraft operated by representatives of the University of North Dakota (UND) under the direction of the National Center for Atmospheric Research (NCAR). The data collected are expected to aid both turbulence characterization and airborne turbulence detection algorithm development activities within NASA and the FAA. This paper presents a summary of the deployment and results of analysis and simulation which address important issues regarding the measurement requirements for accurate extraction of turbulent wind statistics.

  11. Steady-state low thermal resistance characterization apparatus: The bulk thermal tester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burg, Brian R.; Kolly, Manuel; Blasakis, Nicolas

The reliability of microelectronic devices is largely dependent on electronic packaging, which includes heat removal. Appropriate packaging design therefore necessitates precise knowledge of the relevant material properties, including thermal resistance and thermal conductivity. Thin materials and high-conductivity layers make their thermal characterization challenging. A steady-state measurement technique is presented and evaluated with the purpose of characterizing samples with a thermal resistance below 100 mm^2 K/W. It is based on the heat flow meter bar approach, made up of two copper blocks, and relies exclusively on temperature measurements from thermocouples. The importance of thermocouple calibration is emphasized in order to obtain accurate temperature readings. An in-depth error analysis, based on Gaussian error propagation, is carried out. An error sensitivity analysis highlights the importance of precise knowledge of the thermal interface materials required for the measurements. Reference measurements on Mo samples reveal a measurement uncertainty in the range of 5%, and the most accurate measurements are obtained at high heat fluxes. Measurement techniques for homogeneous bulk samples, layered materials, and protruding cavity samples are discussed. Ultimately, a comprehensive overview of a steady-state thermal characterization technique is provided, evaluating the accuracy of sample measurements with thermal resistances well below state-of-the-art setups. Accurate characterization of materials used in heat removal applications, such as electronic packaging, will enable more efficient designs and ultimately contribute to energy savings.

  12. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores.
Program summary:
Program title: NDL (Numerical Differentiation Library)
Catalogue identifier: AEDG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 73 030
No. of bytes in distributed program, including test data, etc.: 630 876
Distribution format: tar.gz
Programming language: ANSI FORTRAN-77, ANSI C, MPI, OpenMP
Computer: Distributed systems (clusters), shared memory systems
Operating system: Linux, Solaris
Has the code been vectorised or parallelized?: Yes
RAM: The library uses O(N) internal storage, N being the dimension of the problem
Classification: 4.9, 4.14, 6.5
Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such
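    As an illustration of why the speedup scales almost linearly: each component of a central-difference gradient needs only two function evaluations that are independent of every other component, so the component loop parallelizes trivially. A minimal sketch (with illustrative names, not NDL's actual interface):

```c
/* Illustrative sketch of the library's core idea: an O(h^2) central-difference
   gradient whose components are independent, parallelized with one pragma.
   Compile with -fopenmp; names here are not NDL's API. */
#include <stdio.h>
#include <string.h>

typedef double (*objective_fn)(const double *x, int n);

void central_gradient(objective_fn f, const double *x, int n, double h, double *grad)
{
    #pragma omp parallel for   /* each component is an independent evaluation pair */
    for (int i = 0; i < n; ++i) {
        double xp[64], xm[64];  /* sketch assumes n <= 64 */
        memcpy(xp, x, n * sizeof *x);
        memcpy(xm, x, n * sizeof *x);
        xp[i] += h;
        xm[i] -= h;
        grad[i] = (f(xp, n) - f(xm, n)) / (2.0 * h);  /* truncation error O(h^2) */
    }
}

static double sphere(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += x[i] * x[i];
    return s;
}

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0}, g[3];
    central_gradient(sphere, x, 3, 1e-6, g);
    printf("grad = (%g, %g, %g)\n", g[0], g[1], g[2]);  /* expect (2, 4, 6) */
    return 0;
}
```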

13. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.

  14. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  15. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
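    The loop-level OpenMP parallelisation reported above is typically a one-pragma change when iterations are independent. As a hedged sketch (a toy unfiltered backprojection, not DIRA's actual routines), pixels of a reconstructed slice can be shared across threads like this:

```c
/* Illustrative sketch: every output pixel is independent, so the pixel loops
   are shared across threads with a single pragma. Compile with -fopenmp -lm. */
#include <math.h>
#include <stdio.h>

#define N 256          /* reconstructed image is N x N pixels */
#define N_ANGLES 180
static const double PI = 3.14159265358979323846;

static double sino[N_ANGLES][N];   /* projections: angle x detector bin */
static double img[N][N];

static void backproject(void)
{
    #pragma omp parallel for collapse(2) schedule(static)
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x) {
            double sum = 0.0;
            for (int a = 0; a < N_ANGLES; ++a) {
                double th = a * PI / N_ANGLES;
                /* nearest-neighbour detector bin hit by this pixel at angle th */
                int t = (int)((x - N/2) * cos(th) + (y - N/2) * sin(th)) + N/2;
                if (t >= 0 && t < N)
                    sum += sino[a][t];
            }
            img[y][x] = sum * PI / N_ANGLES;
        }
}

int main(void)
{
    for (int a = 0; a < N_ANGLES; ++a)
        sino[a][N/2] = 1.0;            /* sinogram of a point at the center */
    backproject();
    printf("center pixel: %f\n", img[N/2][N/2]);
    return 0;
}
```

    An OpenCL port of the same loop requires explicit kernels, buffers, and host-device transfers, which is consistent with the abstract's observation that the OpenCL route was considerably harder.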

16. Magnetic resonance imaging-transrectal ultrasound image-fusion biopsies accurately characterize the index tumor: correlation with step-sectioned radical prostatectomy specimens in 135 patients.

    PubMed

    Baco, Eduard; Ukimura, Osamu; Rud, Erik; Vlatkovic, Ljiljana; Svindland, Aud; Aron, Manju; Palmer, Suzanne; Matsugasumi, Toru; Marien, Arnaud; Bernhard, Jean-Christophe; Rewcastle, John C; Eggesbø, Heidi B; Gill, Inderbir S

    2015-04-01

    Prostate biopsies targeted by elastic fusion of magnetic resonance (MR) and three-dimensional (3D) transrectal ultrasound (TRUS) images may allow accurate identification of the index tumor (IT), defined as the lesion with the highest Gleason score or the largest volume or extraprostatic extension. To determine the accuracy of MR-TRUS image-fusion biopsy in characterizing ITs, as confirmed by correlation with step-sectioned radical prostatectomy (RP) specimens. Retrospective analysis of 135 consecutive patients who sequentially underwent pre-biopsy MR, MR-TRUS image-fusion biopsy, and robotic RP at two centers between January 2010 and September 2013. Image-guided biopsies of MR-suspected IT lesions were performed with tracking via real-time 3D TRUS. The largest geographically distinct cancer focus (IT lesion) was independently registered on step-sectioned RP specimens. A validated schema comprising 27 regions of interest was used to identify the IT center location on MR images and in RP specimens, as well as the location of the midpoint of the biopsy trajectory, and variables were correlated. The concordance between IT location on biopsy and RP specimens was 95% (128/135). The coefficient for correlation between IT volume on MRI and histology was r=0.663 (p<0.001). The maximum cancer core length on biopsy was weakly correlated with RP tumor volume (r=0.466, p<0.001). The concordance of primary Gleason pattern between targeted biopsy and RP specimens was 90% (115/128; κ=0.76). The study limitations include retrospective evaluation of a selected patient population, which limits the generalizability of the results. Use of MR-TRUS image fusion to guide prostate biopsies reliably identified the location and primary Gleason pattern of the IT lesion in >90% of patients, but showed limited ability to predict cancer volume, as confirmed by step-sectioned RP specimens. Biopsies targeted using magnetic resonance images combined with real-time three-dimensional transrectal

  17. Methods for characterizing convective cryoprobe heat transfer in ultrasound gel phantoms.

    PubMed

    Etheridge, Michael L; Choi, Jeunghwan; Ramadhyani, Satish; Bischof, John C

    2013-02-01

While cryosurgery has proven capable of treating a variety of conditions, it has met with some resistance among physicians, in part due to shortcomings in the ability to predict treatment outcomes. Here we attempt to address several key issues related to predictive modeling by demonstrating methods for accurately characterizing heat transfer from cryoprobes, reporting temperature-dependent thermal properties for ultrasound gel (a convenient tissue phantom) down to cryogenic temperatures, and demonstrating the ability of convective exchange heat transfer boundary conditions to accurately describe freezing in the case of single and multiple interacting cryoprobe(s). Temperature-dependent changes in the specific heat and thermal conductivity for ultrasound gel are reported down to -150 °C for the first time here, and these data were used to accurately describe freezing in ultrasound gel in subsequent modeling. Freezing around single and two interacting cryoprobe(s) was characterized in the ultrasound gel phantom by mapping the temperature in and around the "iceball" with carefully placed thermocouple arrays. These experimental data were fit with finite-element modeling in COMSOL Multiphysics, which was used to investigate the sensitivity and effectiveness of convective boundary conditions in describing heat transfer from the cryoprobes. Heat transfer at the probe tip was described in terms of a convective coefficient and the cryogen temperature. While model accuracy depended strongly on spatial (i.e., along the exchange surface) variation in the convective coefficient, it was much less sensitive to spatial and transient variations in the cryogen temperature parameter. The optimized-fit convective exchange conditions for the single-probe case also provided close agreement with the experimental data for the case of two interacting cryoprobes, suggesting that this basic characterization and modeling approach can be extended to accurately describe more complicated
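    For reference, the convective exchange condition fitted at the probe surface can be written generically as follows (notation here is generic, not the paper's):

```latex
% Convective exchange boundary condition at the cryoprobe surface:
\[
  -k \,\frac{\partial T}{\partial n}\bigg|_{\text{probe surface}}
  \;=\; h(\mathbf{x})\,\bigl(T_{\text{surface}} - T_{\text{cryogen}}\bigr),
\]
% where h(x) is the spatially varying convective coefficient along the exchange
% surface (the parameter the fits were most sensitive to) and T_cryogen the
% cryogen temperature (to which the fits were comparatively insensitive).
```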

  18. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can decrease the performance of downstream multi-channel algorithms, such as surface-related multiple elimination (SRME), even though some of them can overcome irregularity defects. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with high accuracy but are at most half the size, which helps provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method can achieve a better interpolation result, and it can be easily extended to higher dimensions.
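    For orientation, a generic POCS insertion iteration of the kind extended here reads as follows (standard form from the POCS interpolation literature; the paper's variant applies a complex-valued curvelet transform to PFK-domain data):

```latex
% Generic POCS interpolation iteration (standard form, not the paper's exact variant):
\[
  \mathbf{d}_{k} \;=\; \mathbf{d}_{\mathrm{obs}}
  \;+\; (\mathbf{I} - \mathbf{M})\,\mathbf{A}^{-1} T_{\lambda_k}\!\bigl(\mathbf{A}\,\mathbf{d}_{k-1}\bigr),
\]
% where A is the sparsifying (curvelet) transform, T_{\lambda_k} a threshold that
% decreases with iteration k, M the sampling (masking) operator, and d_obs the
% observed irregular data; observed traces are reinserted at every iteration.
```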

  19. GlycoDeNovo - an Efficient Algorithm for Accurate de novo Glycan Topology Reconstruction from Tandem Mass Spectra

    NASA Astrophysics Data System (ADS)

    Hong, Pengyu; Sun, Hui; Sha, Long; Pu, Yi; Khatri, Kshitij; Yu, Xiang; Tang, Yang; Lin, Cheng

    2017-08-01

A major challenge in glycomics is the characterization of complex glycan structures that are essential for understanding their diverse roles in many biological processes. We present a novel efficient computational approach, named GlycoDeNovo, for accurate elucidation of glycan topologies from their tandem mass spectra. Given a spectrum, GlycoDeNovo first builds an interpretation-graph specifying how to interpret each peak using preceding interpreted peaks. It then reconstructs the topologies of peaks that contribute to interpreting the precursor ion. We theoretically prove that GlycoDeNovo is highly efficient. A major innovative feature added to GlycoDeNovo is a data-driven IonClassifier, which can be used to effectively rank candidate topologies. IonClassifier is automatically learned from experimental spectra of known glycans to distinguish B- and C-type ions from all other ion types. Our results show that GlycoDeNovo is robust and accurate for topology reconstruction of glycans from their tandem mass spectra.

  20. Characterization of photo-transformation products of the antibiotic drug Ciprofloxacin with liquid chromatography-tandem mass spectrometry in combination with accurate mass determination using an LTQ-Orbitrap.

    PubMed

    Haddad, Tarek; Kümmerer, Klaus

    2014-11-01

The presence of pharmaceuticals, especially antibiotics, in the aquatic environment is of growing concern. Several studies have been carried out on the occurrence and environmental risk of these compounds. Ciprofloxacin (CIP), a broad-spectrum antimicrobial second-generation fluoroquinolone, is widely used in human and veterinary medicine. In this work, the photo-degradation of CIP in aqueous solution using UV and xenon lamps was studied. The transformation products (TPs) formed from CIP were initially analyzed by an ion trap in the MS, MS/MS and MS(3) modes. These data were used to clarify the structures of the degradation products. Furthermore, the proposed products were confirmed by accurate mass measurement and empirical formula calculation for the molecular ions of the TPs using an LTQ-Orbitrap XL mass spectrometer. The degree of mineralization, the abundance of detected TPs and the degradation pathways were determined. Eleven TPs were detected in the present study. TP1, which had never been detected before, was structurally characterized in this work. All TPs still retained the core quinolone structure, which is responsible for the biological activity. As mineralization of CIP and its transformation products did not occur, the formation of stable TPs can be expected in wastewater treatment and in surface water, with further follow-up problems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Characterization of xenon ion and neutral interactions in a well-characterized experiment

    NASA Astrophysics Data System (ADS)

    Patino, Marlene I.; Wirz, Richard E.

    2018-06-01

Interactions between fast ions and slow neutral atoms are commonly dominated by charge-exchange and momentum-exchange collisions, which are important to understanding and simulating the performance and behavior of many plasma devices. To investigate these interactions, this work developed a simple, well-characterized experiment that accurately measures the behavior of high-energy xenon ions incident on a background of xenon neutral atoms. By using well-defined operating conditions and a simple geometry, these results serve as canonical data for the development and validation of plasma models and models of neutral beam sources that need to ensure accurate treatment of the angular scattering distributions of charge-exchange and momentum-exchange ions and neutrals. The energies used in this study (˜1.5 keV) are relevant for electric propulsion devices and can be used to improve models of ion-neutral interactions in the plume. By comparing these results to both analytical and computational models of ion-neutral interactions, we discovered the importance of (1) accurately treating the differential cross-sections for momentum-exchange and charge-exchange collisions over a large range of neutral background pressures and (2) properly considering commonly overlooked interactions, such as ion-induced electron emission from nearby surfaces and neutral-neutral ionization collisions.

  2. Wavelength-modulated differential photoacoustic radar imager (WM-DPARI): accurate monitoring of absolute hemoglobin oxygen saturation

    PubMed Central

    Choi, Sung Soo Sean; Lashkari, Bahman; Dovlo, Edem; Mandelis, Andreas

    2016-01-01

    Accurate monitoring of blood oxy-saturation level (SO2) in human breast tissues is clinically important for predicting and evaluating possible tumor growth at the site. In this work, four different non-invasive frequency-domain photoacoustic (PA) imaging modalities were compared for their absolute SO2 characterization capability using an in-vitro sheep blood circulation system. Among different PA modes, a new WM-DPAR imaging modality could estimate the SO2 with great accuracy when compared to a commercial blood gas analyzer. The developed WM-DPARI theory was further validated by constructing SO2 tomographic images of a blood-containing plastisol phantom. PMID:27446691

  3. High strain-rate soft material characterization via inertial cavitation

    NASA Astrophysics Data System (ADS)

    Estrada, Jonathan B.; Barajas, Carlos; Henann, David L.; Johnsen, Eric; Franck, Christian

    2018-03-01

Mechanical characterization of soft materials at high strain-rates is challenging due to their high compliance, slow wave speeds, and non-linear viscoelasticity. Yet, knowledge of their material behavior is paramount across a spectrum of biological and engineering applications from minimizing tissue damage in ultrasound and laser surgeries to diagnosing and mitigating impact injuries. To address this significant experimental hurdle and the need to accurately measure the viscoelastic properties of soft materials at high strain-rates (10^3-10^8 s^-1), we present a minimally invasive, local 3D microrheology technique based on inertial microcavitation. By combining high-speed time-lapse imaging with an appropriate theoretical cavitation framework, we demonstrate that this technique has the capability to accurately determine the general viscoelastic material properties of soft matter as compliant as a few kilopascals. Similar to commercial characterization algorithms, we provide the user with significant flexibility in evaluating several constitutive laws to determine the most appropriate physical model for the material under investigation. Given its straightforward implementation into most current microscopy setups, we anticipate that this technique can be easily adopted by anyone interested in characterizing soft material properties at high loading rates including hydrogels, tissues and various polymeric specimens.

  4. Atomic force microscopy characterization of cellulose nanocrystals

    Treesearch

    Roya R. Lahiji; Xin Xu; Ronald Reifenberger; Arvind Raman; Alan Rudie; Robert J. Moon

    2010-01-01

    Cellulose nanocrystals (CNCs) are gaining interest as a “green” nanomaterial with superior mechanical and chemical properties for high-performance nanocomposite materials; however, there is a lack of accurate material property characterization of individual CNCs. Here, a detailed study of the topography, elastic and adhesive properties of individual wood-derived CNCs...

  5. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
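    For orientation, the radial fin-type equation whose solutions are the modified Bessel functions invoked above can be sketched as follows (a generic form under simplifying assumptions, not the paper's full model with Joule heating and radiation terms):

```latex
% Generic radial steady-state balance for the temperature excess in a membrane
% annulus (sketch only; the paper adds Joule heating, radiation linearization,
% and a matrix treatment of the segment boundary conditions):
\[
  \frac{d^2\theta}{dr^2} + \frac{1}{r}\frac{d\theta}{dr} - m^2\theta = 0,
  \qquad
  \theta(r) = A\,I_0(mr) + B\,K_0(mr),
\]
% where theta = T - T_ambient, m^2 lumps the through-thickness loss terms, and
% the constants A, B follow from the boundary conditions at the segment edges.
```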

  6. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.
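    One of the quantities whose correct reporting the article addresses is the relative mass error in parts per million, conventionally defined as:

```latex
% Standard definition of relative mass measurement error in ppm:
\[
  \delta_{\mathrm{ppm}}
  \;=\; \frac{m_{\mathrm{measured}} - m_{\mathrm{calculated}}}{m_{\mathrm{calculated}}}
  \times 10^{6},
\]
% where m_calculated is the exact (theoretical) mass of the proposed
% elemental composition and m_measured the experimentally observed mass.
```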

  7. Importance of geologic characterization of potential low-level radioactive waste disposal sites

    USGS Publications Warehouse

    Weibel, C.P.; Berg, R.C.

    1991-01-01

Using the example of the Geff Alternative Site in Wayne County, Illinois, for the disposal of low-level radioactive waste, this paper demonstrates, from a policy and public opinion perspective, the importance of accurately determining site stratigraphy. Complete and accurate characterization of geologic materials and determination of site stratigraphy at potential low-level waste disposal sites provide the framework for subsequent hydrologic and geochemical investigations. Proper geologic characterization is critical to determining long-term site stability and the extent of groundwater interactions between the site and its surroundings. Failure to adequately characterize site stratigraphy can lead to an incorrect evaluation of the geology of a site, which in turn may result in a lack of public confidence. A potential problem of lack of public confidence was alleviated as a result of the resolution and proper definition of the Geff Alternative Site stratigraphy. The integrity of the investigation was not questioned and public perception was not compromised. © 1991 Springer-Verlag New York Inc.

  8. OCCIMA: Optical Channel Characterization in Maritime Atmospheres

    NASA Astrophysics Data System (ADS)

    Hammel, Steve; Tsintikidis, Dimitri; deGrassie, John; Reinhardt, Colin; McBryde, Kevin; Hallenborg, Eric; Wayne, David; Gibson, Kristofor; Cauble, Galen; Ascencio, Ana; Rudiger, Joshua

    2015-05-01

The Navy is actively developing diverse optical application areas, including high-energy laser weapons and free-space optical communications, which depend on an accurate and timely knowledge of the state of the atmospheric channel. The Optical Channel Characterization in Maritime Atmospheres (OCCIMA) project is a comprehensive program to coalesce and extend the current capability to characterize the maritime atmosphere for all optical and infrared wavelengths. The program goal is the development of a unified and validated analysis toolbox. The foundational design for this program coordinates the development of sensors, measurement protocols, analytical models, and basic physics necessary to fulfill this goal.

  9. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    PubMed Central

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  10. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    PubMed

    Ramkissoon, Kevin R; Miller, Jennifer K; Ojha, Sunil; Watson, Douglas S; Bomar, Martha G; Galande, Amit K; Shearer, Alexander G

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  11. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
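
    As a generic illustration of this class of methods, the following minimal C sketch applies a standard sixth-order explicit central-difference first-derivative stencil on a periodic grid; it illustrates high-order spatial differencing in general, not the specific single-step algorithms of the paper.

        #include <math.h>
        #include <stdio.h>

        #define N 64

        int main(void) {
            /* Sample u(x) = sin(x) on a periodic grid of spacing h. */
            double u[N], du[N], h = 2.0 * M_PI / N, err = 0.0;
            /* Sixth-order central-difference coefficients for u'. */
            const double c1 = 3.0 / 4.0, c2 = -3.0 / 20.0, c3 = 1.0 / 60.0;
            for (int i = 0; i < N; i++)
                u[i] = sin(i * h);
            for (int i = 0; i < N; i++) {
                int p1 = (i + 1) % N, p2 = (i + 2) % N, p3 = (i + 3) % N;
                int m1 = (i + N - 1) % N, m2 = (i + N - 2) % N, m3 = (i + N - 3) % N;
                du[i] = (c1 * (u[p1] - u[m1]) + c2 * (u[p2] - u[m2])
                       + c3 * (u[p3] - u[m3])) / h;
                double e = fabs(du[i] - cos(i * h)); /* exact derivative: cos(x) */
                if (e > err) err = e;
            }
            printf("max error = %e\n", err); /* decays as O(h^6) under grid refinement */
            return 0;
        }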

  12. Method for accurate determination of dissociation constants of optical ratiometric systems: chemical probes, genetically encoded sensors, and interacting molecules.

    PubMed

    Pomorski, Adam; Kochańczyk, Tomasz; Miłoch, Anna; Krężel, Artur

    2013-12-03

    Ratiometric chemical probes and genetically encoded sensors are of high interest for both analytical chemists and molecular biologists. Their high sensitivity toward the target ligand and the ability to obtain quantitative results without a known sensor concentration have made them a very useful tool in both in vitro and in vivo assays. Although ratiometric sensors are widely used in many applications, their successful and accurate usage depends on how they are characterized in terms of sensing target molecules. The most important feature of probes and sensors, besides their optical parameters, is the affinity constant toward the analyzed molecules. The literature shows that different analytical approaches are used to determine the stability constants, with the ratio approach being the most popular. However, oversimplification and lack of attention to detail result in inaccurate determination of stability constants, which in turn affects the results obtained using these sensors. Here, we present a new method in which the ratio signal is calibrated for borderline values of the intensities at both wavelengths, instead of the borderline ratio values that generate errors in many studies. At the same time, the equation takes into account the cooperativity factor or fluorescence artifacts and therefore can be used to characterize systems with various stoichiometries and experimental conditions. Accurate determination of stability constants is demonstrated utilizing four known optical ratiometric probes and sensors, together with a discussion regarding other, currently used methods.
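
    For orientation, the classical single-site ratiometric relation (a Grynkiewicz-style textbook form, in conventional notation rather than the authors' exact formulation) reads

        [\mathrm{L}]_{\mathrm{free}} = K_d \,\frac{S_{f2}}{S_{b2}}\, \frac{R - R_{\min}}{R_{\max} - R},

    where R = I_{λ1}/I_{λ2} is the measured intensity ratio, R_min and R_max are its values for the fully free and fully saturated sensor, and S_{f2}/S_{b2} is the ratio of free to bound intensities at the second wavelength. The method described above replaces calibration against the borderline ratio values R_min and R_max, which the authors identify as a source of error, with calibration against the borderline intensities at each wavelength.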

  13. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting, p. 577: CALIBRATING GPS WITH TWSTFT FOR ACCURATE TIME TRANSFER, Z. Jiang and... The primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short). 83% of UTC time links are...

  14. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  15. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  16. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of

  17. Characterization of photomultiplier tubes with a realistic model through GPU-boosted simulation

    NASA Astrophysics Data System (ADS)

    Anthony, M.; Aprile, E.; Grandi, L.; Lin, Q.; Saldanha, R.

    2018-02-01

    The accurate characterization of a photomultiplier tube (PMT) is crucial in a wide-variety of applications. However, current methods do not give fully accurate representations of the response of a PMT, especially at very low light levels. In this work, we present a new and more realistic model of the response of a PMT, called the cascade model, and use it to characterize two different PMTs at various voltages and light levels. The cascade model is shown to outperform the more common Gaussian model in almost all circumstances and to agree well with a newly introduced model independent approach. The technical and computational challenges of this model are also presented along with the employed solution of developing a robust GPU-based analysis framework for this and other non-analytical models.
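
    For reference, the Gaussian model mentioned above is commonly written as a Poisson-weighted sum of Gaussians over the photoelectron number n (a standard textbook form quoted for context; the cascade model itself is non-analytical and differs):

        P(q) = \sum_{n=0}^{\infty} \frac{e^{-\mu}\,\mu^{n}}{n!}\; \mathcal{N}\!\left(q;\ n Q_1,\ \sqrt{n}\,\sigma_1\right),

    where μ is the mean photoelectron count and Q_1 and σ_1 are the single-photoelectron gain and width; in practice the n = 0 term is replaced by a pedestal noise Gaussian. Deviations from this form are most visible at the very low light levels the paper targets.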

  18. Testing New Programming Paradigms with NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed aiming to scale up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made in defining new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage
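
    To make the contrast with message passing concrete, here is a minimal generic OpenMP example in C (not taken from the NPB sources): a single directive work-shares the loop and handles the reduction, with no explicit data distribution or communication.

        #include <stdio.h>

        #define N 1000000

        static double x[N];

        int main(void) {
            double sum = 0.0;

            for (int i = 0; i < N; i++)
                x[i] = 1.0 / (i + 1.0);

            /* One directive work-shares the loop across threads and
             * combines the per-thread partial sums automatically. */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++)
                sum += x[i];

            printf("sum = %f\n", sum); /* partial harmonic sum, ~14.39 */
            return 0;
        }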

  19. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need
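
    Schematically (our reading of the abstract, in conventional rigid-body notation rather than the authors' symbols), a sensor at position r_i on the rigid head measures the projection n̂_i · a_i of

        \mathbf{a}_i = \mathbf{a} + \boldsymbol{\alpha}\times\mathbf{r}_i + \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}_i),
        \qquad
        \boldsymbol{\omega}^{k+1} \approx \boldsymbol{\omega}^{k} + \boldsymbol{\alpha}^{k}\,\Delta t,

    so that, with ω carried forward by the finite-difference relation, six single-axis measurements yield six equations that are linear in the six unknowns (a, α) at each time step.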

  20. Enhanced Characterization of Niobium Surface Topography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Xu, Hui Tian, Charles Reece, Michael Kelley

    2011-12-01

    Surface topography characterization is a continuing issue for the Superconducting Radio Frequency (SRF) particle accelerator community. Efforts are underway both to improve surface topography and to improve its characterization and analysis using various techniques. In measurement of topography, the Power Spectral Density (PSD) is a promising method to quantify typical surface parameters and develop scale-specific interpretations. PSD can also be used to indicate how chemical processes modify the topography at different scales. However, generating an accurate and meaningful topographic PSD of an SRF surface requires careful analysis and optimization. In this report, polycrystalline surfaces with different process histories are sampled with AFM and stylus/white-light interferometer profilometers and analyzed to trace topography evolution at different scales during etching or polishing. Moreover, an optimized PSD analysis protocol to serve SRF surface characterization needs is presented.
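
    For context, a common definition of the power spectral density of a one-dimensional profile z(x) is

        \mathrm{PSD}(f) = \lim_{L \to \infty} \frac{1}{L} \left| \int_{0}^{L} z(x)\, e^{-2\pi i f x}\, dx \right|^{2},

    estimated in practice from finite, discretely sampled scans. The optimized protocol referred to above concerns exactly this estimation step (instrument bandwidth, scan length, windowing and averaging); the definition here is the generic one, not necessarily the report's.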

  1. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually very simple, yet in practice complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a

  2. An Accurate Mass Determination for Kepler-1655b, a Moderately Irradiated World with a Significant Volatile Envelope

    NASA Astrophysics Data System (ADS)

    Haywood, Raphaëlle D.; Vanderburg, Andrew; Mortier, Annelies; Giles, Helen A. C.; López-Morales, Mercedes; Lopez, Eric D.; Malavolta, Luca; Charbonneau, David; Collier Cameron, Andrew; Coughlin, Jeffrey L.; Dressing, Courtney D.; Nava, Chantanelle; Latham, David W.; Dumusque, Xavier; Lovis, Christophe; Molinari, Emilio; Pepe, Francesco; Sozzetti, Alessandro; Udry, Stéphane; Bouchy, François; Johnson, John A.; Mayor, Michel; Micela, Giusi; Phillips, David; Piotto, Giampaolo; Rice, Ken; Sasselov, Dimitar; Ségransan, Damien; Watson, Chris; Affer, Laura; Bonomo, Aldo S.; Buchhave, Lars A.; Ciardi, David R.; Fiorenzano, Aldo F.; Harutyunyan, Avet

    2018-05-01

    We present the confirmation of a small, moderately irradiated (F = 155 ± 7 F⊕) Neptune with a substantial gas envelope in a P = 11.8728787 ± 0.0000085 day orbit about a quiet, Sun-like G0V star Kepler-1655. Based on our analysis of the Kepler light curve, we determined Kepler-1655b’s radius to be 2.213 ± 0.082 R⊕. We acquired 95 high-resolution spectra with Telescopio Nazionale Galileo/HARPS-N, enabling us to characterize the host star and determine an accurate mass for Kepler-1655b of 5.0 (+3.1, −2.8) M⊕ via Gaussian-process regression. Our mass determination excludes an Earth-like composition with 98% confidence. Kepler-1655b falls on the upper edge of the evaporation valley, in the relatively sparsely occupied transition region between rocky and gas-rich planets. It is therefore part of a population of planets that we should actively seek to characterize further.

  3. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, related to the activities of the CERGOP Study Group Geodynamics of the Balkan Peninsula, presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components ξ, η of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates at a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed through a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components ξ, η of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods cannot give accurate and reliable results.
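
    The standard astrogeodetic relations underlying such a method (textbook forms; the paper's exact formulation may differ) connect the deflection components to the astronomical coordinates (Φ, Λ) and geodetic coordinates (φ, λ), and the geoid variation to an integral along the leveling line of azimuth α:

        \xi = \Phi - \varphi, \qquad \eta = (\Lambda - \lambda)\cos\varphi,
        \qquad
        \Delta N_{AB} = -\int_{A}^{B} \left( \xi\cos\alpha + \eta\sin\alpha \right) ds.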

  4. Acoustic Characterization of Grass-cover Ground

    DTIC Science & Technology

    2014-11-20

    for noise and reverberation control. Examples of porous media are cements, ceramics, rocks, building insulation, foams and soil. Characterizing the...To perform the calibration of the tube, an absorbing material with known acoustic properties is used. A sample of Melamine foam, 5 cm thick, was used...system was calibrated using materials with known acoustic properties in order to confirm accurate measurement of the system. Melamine foam 5 cm (1.97 in

  5. Magnetic Field Generation and B-Dot Sensor Characterization in the High Frequency Band

    DTIC Science & Technology

    2012-03-01

    AFIT/GE/ENG/12-20 Abstract: Designing a high frequency (HF)...large wavelengths in the HF range make it difficult to accurately estimate from which direction a magnetic field is emitting. Accurate DF estimates are...necessary for search and rescue operations and geolocating RF emitters of interest. The primary goal of this research is to characterize the

  6. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  7. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091

  8. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  9. Characterization of lipopeptides produced by Bacillus licheniformis using liquid chromatography with accurate tandem mass spectrometry.

    PubMed

    Favaro, Gabriella; Bogialli, Sara; Di Gangi, Iole Maria; Nigris, Sebastiano; Baldan, Enrico; Squartini, Andrea; Pastore, Paolo; Baldan, Barbara

    2016-10-30

    The plant endophyte Bacillus licheniformis, isolated from leaves of Vitis vinifera, was studied to identify and characterize bioactive lipopeptides having amino acidic structures. Crude extracts of liquid cultures were analyzed by ultra-high-performance liquid chromatography (UHPLC) coupled to a quadrupole time-of-flight (QTOF) mass analyzer. Chromatographic conditions were optimized in order to obtain an efficient separation of the different isobaric lipopeptides, avoiding merged fragmentations of co-eluted isomeric compounds and reducing possible cross-talk phenomena. The composition of the amino acids was outlined through the interpretation of the fragmentation behavior in tandem high-resolution mass spectrometry (HRMS/MS) mode, which showed both common-class and peculiar fragment ions. Both [M + H](+) and [M + Na](+) precursor ions were fragmented in order to differentiate some isobaric amino acids, i.e. Leu/Ile. Neutral losses characteristic of the iso acyl chain were also evidenced. More than 90 compounds belonging to the classes of surfactins and lichenysins, known as biosurfactant molecules, were detected. Sequential LC/HRMS/MS analysis was used to identify linear and cyclic lipopeptides, and to single out the presence of a large number of isomers not previously reported. Some critical issues related to the simultaneous selection of different compounds by the quadrupole filter were highlighted and partially solved, leading to tentative assignments of several structures. Linear lichenysins are described here for the first time. The approach proved useful for the characterization of non-target lipopeptides, and proposes a rational MS experimental scheme aimed at investigating differences in the amino acid sequence and/or in the acyl chain of the various congeners when standards are not available. The results expanded the knowledge about the production of linear and cyclic bioactive compounds from Bacillus licheniformis, clarifying the

  10. Advanced eddy current test signal analysis for steam generator tube defect classification and characterization

    NASA Astrophysics Data System (ADS)

    McClanahan, James Patrick

    Eddy Current Testing (ECT) is a Non-Destructive Examination (NDE) technique that is widely used in power generating plants (both nuclear and fossil) to test the integrity of heat exchanger (HX) and steam generator (SG) tubing. Specifically for this research, laboratory-generated, flawed tubing data were examined. The purpose of this dissertation is to develop and implement an automated method for the classification and an advanced characterization of defects in HX and SG tubing. These two improvements enhanced the robustness of characterization as compared to traditional bobbin-coil ECT data analysis methods. A more robust classification and characterization of the tube flaw in-situ (while the SG is on-line but not when the plant is operating), should provide valuable information to the power industry. The following are the conclusions reached from this research. A feature extraction program acquiring relevant information from both the mixed, absolute and differential data was successfully implemented. The CWT was utilized to extract more information from the mixed, complex differential data. Image Processing techniques used to extract the information contained in the generated CWT, classified the data with a high success rate. The data were accurately classified, utilizing the compressed feature vector and using a Bayes classification system. An estimation of the upper bound for the probability of error, using the Bhattacharyya distance, was successfully applied to the Bayesian classification. The classified data were separated according to flaw-type (classification) to enhance characterization. The characterization routine used dedicated, flaw-type specific ANNs that made the characterization of the tube flaw more robust. The inclusion of outliers may help complete the feature space so that classification accuracy is increased. Given that the eddy current test signals appear very similar, there may not be sufficient information to make an extremely accurate (>95

  11. What Scientific Applications can Benefit from Hardware Transactional Memory?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schindewolf, M; Bihari, B; Gyllenhaal, J

    2012-06-04

    Achieving efficient and correct synchronization of multiple threads is a difficult and error-prone task at small scale and, as we march towards extreme scale computing, will be even more challenging when the resulting application is supposed to utilize millions of cores efficiently. Transactional Memory (TM) is a promising technique to ease the burden on the programmer, but has only recently become available on commercial hardware in the new Blue Gene/Q system, and hence its real benefit for realistic applications has not yet been studied. This paper presents the first performance results of TM embedded into OpenMP on a prototype system of BG/Q and characterizes code properties that will likely lead to benefits when augmented with TM primitives. We first study the influence of thread count, environment variables and memory layout on TM performance and identify code properties that will yield performance gains with TM. Second, we evaluate the combination of OpenMP with multiple synchronization primitives on top of MPI to determine suitable task-to-thread ratios per node. Finally, we condense our findings into a set of best practices. These are applied to a Monte Carlo Benchmark and a Smoothed Particle Hydrodynamics method. In both cases an optimized TM version, executed with 64 threads on one node, outperforms a simple TM implementation. MCB with optimized TM yields a speedup of 27.45 over baseline.
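
    The flavor of the code pattern involved can be sketched as follows (illustrative only; the "tm_atomic" directive below mimics the transactional pragma of the IBM XL compilers for BG/Q, and its exact name and semantics are compiler-specific assumptions, not taken from the paper):

        #include <stdio.h>

        #define NBINS 64
        #define N 100000

        int main(void) {
            double hist[NBINS] = {0.0};

            #pragma omp parallel for
            for (int i = 0; i < N; i++) {
                int b = (i * 31) % NBINS;   /* stand-in for a data-dependent bin */
                double w = 1.0 / (i + 1.0);
                /* Conflicting updates are rare here, which is the regime in
                 * which a hardware transaction can beat a critical section
                 * or fine-grained locks. */
                #pragma tm_atomic
                {
                    hist[b] += w;
                }
            }

            printf("hist[0] = %f\n", hist[0]);
            return 0;
        }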

  12. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  13. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  14. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  15. HIFU Transducer Characterization Using a Robust Needle Hydrophone

    NASA Astrophysics Data System (ADS)

    Howard, Samuel M.; Zanelli, Claudio I.

    2007-05-01

    A robust needle hydrophone has been developed for HIFU transducer characterization and reported on earlier. After a brief review of the hydrophone design and performance, we demonstrate its use to characterize a 1.5 MHz, 10 cm diameter, F-number 1.5 spherically focused source driven to exceed an intensity of 1400 W/cm² at its focus. Quantitative characterization of this source at high powers is assisted by deconvolving the hydrophone's calibrated frequency response in order to accurately reflect the contribution of harmonics generated by nonlinear propagation in the water testing environment. Results are compared to measurements with a membrane hydrophone at 0.3% duty cycle and to theoretical calculations, using measurements of the field at the source's radiating surface as input to a numerical solution of the KZK equation.
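
    Schematically, deconvolving the calibrated frequency response amounts to a spectral division (the generic relation, not necessarily the authors' full processing chain):

        p(f) = \frac{V(f)}{M(f)},

    where V(f) is the spectrum of the measured hydrophone voltage and M(f) the calibrated sensitivity (V/Pa); the pressure waveform, including the nonlinearly generated harmonics, follows by inverse transform.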

  16. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    NASA Astrophysics Data System (ADS)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
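
    Charge rescaling of this kind is usually motivated by the electronic continuum correction, in which ionic charges are scaled by the inverse square root of the electronic (high-frequency) dielectric constant of the solvent. For water this gives, as an orientation (conventional values quoted for context, not necessarily the exact Ca²⁺ parameters adopted in the paper),

        q_{\mathrm{scaled}} = \frac{q}{\sqrt{\varepsilon_{\mathrm{el}}}} \approx \frac{q}{\sqrt{1.78}} \approx 0.75\, q.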

  17. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    NASA Technical Reports Server (NTRS)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to scalability issues have affected the take-up of this programming model. Significant progress has been made in hardware and software technologies, and as a result the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit to the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.
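
    As a hand-written analogue of the kind of directive placement such a toolkit automates (illustrative only, not actual toolkit output): the outer loop below carries no data dependence across iterations, so it can be work-shared, and the loop-scoped temporary is automatically private to each thread.

        /* Each (i, j) iteration writes only b[i][j], so the outer loop is
         * safely work-shared; t is declared inside the loop and is thus
         * private per thread. */
        void smooth(int n, double a[n][n], double b[n][n]) {
            #pragma omp parallel for
            for (int i = 1; i < n - 1; i++) {
                for (int j = 1; j < n - 1; j++) {
                    double t = a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1];
                    b[i][j] = 0.25 * t;
                }
            }
        }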

  18. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  19. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8

  20. Systems Characterization of Combustor Instabilities With Controls Design Emphasis

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2004-01-01

    This effort analyzed test data in order to characterize the general behavior of combustor instabilities, with emphasis on controls design. The analysis is performed on data obtained from two configurations of a laboratory combustor rig and from a developmental aero-engine combustor. The study has characterized several dynamic behaviors associated with combustor instabilities. These are: frequency and phase randomness, amplitude modulations, net random phase walks, random noise, exponential growth and intra-harmonic couplings. Finally, the underlying cause of combustor instabilities was explored; it could be attributed to a more general source-load impedance interaction that includes the thermo-acoustic coupling. Performing these characterizations on different combustors allows for more accurate identification of the cause of these phenomena and their effect on instability.

  1. Digital Mapping and Environmental Characterization of National Wild and Scenic River Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Bosnall, Peter; Hetrick, Shelaine L

    2013-09-01

    Spatially accurate geospatial information is required to support decision-making regarding sustainable future hydropower development. Under a memorandum of understanding among several federal agencies, a pilot study was conducted to map a subset of National Wild and Scenic Rivers (WSRs) at a higher resolution and provide a consistent methodology for mapping WSRs across the United States and across agency jurisdictions. A subset of rivers (segments falling under the jurisdiction of the National Park Service) were mapped at a high resolution using the National Hydrography Dataset (NHD). The spatial extent and representation of river segments mapped at NHD scale were compared with the prevailing geospatial coverage mapped at a coarser scale. Accurately digitized river segments were linked to environmental attribution datasets housed within the Oak Ridge National Laboratory's National Hydropower Asset Assessment Program database to characterize the environmental context of WSR segments. The results suggest that both the spatial scale of hydrography datasets and the adherence to written policy descriptions are critical to accurately mapping WSRs. The environmental characterization provided information to deduce generalized trends in either the uniqueness or the commonness of environmental variables associated with WSRs. Although WSRs occur in a wide range of human-modified landscapes, environmental data layers suggest that they provide habitats important to terrestrial and aquatic organisms and recreation important to humans. Ultimately, the research findings herein suggest that there is a need for accurate, consistent mapping of the National WSRs across the agencies responsible for administering each river. Geospatial applications examining potential landscape and energy development require accurate sources of information, such as data layers that portray realistic spatial representations.

  2. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386

  3. MEMS-based platforms for mechanical manipulation and characterization of cells

    NASA Astrophysics Data System (ADS)

    Pan, Peng; Wang, Wenhui; Ru, Changhai; Sun, Yu; Liu, Xinyu

    2017-12-01

    Mechanical manipulation and characterization of single cells are important experimental techniques in biological and medical research. Because of the microscale sizes and highly fragile structures of cells, conventional cell manipulation and characterization techniques are not sufficiently accurate and/or efficient, or even cannot meet the increasingly demanding needs of different types of cell-based studies. To this end, novel microelectromechanical systems (MEMS)-based technologies have been developed to improve the accuracy, efficiency, and consistency of various cell manipulation and characterization tasks, and to enable new types of cell research. This article summarizes existing MEMS-based platforms developed for cell mechanical manipulation and characterization, highlights the specific design considerations that make them suitable for their designated tasks, and discusses their advantages and limitations. In closing, an outlook into future trends is also provided.

  4. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia World Class Supercomputer, one of the world's fastest supercomputers, providing 61 TFLOPs (as of 10/20/04). Columbia was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, and provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors); at 88% efficiency it tops the scalar systems on the Top500 list.

  5. Towards a Transferable UAV-Based Framework for River Hydromorphological Characterization

    PubMed Central

    González, Rocío Ballesteros; Leinster, Paul; Wright, Ros

    2017-01-01

    The multiple protocols that have been developed to characterize river hydromorphology, partly in response to legislative drivers such as the European Union Water Framework Directive (EU WFD), make the comparison of results obtained in different countries challenging. Recent studies have analyzed the comparability of existing methods, with remote sensing based approaches being proposed as a potential means of harmonizing hydromorphological characterization protocols. However, the resolution achieved by remote sensing products may not be sufficient to assess some of the key hydromorphological features that are required to allow an accurate characterization. Methodologies based on high resolution aerial photography taken from Unmanned Aerial Vehicles (UAVs) have been proposed by several authors as potential approaches to overcome these limitations. Here, we explore the applicability of an existing UAV based framework for hydromorphological characterization to three different fluvial settings representing some of the distinct ecoregions defined by the WFD geographical intercalibration groups (GIGs). The framework is based on the automated recognition of hydromorphological features via tested and validated Artificial Neural Networks (ANNs). Results show that the framework is transferable to the Central-Baltic and Mediterranean GIGs with accuracies in feature identification above 70%. Accuracies of 50% are achieved when the framework is implemented in the Very Large Rivers GIG. The framework successfully identified vegetation, deep water, shallow water, riffles, side bars and shadows for the majority of the reaches. However, further algorithm development is required to ensure a wider range of features (e.g., chutes, structures and erosion) are accurately identified. This study also highlights the need to develop an objective and fit for purpose hydromorphological characterization framework to be adopted within all EU member states to facilitate comparison of results

  6. Towards a Transferable UAV-Based Framework for River Hydromorphological Characterization.

    PubMed

    Rivas Casado, Mónica; González, Rocío Ballesteros; Ortega, José Fernando; Leinster, Paul; Wright, Ros

    2017-09-26

    The multiple protocols that have been developed to characterize river hydromorphology, partly in response to legislative drivers such as the European Union Water Framework Directive (EU WFD), make the comparison of results obtained in different countries challenging. Recent studies have analyzed the comparability of existing methods, with remote sensing based approaches being proposed as a potential means of harmonizing hydromorphological characterization protocols. However, the resolution achieved by remote sensing products may not be sufficient to assess some of the key hydromorphological features that are required to allow an accurate characterization. Methodologies based on high resolution aerial photography taken from Unmanned Aerial Vehicles (UAVs) have been proposed by several authors as potential approaches to overcome these limitations. Here, we explore the applicability of an existing UAV based framework for hydromorphological characterization to three different fluvial settings representing some of the distinct ecoregions defined by the WFD geographical intercalibration groups (GIGs). The framework is based on the automated recognition of hydromorphological features via tested and validated Artificial Neural Networks (ANNs). Results show that the framework is transferable to the Central-Baltic and Mediterranean GIGs with accuracies in feature identification above 70%. Accuracies of 50% are achieved when the framework is implemented in the Very Large Rivers GIG. The framework successfully identified vegetation, deep water, shallow water, riffles, side bars and shadows for the majority of the reaches. However, further algorithm development is required to ensure a wider range of features (e.g., chutes, structures and erosion) are accurately identified. This study also highlights the need to develop an objective and fit for purpose hydromorphological characterization framework to be adopted within all EU member states to facilitate comparison of results.

  7. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  8. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm for a fast and accurate computation of integrals involving one or two spherical Bessel functions; our code is available at https://github.com/hsgg/twoFAST. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
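
    For orientation, the projection integrals in question take the following standard forms in one common normalization (conventions differ by constant factors between references; these are illustrative definitions, not necessarily the paper's exact ones):

        \xi_\ell^\nu(r) = \frac{1}{2\pi^2} \int_0^\infty \mathrm{d}k \, k^{2-\nu} P(k)\, j_\ell(kr),
        \qquad
        C_\ell(\chi,\chi') = \frac{2}{\pi} \int_0^\infty \mathrm{d}k \, k^2 P(k)\, j_\ell(k\chi)\, j_\ell(k\chi').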

  9. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  10. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  11. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
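
    As an illustration of the shared-memory pattern described (not MCBEND's actual source), a minimal OpenMP Monte Carlo tally in C might look like the following: all threads share the large read-only data in memory, while per-thread scores are combined with a reduction. The simulate_history routine is a hypothetical stand-in for particle transport.

        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Stand-in for one particle history: deposits a pseudo-random score.
         * Real codes share large geometry/cross-section tables among threads,
         * which is the memory saving that motivates OpenMP here. */
        static double simulate_history(unsigned int *seed)
        {
            return (double)rand_r(seed) / RAND_MAX;  /* per-thread POSIX RNG */
        }

        int main(void)
        {
            const long n_histories = 10000000L;
            double tally = 0.0;

            #pragma omp parallel
            {
                unsigned int seed = 12345u + 17u * (unsigned)omp_get_thread_num();
                #pragma omp for reduction(+:tally) schedule(static)
                for (long i = 0; i < n_histories; ++i)
                    tally += simulate_history(&seed);
            }
            printf("mean score = %f\n", tally / n_histories);
            return 0;
        }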

  12. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
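
    The core of the dual-spectrometer method is a time-lag estimate: with instruments a known distance d apart, the plume speed is v = d/τ, where τ is the lag that maximizes the cross-correlation of the two SO2 time series. A minimal sketch in C, assuming uniform sampling (no detrending, normalization, or sub-sample interpolation, which a real implementation would need):

        #include <stddef.h>

        /* Lag (in samples) maximizing the cross-correlation of two equally
         * sampled SO2 series a[] and b[]; plume speed is then
         * v = distance / (best_lag(...) * dt). Only non-negative lags are
         * scanned; swap a and b for the upwind sensor order. */
        long best_lag(const double *a, const double *b, size_t n, long max_lag)
        {
            long best = 0;
            double best_score = -1e300;
            for (long lag = 0; lag <= max_lag; ++lag) {
                double s = 0.0;
                for (size_t i = 0; i + (size_t)lag < n; ++i)
                    s += a[i] * b[i + (size_t)lag];
                if (s > best_score) { best_score = s; best = lag; }
            }
            return best;
        }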

  13. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
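
    To see the distinction concretely (a one-variable illustration of ours, not an example from the paper): if a response y(b) satisfies the sensitivity equation dy/db = p y/b, the linear Taylor series and the closed-form DEB approximations are

        \frac{\mathrm{d}y}{\mathrm{d}b} = p\,\frac{y}{b}
        \;\Longrightarrow\;
        y_{\mathrm{Taylor}} = y_0\Bigl(1 + p\,\frac{\Delta b}{b_0}\Bigr),
        \qquad
        y_{\mathrm{DEB}} = y_0\Bigl(\frac{b}{b_0}\Bigr)^{p},

    the latter being exact whenever y depends on b as a power law, as beam frequencies and displacements often do on sizing variables.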

  14. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  15. Characterizing dispersal patterns in a threatened seabird with limited genetic structure

    Treesearch

    Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery

    2009-01-01

    Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...

  16. Ultrasound measurement apparatus for liquids characterization

    NASA Astrophysics Data System (ADS)

    Vieira, R. C.; Costa-Felix, R. P. B.

    2018-03-01

    The present paper discloses the validation of an experimental ultrasound apparatus and method for liquids characterization. The research aims to establish a simple, reliable, accurate and portable way to identify contaminants in hydrocarbon substances, such as adulteration in gasoline. The results obtained so far demonstrate a general uncertainty of speed-of-sound assessment of less than 10 m s-1, and a distance accuracy of better than 1%. Those figures are good enough for an on-site device to evaluate possible contamination of fuels or other liquids.

  17. How many landmarks are enough to characterize shape and size variation?

    PubMed

    Watanabe, Akinobu

    2018-01-01

    Accurate characterization of morphological variation is crucial for generating reliable results and conclusions concerning changes and differences in form. Despite the prevalence of landmark-based geometric morphometric (GM) data in the scientific literature, a formal treatment of whether sampled landmarks adequately capture shape variation has remained elusive. Here, I introduce LaSEC (Landmark Sampling Evaluation Curve), a computational tool to assess the fidelity of morphological characterization by landmarks. This task is achieved by calculating how subsampled data converge to the pattern of shape variation in the full dataset as landmark sampling is increased incrementally. While the number of landmarks needed to adequately capture shape variation depends on the individual dataset, LaSEC helps the user (1) identify under- and oversampling of landmarks; (2) assess robustness of morphological characterization; and (3) determine the number of landmarks that can be removed without compromising shape information. In practice, this knowledge could reduce the time and cost associated with data collection, maintain statistical power in certain analyses, and enable the incorporation of incomplete, but important, specimens into the dataset. Results based on simulated shape data also reveal general properties of landmark data, including statistical consistency, whereby sampling additional landmarks tends to asymptotically improve the accuracy of morphological characterization. As landmark-based GM data become more widely adopted, LaSEC provides a systematic approach to evaluate and refine the collection of shape data, a goal paramount for the accumulation and analysis of accurate morphological information.

  18. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov

    2016-06-15

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.
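
    Nested OpenMP of the kind mentioned can be sketched as follows (an illustration only; the XGC codes' actual structure is far more involved): an outer team distributes independent work blocks while inner teams parallelize the work within each block. Nested parallelism must be enabled explicitly.

        #include <omp.h>
        #include <stdio.h>

        int main(void)
        {
            enum { NBLOCKS = 4, NCELLS = 1000000 };
            static double cell[NBLOCKS][NCELLS];

            omp_set_max_active_levels(2);   /* allow two nested levels */

            /* Outer team: one thread per independent block. */
            #pragma omp parallel for num_threads(NBLOCKS)
            for (int b = 0; b < NBLOCKS; ++b) {
                /* Inner team: parallelize the work inside each block. */
                #pragma omp parallel for num_threads(4)
                for (int i = 0; i < NCELLS; ++i)
                    cell[b][i] = b + 1e-6 * i;   /* stand-in for a kernel */
            }
            printf("done: %f\n", cell[NBLOCKS - 1][NCELLS - 1]);
            return 0;
        }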

  19. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE PAGES

    Hager, Robert; Yoon, E. S.; Ku, S.; ...

    2016-04-04

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. As a result, the collision operator's good weak and strong scaling behavior is shown.

  20. High-accurate optical vector analysis based on optical single-sideband modulation

    NASA Astrophysics Data System (ADS)

    Xue, Min; Pan, Shilong

    2016-11-01

    Most of the efforts devoted to the area of optical communications have focused on improving the optical spectral efficiency. Various innovative optical devices have thus been developed to finely manipulate the optical spectrum. Knowing the spectral responses of these devices, including the magnitude, phase and polarization responses, is of great importance for their fabrication and application. To achieve high-resolution characterization, optical vector analyzers (OVAs) based on optical single-sideband (OSSB) modulation have been proposed and developed. Benefiting from mature and high-resolution microwave technologies, the OSSB-based OVA can potentially achieve a resolution of sub-Hz. However, the accuracy is restricted by the measurement errors induced by the unwanted first-order sideband and the high-order sidebands in the OSSB signal, since electrical-to-optical and optical-to-electrical conversions are essentially required to achieve high-resolution frequency sweeping and to extract the magnitude and phase information in the electrical domain. Recently, great efforts have been devoted to improving the accuracy of the OSSB-based OVA. In this paper, the influence of the unwanted-sideband-induced measurement errors and techniques for implementing highly accurate OSSB-based OVAs are discussed.

  1. Roofline model toolkit: A practical tool for architectural and program analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian

    We present preliminary results of the Roofline Toolkit for multicore, many-core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented microbenchmarks implemented with the Message Passing Interface (MPI) and with OpenMP used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
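
    To give a flavor of such a microbenchmark (our sketch, not the toolkit's code): a streaming "triad" kernel that, timed at varying working-set sizes, traces out the bandwidth ceiling of each memory level for a Roofline plot. NUMA first-touch placement and warm-up runs are ignored here for brevity.

        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const size_t n = (size_t)1 << 24;   /* 2^24 doubles (~128 MiB) per array */
            double *a = malloc(n * sizeof *a);
            double *b = malloc(n * sizeof *b);
            double *c = malloc(n * sizeof *c);
            for (size_t i = 0; i < n; ++i) { b[i] = 1.0; c[i] = 2.0; }

            double t0 = omp_get_wtime();
            #pragma omp parallel for schedule(static)
            for (size_t i = 0; i < n; ++i)
                a[i] = b[i] + 3.0 * c[i];   /* 2 flops, ~24 bytes per iteration */
            double dt = omp_get_wtime() - t0;

            printf("bandwidth ~ %.1f GB/s\n", 24.0 * (double)n / dt / 1e9);
            free(a); free(b); free(c);
            return 0;
        }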

  2. Characterization of lipid-rich plaques using spectroscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nam, Hyeong Soo; Song, Joon Woo; Jang, Sun-Joo; Lee, Jae Joong; Oh, Wang-Yuhl; Kim, Jin Won; Yoo, Hongki

    2016-07-01

    Intravascular optical coherence tomography (IV-OCT) is a high-resolution imaging method used to visualize the internal structures of walls of coronary arteries in vivo. However, accurate characterization of atherosclerotic plaques with gray-scale IV-OCT images is often limited by various intrinsic artifacts. In this study, we present an algorithm for characterizing lipid-rich plaques with a spectroscopic OCT technique based on a Gaussian center of mass (GCOM) metric. The GCOM metric, which reflects the absorbance properties of lipids, was validated using a lipid phantom. In addition, the proposed characterization method was successfully demonstrated in vivo using an atherosclerotic rabbit model and was found to have a sensitivity and specificity of 94.3% and 76.7% for lipid classification, respectively.

  3. Identification and accurate quantification of structurally related peptide impurities in synthetic human C-peptide by liquid chromatography-high resolution mass spectrometry.

    PubMed

    Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei

    2018-06-04

    Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately value assign the purity of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison looking at the competencies of laboratories to perform peptide purity mass fraction assignments. More than 65 impurities were identified, confirmed, and accurately quantified by using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.

  4. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.

  5. Are Registration of Disease Codes for Adult Anaphylaxis Accurate in the Emergency Department?

    PubMed Central

    Choi, Byungho; Lee, Hyeji

    2018-01-01

    Purpose There has been active research on anaphylaxis, but many study subjects are limited to patients registered with anaphylaxis codes. However, anaphylaxis codes tend to be underused. The aim of this study was to investigate the accuracy of anaphylaxis code registration and the clinical characteristics of accurate and inaccurate anaphylaxis registration in anaphylactic patients. Methods This retrospective study evaluated the medical records of adult patients who visited the university hospital emergency department between 2012 and 2016. The study subjects were divided into an accurate coding group, registered under anaphylaxis codes, and an inaccurate coding group, registered under other allergy-related or symptom-related codes. Results Among 211,486 patients, 618 (0.29%) had anaphylaxis. Of these, 161 and 457 were assigned to the accurate and inaccurate coding groups, respectively. The average age, transportation to the emergency department, past anaphylaxis history, cancer history, and the cause of anaphylaxis differed between the 2 groups. Cutaneous symptoms manifested more frequently in the inaccurate coding group, while cardiovascular and neurologic symptoms were more frequently observed in the accurate group. Severe symptoms and non-alert consciousness were more common in the accurate group. Oxygen supply, intubation, and epinephrine were more commonly used as treatments for anaphylaxis in the accurate group. Anaphylactic patients with cardiovascular symptoms, severe symptoms, and epinephrine use were more likely to be accurately registered with anaphylaxis disease codes. Conclusions In cases of anaphylaxis, more patients were registered inaccurately under other allergy-related codes and symptom-related codes than accurately under anaphylaxis disease codes. Cardiovascular symptoms, severe symptoms, and epinephrine treatment were factors associated with accurate registration with anaphylaxis disease codes in patients with anaphylaxis. PMID:29411554

  6. Seeing and Being Seen: Predictors of Accurate Perceptions about Classmates’ Relationships

    PubMed Central

    Neal, Jennifer Watling; Neal, Zachary P.; Cappella, Elise

    2015-01-01

    This study examines predictors of observer accuracy (i.e. seeing) and target accuracy (i.e. being seen) in perceptions of classmates’ relationships in a predominantly African American sample of 420 second through fourth graders (ages 7 – 11). Girls, children in higher grades, and children in smaller classrooms were more accurate observers. Targets (i.e. pairs of children) were more accurately observed when they occurred in smaller classrooms of higher grades and involved same-sex, high-popularity, and similar-popularity children. Moreover, relationships between pairs of girls were more accurately observed than relationships between pairs of boys. As a set, these findings suggest the importance of both observer and target characteristics for children’s accurate perceptions of classroom relationships. Moreover, the substantial variation in observer accuracy and target accuracy has methodological implications for both peer-reported assessments of classroom relationships and the use of stochastic actor-based models to understand peer selection and socialization processes. PMID:26347582

  7. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures or are not as powerful or flexible. In this paper, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by the user or high-level programming model. We describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency hiding capabilities; and (4) I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.
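
    A minimal "hello world" of user-level threads (ULTs) in the Argobots style might look like the sketch below. The API names follow the project's public examples as best recalled and should be checked against abt.h for your version; this is an orientation sketch, not authoritative usage.

        #include <stdio.h>
        #include <abt.h>   /* Argobots public header */

        #define N 8

        static void hello(void *arg)
        {
            printf("ULT %d\n", (int)(size_t)arg);
        }

        int main(int argc, char **argv)
        {
            ABT_xstream xstream;   /* execution stream = OS-level thread */
            ABT_pool pool;
            ABT_thread ult[N];     /* user-level threads, cheap to create */

            ABT_init(argc, argv);
            ABT_xstream_self(&xstream);
            ABT_xstream_get_main_pools(xstream, 1, &pool);

            for (size_t i = 0; i < N; ++i)
                ABT_thread_create(pool, hello, (void *)i,
                                  ABT_THREAD_ATTR_NULL, &ult[i]);
            for (size_t i = 0; i < N; ++i)
                ABT_thread_free(&ult[i]);   /* joins, then releases the ULT */

            ABT_finalize();
            return 0;
        }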

  8. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    DOE PAGES

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan; ...

    2017-10-24

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures or are not as powerful or flexible. In this article, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by the user or high-level programming model. Here, we describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency hiding capabilities; and (4) I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.

  9. Parallel protein secondary structure prediction based on neural networks.

    PubMed

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers of protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. Hydrophobicity matrix, orthogonal matrix, BLOSUM62 and PSSM (position specific scoring matrix) are tested separately as the encoding schemes for DBNN. The experimental results contribute to the design of new encoding schemes. A new binary classifier for Helix versus not Helix (~H) for DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to other best prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthreads and OpenMP are employed to parallelize DBNN on the hyperthreading-enabled Intel architecture. Speedup for 16 Pthreads is 4.9 and speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology on Intel architecture is efficient for parallel biological algorithms.
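
    The parallelization pattern described, distributing independent training examples across threads, can be sketched in C with OpenMP. This is our illustration of the general data-parallel scheme, not the DBNN code itself; train_one is a hypothetical per-example gradient routine.

        #include <stddef.h>
        #include <stdlib.h>

        /* One data-parallel training pass: each thread accumulates gradients
         * into a private buffer, reduced under a critical section afterwards,
         * so threads never contend on the shared weight vector. */
        void train_epoch(const double *examples, size_t n_examples, size_t dim,
                         const double *weights, double *grad, size_t n_weights,
                         void (*train_one)(const double *example,
                                           const double *weights,
                                           double *grad_local))
        {
            for (size_t j = 0; j < n_weights; ++j) grad[j] = 0.0;

            #pragma omp parallel
            {
                double *local = calloc(n_weights, sizeof *local);
                #pragma omp for schedule(static)
                for (size_t i = 0; i < n_examples; ++i)
                    train_one(&examples[i * dim], weights, local);
                #pragma omp critical
                for (size_t j = 0; j < n_weights; ++j) grad[j] += local[j];
                free(local);
            }
        }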

  10. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures or are not as powerful or flexible. In this article, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by the user or high-level programming model. Here, we describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency hiding capabilities; and (4) I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.

  11. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  12. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  13. Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.

    ERIC Educational Resources Information Center

    Zuckerman, June Trop

    This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…

  14. Highly accurate nephelometric titrimetry.

    PubMed

    Zhan, Xiancheng; Li, Chengrong; Li, Zhiyi; Yang, Xiucen; Zhong, Shuguang; Yi, Tao

    2004-02-01

    A method that accurately indicates the end-point of precipitation reactions by the measurement of the relative intensity of the scattered light in the titrate is presented. A new nephelometric titrator with an internal nephelometric sensor has been devised. The operation of the titrator, including the sensor, and the changes in the turbidity of the titrate and the intensity of the scattered light are described. The accuracy of the nephelometric titrimetry is discussed theoretically. The titration of NaCl with AgNO(3) serves as a model. A relative error as well as deviation is within 0.2% under the experimental conditions. The applicability of the titrimetry in pharmaceutical analyses, for example, phenytoin sodium and procaine hydrochloride, is generally illustrated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association

  15. Integrated Translatome and Proteome: Approach for Accurate Portraying of Widespread Multifunctional Aspects of Trichoderma

    PubMed Central

    Sharma, Vivek; Salwan, Richa; Sharma, P. N.; Gulati, Arvind

    2017-01-01

    Genome-wide studies of transcript expression help in the systematic monitoring of genes and allow targeting of candidate genes for future research. In contrast to relatively stable genomic data, the expression of genes is dynamic and regulated at multiple levels in both time and space. The variation in the rate of translation is specific for each protein. Both the inherent nature of an mRNA molecule to be translated and external environmental stimuli can affect the efficiency of the translation process. In biocontrol agents (BCAs), the molecular response at the translational level may represent both a noise-like response of the absolute transcript level and an adaptive response to physiological and pathological situations, reflecting the subset of the mRNA population actively translated in a cell. The molecular responses of biocontrol are complex and involve multistage regulation of a number of genes. The use of high-throughput techniques has led to a rapid increase in the volume of transcriptomics data for Trichoderma. In general, almost half of the variation between transcriptome and protein levels is due to translational control. Thus, studies are required to integrate raw information from different "omics" approaches for an accurate depiction of the translational response of BCAs in interaction with plants and plant pathogens. Studies of the translational status of actively translated mRNAs, bridged with proteome data, will help in the accurate characterization of the subset of mRNAs actively engaged in translation. This review highlights the associated bottlenecks and the use of state-of-the-art procedures in addressing the gap to accelerate future accomplishment of biocontrol mechanisms. PMID:28900417

  16. Landsat TM memory effect characterization and correction

    USGS Publications Warehouse

    Helder, D.; Boncyk, W.; Morfitt, R.

    1997-01-01

    Before radiometric calibration of Landsat Thematic Mapper (TM) data can be done accurately, it is necessary to minimize the effects of artifacts present in the data that originate in the instrument's signal processing path. These artifacts have been observed in downlinked image data since shortly after launch of Landsat 4 and 5. However, no comprehensive work has been done to characterize all the artifacts and develop methods for their correction. In this paper, the most problematic artifact is discussed: memory effect (ME). Characterization of this artifact is presented, including the parameters necessary for its correction. In addition, a correction algorithm is described that removes the artifact from TM imagery. It will be shown that this artifact causes significant radiometry errors, but the effect can be removed in a straightforward manner.

  17. Accurate vehicle classification including motorcycles using piezoelectric sensors.

    DOT National Transportation Integrated Search

    2013-03-01

    State and federal departments of transportation are charged with classifying vehicles and monitoring mileage traveled. Accurate data reporting enables suitable roadway design for safety and capacity. Vehicle classifiers currently employ inductive loo...

  18. How many atoms are required to characterize accurately trajectory fluctuations of a protein?

    NASA Astrophysics Data System (ADS)

    Cukier, Robert I.

    2010-06-01

    Large molecules, whose thermal fluctuations sample a complex energy landscape, exhibit motions on an extended range of space and time scales. Principal component analysis (PCA) is often used to extract dominant motions that in proteins are typically domain motions. These motions are captured in the large eigenvalue (leading) principal components. There is also information in the small eigenvalues, arising from approximate linear dependencies among the coordinates. These linear dependencies suggest that instead of using all the atom coordinates to represent a trajectory, it should be possible to use a reduced set of coordinates with little loss in the information captured by the large eigenvalue principal components. In this work, methods that can monitor the correlation (overlap) between a reduced set of atoms and any number of retained principal components are introduced. For application to trajectory data generated by simulations, where the overall translational and rotational motion needs to be eliminated before PCA is carried out, some difficulties with the overlap measures arise and methods are developed to overcome them. The overlap measures are evaluated for a trajectory generated by molecular dynamics for the protein adenylate kinase, which consists of a stable, core domain, and two more mobile domains, referred to as the LID domain and the AMP-binding domain. The use of reduced sets corresponding, for the smallest set, to one-eighth of the alpha carbon (CA) atoms relative to using all the CA atoms is shown to predict the dominant motions of adenylate kinase. The overlap between using all the CA atoms and all the backbone atoms is essentially unity for a sum over PCA modes that effectively capture the exact trajectory. A reduction to a few atoms (three in the LID and three in the AMP-binding domain) shows that at least the first principal component, characterizing a large part of the LID-binding and AMP-binding motion, is well described. Based on these
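
    For orientation, a standard overlap measure between two sets of principal components in the PCA literature, offered here as context and not necessarily the exact measure developed in this paper, is the root mean square inner product:

        \mathrm{RMSIP} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(\mathbf{v}_i\cdot\mathbf{w}_j\bigr)^{2}},

    where v_i and w_j are the first n principal components of the two representations, compared on their shared coordinates (e.g., the retained subset of CA atoms).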

  19. Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  20. Investigation into accurate mass capability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, with respect to radical ion species.

    PubMed

    Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth

    2006-05-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) has been shown to be an effective technique for the characterization of organometallic, coordination, and highly conjugated compounds. The preferred matrix is 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene]malononitrile (DCTB), with radical ions observed. However, MALDI-TOFMS is generally not favored for accurate mass measurement. A specific method had to be developed for such compounds to assure the quality of our accurate mass results. Therefore, in this preliminary study, two methods of data acquisition, and both even-electron (EE+) ion and odd-electron (OE+.) radical ion mass calibration standards, have been investigated to establish the basic measurement technique. The benefit of this technique is demonstrated for a copper compound for which ions were observed by MALDI, but not by electrospray (ESI) or liquid secondary ion mass spectrometry (LSIMS); a mean mass accuracy error of -1.2 ppm was obtained.

  1. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed which includes a GPS receiver, a 1 MHz frequency source and a self-made clock system. The one-pulse-per-second signal from GPS is used to synchronize the clock system, and information can be collected automatically by a computer. The difficulty of eliminating the human time keeper can be overcome by using this system.

  2. A combined parasitological molecular approach for noninvasive characterization of parasitic nematode communities in wild hosts

    USDA-ARS?s Scientific Manuscript database

    Most hosts are concurrently or sequentially infected with multiple parasites, thus fully understanding interactions between individual parasite species and their hosts depends on accurate characterization of the parasite community. For parasitic nematodes, non-invasive methods for obtaining quantita...

  3. Opto-electronic characterization of third-generation solar cells.

    PubMed

    Neukom, Martin; Züfle, Simon; Jenatsch, Sandra; Ruhstaller, Beat

    2018-01-01

    We present an overview of opto-electronic characterization techniques for solar cells, including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified.

  4. Design and development of a profilometer for the fast and accurate characterization of optical surfaces

    NASA Astrophysics Data System (ADS)

    Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.

    2015-09-01

    With the advent in recent years of techniques devised for the mass production of optical components made with surfaces of arbitrary form (also known as free-form surfaces), a parallel development of measuring systems adapted for this new kind of surface constitutes a real necessity for the industry. Profilometry is one of the preferred methods for the assessment of the quality of a surface, and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of movement, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus is formed by three X, Y, Z linear motorized positioners plus an additional angular positioner and a tilt positioner, employed to locate accurately the surface to be measured and the probe, which can be mechanical or optical, the optical one being a confocal sensor based on chromatic aberration. Both optical and mechanical probes guarantee an accuracy better than one micrometer in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed in order to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics, based on an Arduino™ controller, have been designed and produced in order to synchronize the five motorized positioners and the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvatures of around 10 D can be measured in less than 300 seconds (using three axes) while keeping the resolution in height and curvature at the figures mentioned above.

  5. Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorrer, C.; Kang, I.

    2008-04-04

    Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.

  6. Accurate description of charged excitations in molecular solids from embedded many-body perturbation theory

    NASA Astrophysics Data System (ADS)

    Li, Jing; D'Avino, Gabriele; Duchemin, Ivan; Beljonne, David; Blase, Xavier

    2018-01-01

    We present a novel hybrid quantum/classical approach to the calculation of charged excitations in molecular solids based on the many-body Green's function GW formalism. Molecules described at the GW level are embedded into the crystalline environment modeled with an accurate classical polarizable scheme. This allows the calculation of electron addition and removal energies in the bulk and at crystal surfaces where charged excitations are probed in photoelectron experiments. By considering the paradigmatic case of pentacene and perfluoropentacene crystals, we discuss the different contributions from intermolecular interactions to electronic energy levels, distinguishing between polarization, which is accounted for by combining quantum and classical polarizabilities, and crystal field effects, which can impact energy levels by up to ±0.6 eV. After introducing band dispersion, we achieve quantitative agreement (within 0.2 eV) on the ionization potential and electron affinity measured at pentacene and perfluoropentacene crystal surfaces characterized by standing molecules.

  7. Accurate characterization of carcinogenic DNA adducts using MALDI tandem time-of-flight mass spectrometry

    NASA Astrophysics Data System (ADS)

    Barnes, Charles A.; Chiu, Norman H. L.

    2009-01-01

    Many chemical carcinogens and their in vivo activated metabolites react readily with genomic DNA, and form covalently bound carcinogen-DNA adducts. Clinically, carcinogen-DNA adducts have been linked to various cancer diseases. Among the current methods for DNA adduct analysis, the mass spectrometric method allows the direct measurement of unlabeled DNA adducts. The goal of this study is to explore the use of matrix-assisted laser desorption/ionization tandem time-of-flight mass spectrometry (MALDI-TOF/TOF MS) to determine the identity of carcinogen-DNA adducts. Two of the known carcinogenic DNA adducts, namely N-(2'-deoxyguanosin-8-yl)-2-amino-1-methyl-6-phenyl-imidazo[4,5-b]pyridine (dG-C8-PhIP) and N-(2'-deoxyguanosin-8-yl)-4-aminobiphenyl (dG-C8-ABP), were selected as our models. In MALDI-TOF MS measurements, the small matrix ion and its cluster ions did not interfere with the measurements of both selected dG adducts. To achieve a higher accuracy for the characterization of the selected dG adducts, 1 keV collision energy in MALDI-TOF/TOF MS/MS was used to measure the adducts. In comparison to other MS/MS techniques with lower collision energies, more extensive precursor ion dissociations were observed. The detection of the corresponding fragment ions allowed the identities of guanine, PhIP or ABP, and the position of adduction to be confirmed. Some of the fragment ions of dG-C8-PhIP have not been reported by other MS/MS techniques.

  8. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    MedlinePlus

    Seasonal allergic rhinitis, known as hay fever, is ... hay fever symptoms, it is important to monitor pollen counts so you can limit your exposure on days ...

  9. [Study on Accurately Controlling Discharge Energy Method Used in External Defibrillator].

    PubMed

    Song, Biao; Wang, Jianfei; Jin, Lian; Wu, Xiaomei

    2016-01-01

    This paper introduces a new method for accurately controlling discharge energy. It is achieved by calculating the target voltage based on transthoracic impedance and by accurately controlling the charging voltage and discharge pulse width. A new defibrillator was designed and programmed using this method. The test results show that this method is valid and applicable to all kinds of external defibrillators.
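
    For a capacitive-discharge defibrillator the stored energy and charge voltage are related by E = ½CV², so the target charging voltage for a requested energy is V = sqrt(2E/C); the delivered dose is then trimmed via the pulse width against the measured transthoracic impedance. A minimal sketch of the voltage calculation (our illustration, with hypothetical parameter values):

        #include <math.h>
        #include <stdio.h>

        /* Charge voltage for a requested discharge energy, from E = 0.5*C*V^2.
         * The impedance-dependent pulse-width trim is only hinted at here. */
        static double charge_voltage(double energy_J, double capacitance_F)
        {
            return sqrt(2.0 * energy_J / capacitance_F);
        }

        int main(void)
        {
            const double C = 100e-6;   /* 100 uF capacitor -- hypothetical value */
            const double E = 150.0;    /* requested energy in joules */
            printf("charge to %.0f V for %.0f J\n", charge_voltage(E, C), E);
            return 0;                  /* prints ~1732 V */
        }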

  10. New Solar PV Tool Accurately Calculates Degradation Rates, Saving Money and Guiding Business Decisions

    Science.gov Websites

    News Release: New Solar PV Tool Accurately Calculates Degradation Rates, Saving Money and Guiding Business Decisions. "We spent years building consensus in ..." said Dirk Jordan, engineer and solar PV researcher at NREL.

  11. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
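
    The median-function simplification mentioned above can be made concrete: the classic minmod slope limiter is expressible as a median with zero, minmod(a, b) = median(0, a, b). A small C sketch of that identity (our illustration of the well-known relation):

        /* Median of three numbers. For slopes a and b, median3(0, a, b)
         * equals the classic minmod limiter: 0 when the signs disagree,
         * otherwise the smaller-magnitude slope. */
        static double median3(double a, double b, double c)
        {
            if ((a <= b && b <= c) || (c <= b && b <= a)) return b;
            if ((b <= a && a <= c) || (c <= a && a <= b)) return a;
            return c;
        }

        static double minmod(double a, double b)
        {
            return median3(0.0, a, b);
        }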

  12. Numerical modeling of exciton-polariton Bose-Einstein condensate in a microcavity

    NASA Astrophysics Data System (ADS)

    Voronych, Oksana; Buraczewski, Adam; Matuszewski, Michał; Stobińska, Magdalena

    2017-06-01

    within a semiconductor microcavity. It is described by a set of nonlinear differential equations similar in spirit to the Gross-Pitaevskii (GP) equation, but their unique properties do not allow standard GP solving frameworks to be utilized. Finding an accurate and efficient numerical algorithm as well as development of optimized numerical software is necessary for effective theoretical investigation of exciton-polaritons. Solution method: A Runge-Kutta method of 4th order was employed to solve the set of differential equations describing exciton-polariton superfluids. The method was fitted for the exciton-polariton equations and further optimized. The C++ programs utilize OpenMP extensions and vector operations in order to fully utilize the computer hardware. Running time: 6h for 100 ps evolution, depending on the values of parameters
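
    The solver structure described, a classical fourth-order Runge-Kutta step over a large state vector with OpenMP-parallel stage updates, can be sketched as follows (our illustration in C; the authors' code is C++ with vector extensions):

        #include <stddef.h>

        /* One classical RK4 step for y' = f(t, y) on an n-component state.
         * rhs() fills dy from (t, y); the stage combinations are
         * embarrassingly parallel and use OpenMP loops. */
        void rk4_step(void (*rhs)(double t, const double *y, double *dy, size_t n),
                      double t, double h, double *y, size_t n,
                      double *k1, double *k2, double *k3, double *k4, double *tmp)
        {
            size_t i;
            rhs(t, y, k1, n);
            #pragma omp parallel for
            for (i = 0; i < n; ++i) tmp[i] = y[i] + 0.5 * h * k1[i];
            rhs(t + 0.5 * h, tmp, k2, n);
            #pragma omp parallel for
            for (i = 0; i < n; ++i) tmp[i] = y[i] + 0.5 * h * k2[i];
            rhs(t + 0.5 * h, tmp, k3, n);
            #pragma omp parallel for
            for (i = 0; i < n; ++i) tmp[i] = y[i] + h * k3[i];
            rhs(t + h, tmp, k4, n);
            #pragma omp parallel for
            for (i = 0; i < n; ++i)
                y[i] += h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
        }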

  13. Discrete sensors distribution for accurate plantar pressure analyses.

    PubMed

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the most accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC=0.95; AP SCC=0.99; vGRF SCC=0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback or any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
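
    The barycenter of pressures from discrete sensors is simply a pressure-weighted mean of the known sensor coordinates. A minimal sketch (our illustration, with hypothetical coordinate arrays):

        #include <stddef.h>

        /* Barycenter of pressures (BoP): pressure-weighted mean of the
         * sensor positions. x[], y[] are sensor coordinates, p[] pressures. */
        void barycenter(const double *x, const double *y, const double *p,
                        size_t n, double *bx, double *by)
        {
            double sum = 0.0, sx = 0.0, sy = 0.0;
            for (size_t i = 0; i < n; ++i) {
                sum += p[i];
                sx  += p[i] * x[i];
                sy  += p[i] * y[i];
            }
            *bx = (sum > 0.0) ? sx / sum : 0.0;
            *by = (sum > 0.0) ? sy / sum : 0.0;
        }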

  14. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone unless a limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental measurements and discuss the sensitivity of our model to its parameters.
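
    As a hedged illustration of the upwind building block named above, the Python sketch below implements a Roe-type numerical flux for a scalar conservation law u_t + f(u)_x = 0; the paper's solver operates on the one-dimensional vessel-flow system rather than this scalar analogue, and the Burgers example is ours.

        def roe_flux(u_left, u_right, f, df):
            # Roe-averaged wave speed; fall back to f'(u) for equal states.
            if u_left == u_right:
                a = df(u_left)
            else:
                a = (f(u_right) - f(u_left)) / (u_right - u_left)
            # Central flux plus upwind dissipation scaled by |a|.
            return 0.5 * (f(u_left) + f(u_right)) - 0.5 * abs(a) * (u_right - u_left)

        # Example with the Burgers flux f(u) = u^2 / 2; the wave moves right,
        # so the flux equals f(u_left) = 2.0.
        print(roe_flux(2.0, 1.0, lambda u: 0.5 * u * u, lambda u: u))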

  15. New experimental methodology, setup and LabView program for accurate absolute thermoelectric power and electrical resistivity measurements between 25 and 1600 K: application to pure copper, platinum, tungsten, and nickel at very high temperatures.

    PubMed

    Abadlia, L; Gasser, F; Khalouk, K; Mayoufi, M; Gasser, J G

    2014-09-01

    In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in the present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce with good accuracy the thermal conductivity using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new, very accurate results for pure copper, platinum, and nickel, especially at very high temperatures. Resistivity and absolute thermopower measurements can also be more than an objective in themselves. Resistivity characterizes the bulk of a material, while absolute thermoelectric power characterizes the material at the point where the electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that measurements of resistivity and absolute thermoelectric power advantageously characterize phase changes, probably as well as DSC (if not better), since phase changes can easily be followed over several hours or days at constant temperature.
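
    For orientation, the classical Wiedemann-Franz estimate that the authors go beyond fits in a few lines of Python: kappa = L0 * T / rho, with L0 the Sommerfeld value of the Lorenz number. The copper numbers below are textbook values used as a sanity check, not results from this paper.

        L0 = 2.44e-8  # Sommerfeld Lorenz number, W*Ohm/K^2

        def kappa_wiedemann_franz(resistivity_ohm_m, temperature_k):
            # Electronic thermal conductivity from resistivity, in W/(m*K).
            return L0 * temperature_k / resistivity_ohm_m

        # Copper near room temperature: rho ~ 1.7e-8 Ohm*m gives ~ 420 W/(m*K).
        print(kappa_wiedemann_franz(1.7e-8, 293.0))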

  16. New experimental methodology, setup and LabView program for accurate absolute thermoelectric power and electrical resistivity measurements between 25 and 1600 K: Application to pure copper, platinum, tungsten, and nickel at very high temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abadlia, L.; Mayoufi, M.; Gasser, F.

    2014-09-15

    In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in the present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce with good accuracy the thermal conductivity using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new, very accurate results for pure copper, platinum, and nickel, especially at very high temperatures. Resistivity and absolute thermopower measurements can also be more than an objective in themselves. Resistivity characterizes the bulk of a material, while absolute thermoelectric power characterizes the material at the point where the electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that measurements of resistivity and absolute thermoelectric power advantageously characterize phase changes, probably as well as DSC (if not better), since phase changes can easily be followed over several hours or days at constant temperature.

  17. Automated clustering-based workload characterization

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Menasce, Daniel A.; Yesha, Yelena

    1996-01-01

    The demands placed on the mass storage systems at various federal agencies and national laboratories are continuously increasing in intensity. This forces system managers to constantly monitor the system, evaluate the demand placed on it, and tune it appropriately, using either heuristics based on experience or analytic models. Performance models require an accurate workload characterization, which can be a laborious and time-consuming process. It became evident from our experience that a tool is necessary to automate the workload characterization process. This paper presents the design and discusses the implementation of a tool for workload characterization of mass storage systems. The main features of the tool discussed here are: (1) Automatic support for peak-period determination. Histograms of system activity are generated and presented to the user for peak-period determination; (2) Automatic clustering analysis. The data collected from the mass storage system logs is clustered using clustering algorithms and tightness measures to limit the number of generated clusters; (3) Reporting of varied file statistics. The tool computes several statistics on file sizes such as average, standard deviation, minimum, maximum, and frequency, as well as average transfer time. These statistics are given on a per-cluster basis; (4) Portability. The tool can easily be used to characterize the workload in mass storage systems of different vendors. The user needs to specify, through a simple log description language, how a specific log should be interpreted. The rest of this paper is organized as follows. Section two presents basic concepts in workload characterization as they apply to mass storage systems. Section three describes clustering algorithms and tightness measures. The following section presents the architecture of the tool. Section five presents some results of workload characterization using the tool. Finally, section six presents some concluding remarks.
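
    A minimal Python sketch of steps (2) and (3) above might look as follows; the two features (file size and transfer time), the cluster count, and the records themselves are illustrative assumptions, with scikit-learn's KMeans standing in for the tool's clustering algorithms and tightness measures.

        import numpy as np
        from sklearn.cluster import KMeans

        # Each row is one log record: [file size in bytes, transfer time in s].
        records = np.array([[1.2e6, 0.8], [1.1e6, 0.7], [1.4e6, 0.9],
                            [9.0e8, 41.0], [8.7e8, 39.5], [9.3e8, 43.2]])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records)

        # Per-cluster file-size statistics, as in the tool's reporting step.
        for c in range(2):
            sizes = records[labels == c, 0]
            print(c, len(sizes), sizes.mean(), sizes.std(), sizes.min(), sizes.max())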

  18. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, A; Bouchard, H

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to achieve submillimetric precision in proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
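
    The principal-component extraction at the heart of the method can be sketched in a few lines of Python. The 43-tissue reference set follows the abstract, but the 12 elemental fractions, the random data, and all names are illustrative placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        reference = rng.random((43, 12))       # 43 tissues x 12 elemental fractions
        mean = reference.mean(axis=0)

        # Principal components ("virtual materials") via SVD of centered data.
        _, _, vt = np.linalg.svd(reference - mean, full_matrices=False)
        pc = vt[:2]                            # keep two components, as above

        # A tissue is characterized by its fraction of each PC, and can be
        # approximately reconstructed from those two fractions alone.
        tissue = reference[0]
        fractions = (tissue - mean) @ pc.T
        approximation = mean + fractions @ pc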

  19. Accurately Characterizing the Importance of Wave-Particle Interactions in Radiation Belt Dynamics: The Pitfalls of Statistical Wave Representations

    NASA Technical Reports Server (NTRS)

    Murphy, Kyle R.; Mann, Ian R.; Rae, I. Jonathan; Sibeck, David G.; Watt, Clare E. J.

    2016-01-01

    Wave-particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, it is assessed during geomagnetic storms by driving advanced radiation belt simulations with statistically averaged empirical wave models parameterized by geomagnetic activity. However, statistical averages poorly characterize extreme events such as geomagnetic storms: storm-time ultralow frequency wave power is typically larger than that derived over a solar cycle, and Kp is a poor proxy for storm-time wave power.

  20. Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network

    NASA Astrophysics Data System (ADS)

    Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang

    Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made it possible to understand neurological disorders at a whole-brain connectivity level. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than a simple description with any single physiological parameter.
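
    A hedged Python sketch of such a pipeline is given below: per-ROI statistics are stacked into one feature vector per subject, a univariate filter sieves the most discriminant features, and a linear SVM is cross-validated. The subject count, feature dimensions, and selector are synthetic placeholders, not the paper's data or exact feature-selection scheme.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        features = rng.normal(size=(40, 540))   # 40 subjects, 6 networks x 90 ROIs
        labels = rng.integers(0, 2, size=40)    # MCI vs. normal control (synthetic)

        clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
        print(cross_val_score(clf, features, labels, cv=5).mean())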

  1. Mechanism for accurate, protein-assisted DNA annealing by Deinococcus radiodurans DdrB

    PubMed Central

    Sugiman-Marangos, Seiji N.; Weiss, Yoni M.; Junop, Murray S.

    2016-01-01

    Accurate pairing of DNA strands is essential for repair of DNA double-strand breaks (DSBs). How cells achieve accurate annealing when large regions of single-strand DNA are unpaired has remained unclear despite many efforts focused on understanding proteins, which mediate this process. Here we report the crystal structure of a single-strand annealing protein [DdrB (DNA damage response B)] in complex with a partially annealed DNA intermediate to 2.2 Å. This structure and supporting biochemical data reveal a mechanism for accurate annealing involving DdrB-mediated proofreading of strand complementarity. DdrB promotes high-fidelity annealing by constraining specific bases from unauthorized association and only releases annealed duplex when bound strands are fully complementary. To our knowledge, this mechanism provides the first understanding for how cells achieve accurate, protein-assisted strand annealing under biological conditions that would otherwise favor misannealing. PMID:27044084

  2. Manufacturing and advanced characterization of sub-25nm diameter CD-AFM probes with sub-10nm tip edges radius

    NASA Astrophysics Data System (ADS)

    Foucher, Johann; Filippov, Pavel; Penzkofer, Christian; Irmer, Bernd; Schmidt, Sebastian W.

    2013-04-01

    Atomic force microscopy (AFM) is increasingly used in the semiconductor industry as a versatile monitoring tool for highly critical lithography and etching process steps. Applications range from the inspection of the surface roughness of new materials, over accurate depth measurements, to the determination of critical dimension structures. The aim of addressing the rapidly growing demands on measurement uncertainty and throughput increasingly shifts the focus of attention to the AFM tip, which represents the crucial link between the AFM tool and the sample to be monitored. Consequently, in order to reach the AFM tool's full potential, the performance of the AFM tip has to be considered a determining parameter. Currently available AFM tips made from silicon are generally limited by their diameter, radius, and sharpness, considerably restricting AFM measurement capabilities on sub-30nm spaces. In addition, there is a lack of adequate characterization structures for accurately characterizing sub-25nm tip diameters. Here, we present and discuss a recently introduced AFM tip design (T-shape like design) with precise tip diameters down to 15nm and tip radii down to 5nm, fabricated from amorphous, high-density diamond-like carbon (HDC/DLC) using electron beam induced processing (EBIP). In addition to that advanced design, we propose a new characterizer structure, which allows for accurate characterization and design control of sub-25nm tip diameters and sub-10nm tip edge radii. We demonstrate the potential advantages of combining a small tip shape design, i.e. tip diameter and tip edge radius, and an advanced tip characterizer for the semiconductor industry by the measurement of advanced lithography patterns.

  3. Monitoring circuit accurately measures movement of solenoid valve

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1966-01-01

    A monitoring circuit accurately measures the travel of a solenoid-operated valve in a control system powered by direct current. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.

  4. ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.

    PubMed

    Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P

    2016-11-01

    ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. We investigated whether these scores predict stroke outcome more accurately than physicians with an interest in stroke. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence interval of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate, compared with 609 (86.5%) of ASTRAL score estimates (P < 0.0001). 394 (61.2%) of physicians' estimates about the percentage probability of post-thrombolysis symptomatic intracranial haemorrhage were accurate, compared with 583 (90.5%) of SEDAN score estimates (P < 0.0001). 160 (24.8%) of physicians' estimates about the post-thrombolysis 3-month percentage probability of mRS 0-2 were accurate, compared with 240 (37.3%) of DRAGON score estimates (P < 0.0001). 260 (40.4%) of physicians' estimates about the percentage probability of post-thrombolysis mRS 5-6 were accurate, compared with 518 (80.4%) of DRAGON score estimates (P < 0.0001). ASTRAL, DRAGON and SEDAN scores predict the outcome of acute ischaemic stroke patients with higher accuracy than physicians interested in stroke. © 2016 EAN.

  5. Ultrasonic characterization of solid liquid suspensions

    DOEpatents

    Panetta, Paul D.

    2010-06-22

    Using an ultrasonic field, properties of a solid liquid suspension such as through-transmission attenuation, backscattering, and diffuse field are measured. These properties are converted to quantities indicating the strength of different loss mechanisms (such as absorption, single scattering and multiple scattering) among particles in the suspension. Such separation of the loss mechanisms can allow for direct comparison of the attenuating effects of the mechanisms. These comparisons can also indicate a model most likely to accurately characterize the suspension and can aid in determination of properties such as particle size, concentration, and density of the suspension.

  6. Advances in molecular imaging for breast cancer detection and characterization

    PubMed Central

    2012-01-01

    Advances in our ability to assay molecular processes, including gene expression, protein expression, and molecular and cellular biochemistry, have fueled advances in our understanding of breast cancer biology and have led to the identification of new treatments for patients with breast cancer. The ability to measure biologic processes without perturbing them in vivo allows the opportunity to better characterize tumor biology and to assess how biologic and cytotoxic therapies alter critical pathways of tumor response and resistance. By accurately characterizing tumor properties and biologic processes, molecular imaging plays an increasing role in breast cancer science, clinical care in diagnosis and staging, assessment of therapeutic targets, and evaluation of responses to therapies. This review describes the current role and potential of molecular imaging modalities for detection and characterization of breast cancer and focuses primarily on radionuclide-based methods. PMID:22423895

  7. Accurate joint space quantification in knee osteoarthritis: a digital x-ray tomosynthesis phantom study

    NASA Astrophysics Data System (ADS)

    Sewell, Tanzania S.; Piacsek, Kelly L.; Heckel, Beth A.; Sabol, John M.

    2011-03-01

    The current imaging standard for diagnosis and monitoring of knee osteoarthritis (OA) is projection radiography. However, radiographs may be insensitive to markers of early disease such as osteophytes and joint space narrowing (JSN). Relative to standard radiography, digital X-ray tomosynthesis (DTS) may provide improved visualization of the markers of knee OA without the interference of superimposed anatomy. DTS utilizes a series of low-dose projection images over an arc of +/-20 degrees to reconstruct tomographic images parallel to the detector. We propose that DTS can increase accuracy and precision in JSN quantification. The geometric accuracy of DTS was characterized by quantifying joint space width (JSW) as a function of knee flexion and position using physical and anthropomorphic phantoms. Using a commercially available digital X-ray system, projection and DTS images were acquired for a Lucite rod phantom with known gaps at various source-to-object distances and angles of flexion. Gap width, representative of JSW, was measured using a validated algorithm. Over an object-to-detector distance range of 5-21cm, a 3.0mm gap width was reproducibly measured in the DTS images, independent of magnification. A simulated 0.50mm (+/-0.13) JSN was quantified accurately (95% CI 0.44-0.56mm) in the DTS images. When the rods were angled to represent knee flexion, the minimum gap could still be precisely determined from the DTS images, independent of flexion angle. JSN quantification using DTS was insensitive to distance from the patient barrier and to flexion angle. Potential exists for the optimization of DTS for accurate radiographic quantification of knee OA independent of patient positioning.

  8. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model, built for fast calculation, is commonly used. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), a deep learning technique. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
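
    As a sketch of what such a model might look like, the PyTorch snippet below maps an aerial-image patch to a single predicted resist response; the layer sizes, the 64x64 patch shape, and the scalar regression target are our assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),        # predicted resist response for the patch
        )

        patch = torch.randn(8, 1, 64, 64)   # a batch of simulated image patches
        prediction = model(patch)           # shape: (8, 1)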

  9. Radioactive Waste Characterization Strategies; Comparisons Between AK/PK, Dose to Curie Modeling, Gamma Spectroscopy, and Laboratory Analysis Methods- 12194

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singledecker, Steven J.; Jones, Scotty W.; Dorries, Alison M.

    2012-07-01

    In the coming fiscal years of potentially declining budgets, Department of Energy facilities such as the Los Alamos National Laboratory (LANL) will be looking to reduce the cost of radioactive waste characterization, management, and disposal processes. At the core of this cost reduction process will be choosing the most cost-effective, efficient, and accurate methods of radioactive waste characterization. Central to every radioactive waste management program is an effective and accurate waste characterization program. Choosing between methods can determine what is classified as low level radioactive waste (LLRW), transuranic waste (TRU), waste that can be disposed of under an Authorized Release Limit (ARL), industrial waste, and waste that can be disposed of in municipal landfills. The cost benefits of an accurate radioactive waste characterization program cannot be overstated. In addition, inaccurate characterization of radioactive waste can result in its incorrect classification, leading to higher disposal costs, Department of Transportation (DOT) violations, Notices of Violation (NOVs) from Federal and State regulatory agencies, waste rejection from disposal facilities, loss of operational capabilities, and loss of disposal options. Any one of these events could result in the program that mischaracterized the waste losing its ability to perform its primary operational mission. Generators that produce radioactive waste have four characterization strategies at their disposal: - Acceptable Knowledge/Process Knowledge (AK/PK); - Indirect characterization using a software application or other dose-to-curie methodologies; - Non-Destructive Analysis (NDA) tools such as gamma spectroscopy; - Direct sampling (e.g. grab samples or Surface Contaminated Object smears) and laboratory analysis. Each method has specific advantages and disadvantages. This paper evaluates each method, detailing those advantages and disadvantages.

  10. 48 CFR 552.215-72 - Price Adjustment-Failure To Provide Accurate Information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 Federal Acquisition Regulations System; Provisions and Clauses; 552.215-72 Price Adjustment—Failure To Provide Accurate Information. As prescribed in 515.408(d), insert the following clause: Price Adjustment—Failure To Provide Accurate Information (AUG...

  11. Characterization of Proxy Application Performance on Advanced Architectures. UMT2013, MCB, AMG2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, Louis H.; Gunney, Brian T.; Bhatele, Abhinav

    2015-10-09

    Three codes were tested at LLNL as part of a Tri-Lab effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. Teams from Sandia and Los Alamos tested proxy apps of their own. The focus in this report is on the LLNL codes UMT2013, MCB, and AMG2013. We present weak and strong MPI scaling results and studies of OpenMP efficiency on a large BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Results from three more speculative tests are also included: one that exploits NVRAM as extended memory, one that studies performance under a power bound, and one that illustrates the effects of changing the torus network mapping on BG/Q.
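
    The bookkeeping behind such scaling studies is simple enough to state as code. The Python sketch below computes strong- and weak-scaling parallel efficiency from runtimes; the timings are made-up numbers, not the report's measurements.

        def strong_scaling_efficiency(t_base, n_base, t, n):
            # Fixed total problem size: ideal runtime falls as 1/n.
            return (t_base * n_base) / (t * n)

        def weak_scaling_efficiency(t_base, t):
            # Fixed work per rank: ideal runtime stays constant.
            return t_base / t

        runs = {16: 100.0, 32: 52.0, 64: 28.0}   # ranks -> seconds (illustrative)
        for n, t in runs.items():
            print(n, strong_scaling_efficiency(runs[16], 16, t, n))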

  12. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

    In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulating nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) over a wide range of projectile velocities.

  13. No galaxy left behind: accurate measurements with the faintest objects in the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Suchyta, E.; Huff, E. M.; Aleksić, J.; Melchior, P.; Jouvel, S.; MacCrann, N.; Ross, A. J.; Crocce, M.; Gaztanaga, E.; Honscheid, K.; Leistedt, B.; Peiris, H. V.; Rykoff, E. S.; Sheldon, E.; Abbott, T.; Abdalla, F. B.; Allam, S.; Banerji, M.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; James, D. J.; Jarvis, M.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Miller, C. J.; Miquel, R.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Percival, W. J.; Reil, K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.; Zhang, Y.; DES Collaboration

    2016-03-01

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of detectable stars or galaxies. We have implemented our proposal in BALROG, software which embeds fake objects in real imaging to accurately characterize measurement biases. We demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for their sensitivity to depth and seeing variations. Using the synthetic galaxies as randoms in the Landy-Szalay estimator suppresses the effects of variable survey selection by at least two orders of magnitude. With this correction, our measured angular clustering is found to be in excellent agreement with that of a matched sample from much deeper, higher resolution space-based Cosmological Evolution Survey (COSMOS) imaging; over angular scales of 0.004° < θ < 0.2°, we find a best-fitting scaling amplitude between the DES and COSMOS measurements of 1.00 ± 0.09. We expect this methodology to be broadly useful for extending measurements' statistical reach in a variety of upcoming imaging surveys.
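
    For reference, the Landy-Szalay estimator combines data-data (DD), data-random (DR), and random-random (RR) pair counts in each angular bin as w(theta) = (DD - 2 DR + RR) / RR, with the counts normalized by the number of possible pairs; here the Balrog synthetic galaxies play the role of the randoms. The Python sketch below uses made-up counts purely for illustration.

        def landy_szalay(dd, dr, rr, n_data, n_rand):
            # Normalize raw pair counts by the number of possible pairs.
            ddn = dd / (n_data * (n_data - 1) / 2.0)
            drn = dr / (n_data * n_rand)
            rrn = rr / (n_rand * (n_rand - 1) / 2.0)
            return (ddn - 2.0 * drn + rrn) / rrn

        # One angular bin with illustrative counts:
        print(landy_szalay(dd=1500.0, dr=9.0e4, rr=1.4e6, n_data=1000, n_rand=30000))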

  14. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  15. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.

  16. Opto-electronic characterization of third-generation solar cells

    PubMed Central

    Jenatsch, Sandra

    2018-01-01

    Abstract We present an overview of opto-electronic characterization techniques for solar cells including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified. PMID:29707069

  17. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.

  18. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-03-14

    The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.

  19. Characterizing and mapping forest fire fuels using ASTER imagery and gradient modeling

    Treesearch

    Michael J. Falkowski; Paul E. Gessler; Penelope Morgan; Andrew T. Hudak; Alistair M. S. Smith

    2005-01-01

    Land managers need cost-effective methods for mapping and characterizing forest fuels quickly and accurately. The launch of satellite sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the advanced spaceborne thermal emission and...

  20. Current characterization methods for cellulose nanomaterials.

    PubMed

    Foster, E Johan; Moon, Robert J; Agarwal, Umesh P; Bortner, Michael J; Bras, Julien; Camarero-Espinosa, Sandra; Chan, Kathleen J; Clift, Martin J D; Cranston, Emily D; Eichhorn, Stephen J; Fox, Douglas M; Hamad, Wadood Y; Heux, Laurent; Jean, Bruno; Korey, Matthew; Nieh, World; Ong, Kimberly J; Reid, Michael S; Renneckar, Scott; Roberts, Rose; Shatkin, Jo Anne; Simonsen, John; Stinson-Bagby, Kelly; Wanasekara, Nandula; Youngblood, Jeff

    2018-04-23

    A new family of materials comprised of cellulose, cellulose nanomaterials (CNMs), having properties and functionalities distinct from molecular cellulose and wood pulp, is being developed for applications that were once thought impossible for cellulosic materials. Commercialization, paralleled by research in this field, is fueled by the unique combination of characteristics, such as high on-axis stiffness, sustainability, scalability, and mechanical reinforcement of a wide variety of materials, leading to their utility across a broad spectrum of high-performance material applications. However, with this exponential growth in interest/activity, the development of measurement protocols necessary for consistent, reliable and accurate materials characterization has been outpaced. These protocols, developed in the broader research community, are critical for the advancement in understanding, process optimization, and utilization of CNMs in materials development. This review establishes detailed best practices, methods and techniques for characterizing CNM particle morphology, surface chemistry, surface charge, purity, crystallinity, rheological properties, mechanical properties, and toxicity for two distinct forms of CNMs: cellulose nanocrystals and cellulose nanofibrils.

  1. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    DOE PAGES

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...

    2016-07-05

    Here we present a characterization of short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof for this formula and empirically show that its predictions are accurate. Previous work applies only to special cases of balanced families; it has been observed that those characterizations fail for unbalanced families, yet such families are widespread in real biological networks.
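
    As a hedged illustration, the classical balanced special case that this work generalizes reduces to a one-line criterion: with K inputs per node and transfer functions that output 1 with probability p, damage spreads (short-term chaos) when the average sensitivity 2p(1-p)K exceeds 1. The paper's formula covers arbitrary transfer-function distributions, which this Python sketch does not.

        def short_term_chaotic(p, k):
            # Annealed-approximation criterion for the balanced/biased case.
            return 2.0 * p * (1.0 - p) * k > 1.0

        print(short_term_chaotic(0.5, 2))   # 2 * 0.25 * 2 = 1.0, critical -> False
        print(short_term_chaotic(0.5, 3))   # 1.5 > 1 -> True: damage spreads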

  2. Characterizing human activity induced impulse and slip-pulse excitations through structural vibration

    NASA Astrophysics Data System (ADS)

    Pan, Shijia; Mirshekari, Mostafa; Fagert, Jonathon; Ramirez, Ceferino Gabriel; Chung, Albert Jin; Hu, Chih Chi; Shen, John Paul; Zhang, Pei; Noh, Hae Young

    2018-02-01

    Many human activities induce excitations on ambient structures with various objects, causing the structures to vibrate. Accurate vibration excitation source detection and characterization enable human activity information inference, hence allowing human activity monitoring for various smart building applications. By utilizing structural vibrations, we can achieve sparse and non-intrusive sensing, unlike pressure- and vision-based methods. Many approaches to vibration-based source characterization have been presented, but they often either focus on one excitation type or have limited performance due to the dispersion and attenuation effects of the structures. In this paper, we present our method to characterize the two main types of excitations induced by human activities (impulse and slip-pulse) on multiple structures. By understanding the physical properties of waves and their propagation, the system can achieve accurate excitation tracking on different structures without large-scale labeled training data. Specifically, our algorithm takes into account properties of the surface waves generated by impulses and of the body waves generated by slip-pulses to handle the dispersion and attenuation effects when different types of excitations occur on various structures. We then evaluate the algorithm through multiple scenarios. Our method achieves up to a six-fold improvement in impulse localization accuracy and a three-fold improvement in slip-pulse trajectory length estimation compared to existing methods that do not take wave properties into account.

  3. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  4. Can cancer researchers accurately judge whether preclinical reports will reproduce?

    PubMed Central

    Mandel, David R.; Kimmelman, Jonathan

    2017-01-01

    There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies. PMID:28662052

  5. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the end points of profiles normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in a LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. To estimate the PSF precisely, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least-squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
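
    The line-extraction stage can be sketched with OpenCV's Canny detector and standard Hough transform, as in the Python snippet below; the thresholds and the file name are illustrative placeholders, and the PSF-estimation steps themselves are omitted.

        import cv2
        import numpy as np

        gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        edges = cv2.Canny(gray, 50, 150)
        # rho resolution 1 px, theta resolution 1 degree, vote threshold 100.
        lines = cv2.HoughLines(edges, 1, np.pi / 180.0, 100)

        # Each (rho, theta) line then anchors intensity profiles sampled
        # perpendicular to it, from which the 1-D PSF kernel is estimated.
        if lines is not None:
            for rho, theta in lines[:, 0]:
                print(rho, theta)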

  6. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS (and WFPC2 parallel) observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical (or "einstein") timescale of each microlensing event, rather than an effective ("FWHM") timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the (unknown) lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo (for the same number of microlensing events) due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database (about 350 nights). For the whole survey (and a delta-function mass distribution) the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  7. Ocean outfall plume characterization using an Autonomous Underwater Vehicle.

    PubMed

    Rogowski, Peter; Terrill, Eric; Otero, Mark; Hazard, Lisa; Middleton, William

    2013-01-01

    A monitoring mission to map and characterize the Point Loma Ocean Outfall (PLOO) wastewater plume using an Autonomous Underwater Vehicle (AUV) was performed on 3 March 2011. The mobility of an AUV provides a significant advantage in surveying discharge plumes over traditional cast-based methods, and when combined with optical and oceanographic sensors, provides a capability for both detecting plumes and assessing their mixing in the near and far-fields. Unique to this study is the measurement of Colored Dissolved Organic Matter (CDOM) in the discharge plume and its application for quantitative estimates of the plume's dilution. AUV mission planning methodologies for discharge plume sampling, plume characterization using onboard optical sensors, and comparison of observational data to model results are presented. The results suggest that even under variable oceanic conditions, properly planned missions for AUVs equipped with an optical CDOM sensor in addition to traditional oceanographic sensors, can accurately characterize and track ocean outfall plumes at higher resolutions than cast-based techniques.

  8. Medipix2 as a tool for proton beam characterization

    NASA Astrophysics Data System (ADS)

    Bisogni, M. G.; Cirrone, G. A. P.; Cuttone, G.; Del Guerra, A.; Lojacono, P.; Piliero, M. A.; Romano, F.; Rosso, V.; Sipala, V.; Stefanini, A.

    2009-08-01

    Proton therapy is a technique used to deliver a highly accurate and effective dose for the treatment of a variety of tumor diseases. The possibility of having an instrument able to give online information could reduce the time necessary to characterize the proton beam. To this aim, we propose a detection system for online proton beam characterization based on the Medipix2 chip. Medipix2 is a detection system based on a single-event-counting read-out chip, bump-bonded to a silicon pixel detector. The read-out chip is a matrix of 256×256 cells, 55×55 μm² each. To demonstrate the capabilities of Medipix2 as a proton detector, we have used the 62 MeV proton beam at the CATANA beam line of the LNS-INFN laboratory. The measurements performed confirmed the good imaging performance of the Medipix2 system, also for the characterization of proton beams.

  9. Inflatable bladder provides accurate calibration of pressure switch

    NASA Technical Reports Server (NTRS)

    Smith, N. J.

    1965-01-01

    Calibration of a pressure switch is accurately checked by a thin-walled circular bladder. It is placed in the pressure switch and applies force to the switch diaphragm when expanded by an external pressure source. The disturbance to the normal operation of the switch is minimal.

  10. Transverse Tension Fatigue Life Characterization Through Flexure Testing of Composite Materials

    NASA Technical Reports Server (NTRS)

    OBrien, T. Kevin; Chawan, Arun D.; Krueger, Ronald; Paris, Isabelle

    2001-01-01

    The transverse tension fatigue life of S2/8552 glass-epoxy and IM7/8552 carbon-epoxy was characterized using flexure tests of 90-degree laminates loaded in 3-point and 4-point bending. The influence of specimen polishing and specimen configuration on transverse tension fatigue life was examined using the glass-epoxy laminates. Results showed that 90-degree bend specimens with polished machined edges and polished tension-side surfaces, where bending failures were observed, had lower fatigue lives than unpolished specimens when cyclically loaded at equal stress levels. The influence of specimen thickness and the utility of a Weibull scaling law were examined using the carbon-epoxy laminates. The influence of test frequency on fatigue results was also documented for the 4-point bending configuration. A Weibull scaling law was used to predict the 4-point bending fatigue lives from the 3-point bending curve fit and vice versa. Scaling was performed based on maximum cyclic stress level as well as on fatigue life. The scaling laws based on stress level shifted the curve-fit S-N characterizations in the desired direction; however, the magnitude of the shift was not adequate to accurately predict the fatigue lives. Furthermore, the scaling law based on fatigue life shifted the curve-fit S-N characterizations in the opposite direction from measured values. Therefore, these scaling laws were not adequate for obtaining accurate predictions of the transverse tension fatigue lives.
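
    For concreteness, a Weibull size-effect scaling of the kind applied here relates equal-probability failure stresses through the stressed volumes (or areas), sigma_1 / sigma_2 = (V_2 / V_1)^(1/m), where m is the Weibull modulus. The Python sketch below uses an illustrative modulus and volume ratio, not values fitted from these tests.

        def weibull_scaled_stress(sigma_ref, v_ref, v_new, modulus):
            # Stress at equal failure probability for a new stressed volume.
            return sigma_ref * (v_ref / v_new) ** (1.0 / modulus)

        # Doubling the stressed volume lowers the allowable stress slightly
        # for a typical modulus of 10 (illustrative numbers).
        print(weibull_scaled_stress(sigma_ref=60.0, v_ref=1.0, v_new=2.0, modulus=10.0))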

  11. Accurate structure, thermodynamics and spectroscopy of medium-sized radicals by hybrid Coupled Cluster/Density Functional Theory approaches: the case of phenyl radical

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Egidi, Franco; Puzzarini, Cristina

    2015-01-01

    The CCSD(T) model coupled with extrapolation to the complete basis-set limit and additive approaches represents the “golden standard” for the structural and spectroscopic characterization of building blocks of biomolecules and nanosystems. However, when open-shell systems are considered, additional problems related to both specific computational difficulties and the need of obtaining spin-dependent properties appear. In this contribution, we present a comprehensive study of the molecular structure and spectroscopic (IR, Raman, EPR) properties of the phenyl radical with the aim of validating an accurate computational protocol able to deal with conjugated open-shell species. We succeeded in obtaining reliable and accurate results, thus confirming and, partly, extending the available experimental data. The main issue to be pointed out is the need of going beyond the CCSD(T) level by including a full treatment of triple excitations in order to fulfil the accuracy requirements. On the other hand, the reliability of density functional theory in properly treating open-shell systems has been further confirmed. PMID:23802956

  12. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    NASA Astrophysics Data System (ADS)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error in computing the exchange field to be reduced by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.

  13. Treating knee pain: history taking and accurate diagnoses.

    PubMed

    Barratt, Julian

    2010-07-01

    Prompt and effective diagnosis and treatment for common knee problems depend on practitioners' ability to distinguish between traumatic and inflammatory knee conditions. This article aims to enable practitioners to make accurate assessments, carry out knee examinations and undertake selected special tests as necessary before discharging or referring patients.

  14. Foresight begins with FMEA. Delivering accurate risk assessments.

    PubMed

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  15. Automatical and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI, and newer technologies such as optical imaging used to obtain functional images. The fusion process requires precisely extracted structural information in order to register the functional image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then used a convolutional neural network to perform automatic segmentation of the images in a deep-learning manner. This approach greatly reduced processing time compared to manual and semi-automatic segmentation, and it improves speed and accuracy as more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for precise, personalized medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
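
    A minimal sketch of the classical enhancement/morphometry stage described above, assuming scikit-image is available (thresholds and structuring-element sizes are illustrative; the paper's actual pipeline and its CNN stage are not reproduced here):

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    def extract_tissue_contours(slice_2d: np.ndarray) -> list:
        """Threshold a head-image slice and trace tissue boundaries.

        A crude stand-in for the enhancement/morphometry stage in the
        abstract; real pipelines tune this per tissue class (skull, CSF,
        GM, WM) rather than using one global threshold.
        """
        # Otsu's threshold separates tissue from background.
        mask = slice_2d > filters.threshold_otsu(slice_2d)
        # Morphological closing fills small gaps in the tissue mask.
        mask = morphology.binary_closing(mask, morphology.disk(3))
        # Trace boundary polygons along the mask edge.
        return measure.find_contours(mask.astype(float), level=0.5)

    contours = extract_tissue_contours(np.random.rand(128, 128))
    print(f"{len(contours)} contour(s) found")
    ```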

  16. DebriSat Fragment Characterization System and Processing Status

    NASA Technical Reports Server (NTRS)

    Rivero, M.; Shiotani, B.; Carrasquilla, M.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.

    2016-01-01

    The DebriSat project is a continuing effort sponsored by NASA and the DoD to update existing break-up models using data obtained from hypervelocity impact tests performed to simulate on-orbit collisions. After the impact tests, a team at the University of Florida has been working to characterize the fragments in terms of their mass, size, shape, color, and material content. The focus of the post-impact effort has been the collection of 2 mm and larger fragments resulting from the hypervelocity impact test. To date, in excess of 125K fragments have been recovered, approximately 40K more than the 85K fragments predicted by the existing models. While the fragment collection activities continue, there has been a transition to the characterization of the recovered fragments. Since the start of the characterization effort, the focus has been on the use of automation to (i) expedite the fragment characterization process and (ii) minimize the effects of human subjectivity on the results; e.g., automated data entry processes were developed and implemented to minimize errors during transcription of the measurement data. At all steps of the process, however, there is human oversight to ensure the integrity of the data. Additionally, repeatability and reproducibility tests have been developed and implemented to ensure that the instruments used in the characterization process are accurate and properly calibrated.

  17. Non-contact method for characterization of small size thermoelectric modules.

    PubMed

    Manno, Michael; Yang, Bao; Bar-Cohen, Avram

    2015-08-01

    Conventional techniques for characterization of thermoelectric performance require bringing measurement equipment into direct contact with the thermoelectric device, which is increasingly error prone as device size decreases. Therefore, the novel work presented here describes a non-contact technique capable of accurately measuring the maximum ΔT and maximum heat pumping of mini- to micro-sized thin film thermoelectric coolers. The non-contact characterization method eliminates the measurement errors associated with using thermocouples and traditional heat flux sensors to test small samples and large heat fluxes. Using the non-contact approach, an infrared camera, rather than thermocouples, measures the temperature of the hot and cold sides of the device to determine the device ΔT, and a laser is used to heat the cold side of the thermoelectric module to characterize its heat pumping capacity. As a demonstration of the general applicability of the non-contact characterization technique, testing of a thin film thermoelectric module is presented, and the results agree well with those published in the literature.
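
    For reference, the standard single-stage relations connecting the measured maximum temperature difference to the module's figure of merit (a textbook result, stated here for orientation rather than taken from this paper):

    ```latex
    Z = \frac{S^2 \sigma}{\kappa},
    \qquad
    \Delta T_{\max} = \tfrac{1}{2}\, Z\, T_c^{\,2}
    ```

    where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T_c the cold-side temperature; measuring ΔT_max with the infrared camera thus constrains Z.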

  18. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method for multiple multimodal sensors, using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on a human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954

  19. Filtering Raw Terrestrial Laser Scanning Data for Efficient and Accurate Use in Geomorphologic Modeling

    NASA Astrophysics Data System (ADS)

    Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.

    2011-12-01

    Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
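
    A minimal sketch of the local-aggregation idea, as a plain voxel-grid thinning in NumPy (the paper's actual 3-D search is driven by the scanner's error model rather than a fixed cell size, which is assumed here for simplicity):

    ```python
    from collections import defaultdict
    import numpy as np

    def voxel_thin(points: np.ndarray, cell: float) -> np.ndarray:
        """Replace all points falling in each cubic cell by their centroid.

        In the abstract's filter, the aggregation neighborhood is derived
        from the expected random error of the instrument; here `cell` is
        simply a free parameter.
        """
        bins = defaultdict(list)
        for p in points:
            bins[tuple((p // cell).astype(np.int64))].append(p)
        return np.array([np.mean(group, axis=0) for group in bins.values()])

    cloud = np.random.rand(100_000, 3) * 50.0   # synthetic 50 m scene
    thinned = voxel_thin(cloud, cell=0.05)      # 5 cm cells
    print(len(cloud), "->", len(thinned), "points")
    ```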

  20. A focal-spot diagnostic for on-shot characterization of high-energy petawatt lasers.

    PubMed

    Bromage, J; Bahk, S-W; Irwin, D; Kwiatkowski, J; Pruyne, A; Millecchia, M; Moore, M; Zuegel, J D

    2008-10-13

    An on-shot focal-spot diagnostic for characterizing high-energy, petawatt-class laser systems is presented. Accurate measurements at full energy are demonstrated using high-resolution wavefront sensing in combination with techniques to calibrate on-shot measurements with low-power sample beams. Results are shown for full-energy activation shots of the OMEGA EP Laser System.

  1. Rapid and Accurate Evaluation of the Quality of Commercial Organic Fertilizers Using Near Infrared Spectroscopy

    PubMed Central

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313
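
    A minimal sketch of an NIR-PLS calibration of this kind with scikit-learn (the data are synthetic stand-ins, and the component count and 10-fold cross-validation are assumptions, not the paper's settings):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic stand-ins: X holds NIR spectra (samples x wavelengths),
    # y a lab-measured property such as total organic matter.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(104, 700))
    y = X[:, :50].sum(axis=1) + rng.normal(scale=0.5, size=104)

    pls = PLSRegression(n_components=8)   # component count tuned by CV in practice
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
    print(f"RMSECV: {rmsecv:.3f}")
    ```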

  2. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    PubMed

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.

  3. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optic tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (acousto-optic tunable filter) by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only are tuning-curve characterization and the corresponding spectral calibration performed, but also spectral resolution assessment. The obtained results indicate that the proposed method is efficient, accurate, and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and is thereby a highly competitive alternative to existing calibration methods.

  4. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 mPa·s (millipascal seconds). The obtainable drop sizes range from 5 picoliters to a few nanoliters, with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be excellently suited to the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  5. Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers

    NASA Astrophysics Data System (ADS)

    Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.

    2013-09-01

    Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.

  6. A generalized gamma mixture model for ultrasonic tissue characterization.

    PubMed

    Vegas-Sanchez-Ferrero, Gonzalo; Aja-Fernandez, Santiago; Palencia, Cesar; Martin-Fernandez, Marcos

    2012-01-01

    Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of this distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal arises from tissues of differing nature. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
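
    A minimal sketch of a numerical ML fit of a single generalized gamma component with SciPy (synthetic data; the paper's robust estimator and the mixture/EM machinery are not reproduced):

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic "speckle amplitude" samples drawn from a generalized gamma
    # distribution; in practice these would be ultrasound echo envelope values.
    samples = stats.gengamma.rvs(a=2.0, c=1.5, scale=0.8, size=5000,
                                 random_state=np.random.default_rng(1))

    # Numerical ML fit; no closed-form estimator exists, as the abstract notes.
    a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(samples, floc=0)
    print(f"a = {a_hat:.2f}, c = {c_hat:.2f}, scale = {scale_hat:.2f}")
    ```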

  7. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    PubMed Central

    Palencia, Cesar; Martin-Fernandez, Marcos

    2012-01-01

    Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of this distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal arises from tissues of differing nature. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images. PMID:23424602

  8. Dispersive liquid-liquid microextraction and gas chromatography accurate mass spectrometry for extraction and non-targeted profiling of volatile and semi-volatile compounds in grape marc distillates.

    PubMed

    Fontana, Ariel; Rodríguez, Isaac; Cela, Rafael

    2018-04-20

    The suitability of dispersive liquid-liquid microextraction (DLLME) and gas chromatography accurate mass spectrometry (GC-MS), based on a time-of-flight (TOF) MS analyzer and using electron ionization (EI), is evaluated for the characterization of volatile and semi-volatile profiles of grape marc distillates (grappa). DLLME conditions are optimized with a selection of compounds, from different chemical families, present in the distillate spirit. Under final working conditions, 2.5 mL of sample and 0.5 mL of organic solvents are consumed in the sample preparation process. The absolute extraction efficiencies ranged from 30 to 100%, depending on the compound. For the same sample volume, DLLME provided higher responses than solid-phase microextraction (SPME) for most of the model compounds. The GC-EI-TOF-MS records of grappa samples were processed using a data-mining non-targeted search algorithm. In this way, chromatographic peaks and accurate EI-MS spectra of sample components were linked. The identities of more than 140 of these components are proposed from comparison of their accurate spectra with those in a low-resolution EI-MS database, the accurate masses of the most intense fragment ions of known structure, and available chromatographic retention indices. The use of the chromatographic and spectral data associated with the set of components mined from different grappa samples for multivariate analysis purposes is also illustrated in the study. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Accurate bond energies of hydrocarbons from complete basis set extrapolated multi-reference singles and doubles configuration interaction.

    PubMed

    Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A

    2011-12-09

    Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: (1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; (2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and (3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero-point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values for CC and CH bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
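
    One widely used two-parameter inverse-cube form (the abstract does not specify which two-parameter scheme the authors adopt) extrapolates energies from consecutive basis-set cardinal numbers X-1 and X:

    ```latex
    E(X) = E_{\text{CBS}} + \frac{A}{X^{3}}
    \quad\Longrightarrow\quad
    E_{\text{CBS}} = \frac{X^{3} E_X - (X-1)^{3} E_{X-1}}{X^{3} - (X-1)^{3}}
    ```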

  10. No Galaxy Left Behind: Accurate Measurements with the Faintest Objects in the Dark Energy Survey

    DOE PAGES

    Suchyta, E.

    2016-01-27

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of stars or galaxies detectable in an imaging survey. We have implemented our proposal in Balrog, a software package which embeds fake objects in real imaging in order to accurately characterize measurement biases. We also demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a wide variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for their sensitivity to depth and seeing variations. Using the synthetic galaxies as randoms in the standard Landy-Szalay correlation function estimator suppresses the effects of variable survey selection by at least two orders of magnitude. Our measured angular clustering is found to be in excellent agreement with that of a matched sample drawn from much deeper, higher-resolution space-based COSMOS imaging; over angular scales of 0.004° < θ < 0.2°, we find a best-fit scaling amplitude between the DES and COSMOS measurements of 1.00 ± 0.09. We expect this methodology to be broadly useful for extending the statistical reach of measurements in a wide variety of coming imaging surveys.
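
    The estimator in question is standard; with DD, DR, and RR the normalized pair counts between data (D) and random (R) catalogs, and with Balrog's injected galaxies playing the role of R here:

    ```latex
    \hat{w}(\theta) = \frac{DD(\theta) - 2\,DR(\theta) + RR(\theta)}{RR(\theta)}
    ```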

  11. pyQms enables universal and accurate quantification of mass spectrometry data.

    PubMed

    Leufken, Johannes; Niehues, Anna; Sarin, L Peter; Wessel, Florian; Hippler, Michael; Leidel, Sebastian A; Fufezan, Christian

    2017-10-01

    Quantitative mass spectrometry (MS) is a key technique in many research areas (1), including proteomics, metabolomics, glycomics, and lipidomics. Because all of the corresponding molecules can be described by chemical formulas, universal quantification tools are highly desirable. Here, we present pyQms, an open-source software for accurate quantification of all types of molecules measurable by MS. pyQms uses isotope pattern matching that offers an accurate quality assessment of all quantifications and the ability to directly incorporate mass spectrometer accuracy. pyQms is, due to its universal design, applicable to every research field, labeling strategy, and acquisition technique. This opens ultimate flexibility for researchers to design experiments employing innovative and hitherto unexplored labeling strategies. Importantly, pyQms performs very well to accurately quantify partially labeled proteomes in large scale and high throughput, the most challenging task for a quantification algorithm. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. A combined parasitological molecular approach for noninvasive characterization of parasitic nematode communities in wild hosts.

    PubMed

    Budischak, Sarah A; Hoberg, Eric P; Abrams, Art; Jolles, Anna E; Ezenwa, Vanessa O

    2015-09-01

    Most hosts are concurrently or sequentially infected with multiple parasites; thus, fully understanding interactions between individual parasite species and their hosts depends on accurate characterization of the parasite community. For parasitic nematodes, noninvasive methods for obtaining quantitative, species-specific infection data in wildlife are often unreliable. Consequently, characterization of gastrointestinal nematode communities of wild hosts has largely relied on lethal sampling to isolate and enumerate adult worms directly from the tissues of dead hosts. The necessity of lethal sampling severely restricts the host species that can be studied, the adequacy of sample sizes to assess diversity, the geographic scope of collections and the research questions that can be addressed. Focusing on gastrointestinal nematodes of wild African buffalo, we evaluated whether accurate characterization of nematode communities could be made using a noninvasive technique that combined conventional parasitological approaches with molecular barcoding. To establish the reliability of this new method, we compared estimates of gastrointestinal nematode abundance, prevalence, richness and community composition derived from lethal sampling with estimates derived from our noninvasive approach. Our noninvasive technique accurately estimated total and species-specific worm abundances, as well as worm prevalence and community composition when compared to the lethal sampling method. Importantly, the rate of parasite species discovery was similar for both methods, and only a modest number of barcoded larvae (n = 10) were needed to capture key aspects of parasite community composition. Overall, this new noninvasive strategy offers numerous advantages over lethal sampling methods for studying nematode-host interactions in wildlife and can readily be applied to a range of study systems. © 2015 John Wiley & Sons Ltd.

  13. System Characterization Results for the QuickBird Sensor

    NASA Technical Reports Server (NTRS)

    Holekamp, Kara; Ross, Kenton; Blonski, Slawomir

    2007-01-01

    An overall system characterization was performed on several DigitalGlobe QuickBird image products by the NASA Applied Research & Technology Project Office (formerly the Applied Sciences Directorate) at the John C. Stennis Space Center. This system characterization incorporated geopositional accuracy assessments, a spatial resolution assessment, and a radiometric calibration assessment. Geopositional assessments of standard georeferenced multispectral products were obtained using an array of accurately surveyed geodetic targets evenly spaced throughout a scene. Geopositional accuracy was calculated in terms of circular error. Spatial resolution of QuickBird panchromatic imagery was characterized based on edge response measurements using edge targets and the tilted-edge technique. Relative edge response was estimated as a geometric mean of normalized edge response differences measured in two directions of image pixels at points distanced from the edge by -0.5 and 0.5 of ground sample distance. A reflectance-based vicarious calibration approach, based on ground-based measurements and radiative transfer calculations, was used to estimate at-sensor radiance. These values were compared to those measured by the sensor to determine the sensor's radiometric accuracy. All imagery analyzed was acquired between fall 2005 and spring 2006. These characterization results were compared to previous years' results to identify any temporal drifts or trends.

  14. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  15. Characterizing Topology of Probabilistic Biological Networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power-law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
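
    A minimal sketch of why a single node's degree distribution is computable in polynomial time even though the number of topologies is exponential (this is the standard Poisson-binomial recurrence, given as an illustration rather than the paper's exact formulation):

    ```python
    import numpy as np

    def degree_distribution(edge_probs):
        """Exact degree distribution of a node whose incident edges exist
        independently with the given probabilities (Poisson binomial).

        Dynamic programming over edges runs in O(n^2), polynomial as the
        abstract requires, instead of enumerating the 2^n topologies.
        """
        dist = np.array([1.0])                # P(degree = 0) before any edge
        for p in edge_probs:
            new = np.zeros(len(dist) + 1)
            new[:-1] += dist * (1.0 - p)      # edge absent: degree unchanged
            new[1:] += dist * p               # edge present: degree + 1
            dist = new
        return dist

    print(degree_distribution([0.9, 0.5, 0.1]))   # P(degree = 0..3)
    ```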

  16. Systematic characterization of maturation time of fluorescent proteins in living cells

    PubMed Central

    Balleza, Enrique; Kim, J. Mark; Cluzel, Philippe

    2017-01-01

    Slow maturation time of fluorescent proteins limits accurate measurement of rapid gene expression dynamics and effectively reduces fluorescence signal in growing cells. We used high-precision time-lapse microscopy to characterize, at two different temperatures in E. coli, the maturation kinetics of 50 FPs that span the visible spectrum. We identified fast-maturing FPs that yield the highest signal-to-noise ratio and temporal resolution in individual growing cells. PMID:29320486

  17. Can computerized tomography accurately stage childhood renal tumors?

    PubMed

    Abdelhalim, Ahmed; Helmy, Tamer E; Harraz, Ahmed M; Abou-El-Ghar, Mohamed E; Dawaba, Mohamed E; Hafez, Ashraf T

    2014-07-01

    Staging of childhood renal tumors is crucial for treatment planning and outcome prediction. We sought to identify whether computerized tomography could accurately predict the local stage of childhood renal tumors. We retrospectively reviewed our database for patients diagnosed with childhood renal tumors and treated surgically between 1990 and 2013. Inability to retrieve preoperative computerized tomography, intraoperative tumor spillage, and non-Wilms childhood renal tumors were exclusion criteria. Local computerized tomography stage was assigned by a single experienced pediatric radiologist blinded to the pathological stage, using a consensus similar to the Children's Oncology Group Wilms tumor staging system. Tumors were stratified into up-front surgery and preoperative chemotherapy groups. The radiological stage of each tumor was compared to the pathological stage. A total of 189 tumors in 179 patients met inclusion criteria. Computerized tomography staging matched pathological staging in 68% of up-front surgery (70 of 103), 31.8% of pre-chemotherapy (21 of 66) and 48.8% of post-chemotherapy scans (42 of 86). Computerized tomography overstaged 21.4%, 65.2% and 46.5% of tumors in the up-front surgery, pre-chemotherapy and post-chemotherapy scans, respectively, and understaged 10.7%, 3% and 4.7%. Computerized tomography staging was more accurate in tumors managed by up-front surgery (p <0.001) and those without extracapsular extension (p <0.001). The validity of computerized tomography staging of childhood renal tumors remains doubtful. This staging is more accurate for tumors treated with up-front surgery and those without extracapsular extension. Preoperative computerized tomography can help to exclude capsular breach. Treatment strategy should be based on surgical and pathological staging to avoid the hazards of inaccurate staging. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  18. An unexpected way forward: towards a more accurate and rigorous protein-protein binding affinity scoring function by eliminating terms from an already simple scoring function.

    PubMed

    Swanson, Jon; Audie, Joseph

    2018-01-01

    A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four-term function yields a simple expression of all physically and statistically meaningful terms, one that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, hydrophobicity, and hydrophilicity, and (4) high-quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.

  19. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    PubMed Central

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  20. Indexed variation graphs for efficient and accurate resistome profiling.

    PubMed

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
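
    GROOT's index is an LSH Forest over variation-graph traversals; as a conceptual toy only (this is not GROOT's code, and seeding CRC32 is merely a convenient stand-in for a proper hash family), the MinHash similarity estimate underlying such similarity-search indexes can be sketched as:

    ```python
    import zlib

    def kmers(seq: str, k: int = 7) -> set:
        # Decompose a sequence into its set of length-k substrings.
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def minhash(shingles: set, num_hashes: int = 64) -> list:
        # One seeded CRC32 per slot; the minimum over the set is the
        # signature entry for that slot.
        return [min(zlib.crc32(s.encode(), seed) for s in shingles)
                for seed in range(num_hashes)]

    a = minhash(kmers("ACGTACGTAGCTAGCTACGATCGT"))
    b = minhash(kmers("ACGTACGTAGCTTGCTACGATCGT"))
    # The fraction of matching signature entries estimates Jaccard similarity.
    print(sum(x == y for x, y in zip(a, b)) / len(a))
    ```

    An LSH Forest then buckets such signatures so that reads need only be compared against candidate genes sharing a bucket, rather than the whole reference set.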

  1. Individual Differences in Accurately Judging Personality From Text.

    PubMed

    Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian

    2016-08-01

    This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. © 2015 Wiley Periodicals, Inc.

  2. Fractional Derivative Models for Ultrasonic Characterization of Polymer and Breast Tissue Viscoelasticity

    PubMed Central

    Coussot, Cecile; Kalyanam, Sureshkumar; Yapp, Rebecca; Insana, Michael F.

    2009-01-01

    The viscoelastic response of hydropolymers, which include glandular breast tissues, may be accurately characterized for some applications with as few as 3 rheological parameters by applying the Kelvin-Voigt fractional derivative (KVFD) modeling approach. We describe a technique for ultrasonic imaging of KVFD parameters in media undergoing unconfined, quasi-static, uniaxial compression. We analyze the KVFD parameter values in simulated and experimental echo data acquired from phantoms and show that the KVFD parameters may concisely characterize the viscoelastic properties of hydropolymers. We then interpret the KVFD parameter values for normal and cancerous breast tissues and hypothesize that this modeling approach may ultimately be applied to tumor differentiation. PMID:19406700
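
    The KVFD constitutive law referred to is, in its standard three-parameter form (notation assumed here for orientation):

    ```latex
    \sigma(t) = E_0\,\varepsilon(t) + \eta\, \frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}},
    \qquad 0 \le \alpha \le 1
    ```

    where α = 0 recovers a purely elastic solid and α = 1 the classical Kelvin-Voigt model; imaging the triplet (E_0, η, α) is what the described technique does.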

  3. In-vivo analysis of ankle joint movement for patient-specific kinematic characterization.

    PubMed

    Ferraresi, Carlo; De Benedictis, Carlo; Franco, Walter; Maffiodo, Daniela; Leardini, Alberto

    2017-09-01

    In this article, a method for the experimental in-vivo characterization of the ankle kinematics is proposed. The method is meant to improve personalization of various ankle joint treatments, such as surgical decision-making or design and application of an orthosis, possibly to increase their effectiveness. This characterization in fact would make the treatments more compatible with the specific patient's joint physiological conditions. This article describes the experimental procedure and the analytical method adopted, based on the instantaneous and mean helical axis theories. The results obtained in this experimental analysis reveal that more accurate techniques are necessary for a robust in-vivo assessment of the tibio-talar axis of rotation.

  4. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. It is based on the following principle: first, detectors along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can achieve radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, often fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented using simulated and real infrared imagery.
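
    A minimal sketch of the per-detector affine correction that absolute calibration produces (the two-point flavor below uses uniform reference frames; the paper's contribution, transporting such calibration from perimeter detectors to the interior using scene motion, is not reproduced here):

    ```python
    import numpy as np

    def two_point_calibration(y_cold, y_hot, t_cold, t_hot):
        # Per-detector gain/offset from two uniform reference frames at
        # known effective radiances t_cold < t_hot.
        gain = (t_hot - t_cold) / (y_hot - y_cold)
        offset = t_cold - gain * y_cold
        return gain, offset

    def correct(frame, gain, offset):
        # The affine map removes per-detector gain and bias nonuniformity.
        return gain * frame + offset

    rng = np.random.default_rng(2)
    true_gain = 1.0 + 0.05 * rng.normal(size=(4, 4))
    true_bias = 2.0 * rng.normal(size=(4, 4))
    y_cold = true_gain * 10.0 + true_bias   # response to 10-unit scene
    y_hot = true_gain * 40.0 + true_bias    # response to 40-unit scene

    g, o = two_point_calibration(y_cold, y_hot, 10.0, 40.0)
    print(np.allclose(correct(y_hot, g, o), 40.0))   # True
    ```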

  5. Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach

    PubMed Central

    Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire taxon set. A 'quartet' is an unrooted tree over four taxa; hence, quartet-based supertree methods combine many four-taxon unrooted trees into a single, coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses) without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets. PMID:25117474

  6. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  7. Simplifying Nanowire Hall Effect Characterization by Using a Three-Probe Device Design.

    PubMed

    Hultin, Olof; Otnes, Gaute; Samuelson, Lars; Storm, Kristian

    2017-02-08

    Electrical characterization of nanowires is a time-consuming and challenging task due to the complexity of single nanowire device fabrication and the difficulty in interpreting the measurements. We present a method to measure Hall effect in nanowires using a three-probe device that is simpler to fabricate than previous four-probe nanowire Hall devices and allows characterization of nanowires with smaller diameter. Extraction of charge carrier concentration from the three-probe measurements using an analytical model is discussed and compared to simulations. The validity of the method is experimentally verified by a comparison between results obtained with the three-probe method and results obtained using four-probe nanowire Hall measurements. In addition, a nanowire with a diameter of only 65 nm is characterized to demonstrate the capabilities of the method. The three-probe Hall effect method offers a relatively fast and simple, yet accurate way to quantify the charge carrier concentration in nanowires and has the potential to become a standard characterization technique for nanowires.
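
    For orientation, the textbook Hall relation from which carrier concentration follows in a thin conductor of thickness t (nanowire geometries require the analytical-model or simulation corrections the abstract discusses):

    ```latex
    V_H = \frac{I B}{n\, q\, t}
    \quad\Longrightarrow\quad
    n = \frac{I B}{q\, t\, V_H}
    ```

    where I is the drive current, B the magnetic flux density, q the elementary charge, and V_H the measured Hall voltage.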

  8. Characterization of Low Pressure Cold Plasma in the Cleaning of Contaminated Surfaces

    NASA Technical Reports Server (NTRS)

    Lanz, Devin Garrett; Hintze, Paul E.

    2016-01-01

    The characterization of low-pressure cold plasma is a broad topic that would benefit many different applications involving such plasma. The characterization described in this paper focuses on cold plasma used as a medium in cleaning and disinfection applications. Optical Emission Spectroscopy (OES) and Mass Spectrometry (MS) are the two analytical methods used in this paper to characterize the plasma. OES analyzes molecules in the plasma phase by recording the light emitted by the plasma on a graph of wavelength vs. intensity. OES was most useful in identifying species that may interact with other molecules in the plasma, such as atomic oxygen or hydroxyl radicals. Extracting useful data from the MS is done by filtering out the peaks generated by expected molecules and looking for peaks caused by foreign ones leaving the plasma chamber. This paper describes the efforts at setting up and testing these methods in order to characterize the plasma accurately and effectively.

  9. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  10. Hydrogen atoms can be located accurately and precisely by x-ray crystallography

    PubMed Central

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M.; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-01-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A–H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A–H bond lengths with those from neutron measurements for A–H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  11. Aerodynamic Characterization of a Modern Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Holland, Scott D.; Blevins, John A.

    2011-01-01

    A modern launch vehicle is by necessity an extremely integrated design. The accurate characterization of its aerodynamic characteristics is essential to determine design loads, to design flight control laws, and to establish performance. The NASA Ares Aerodynamics Panel has been responsible for technical planning, execution, and vetting of the aerodynamic characterization of the Ares I vehicle. An aerodynamics team supporting the Panel consists of wind tunnel engineers, computational engineers, database engineers, and other analysts that address topics such as uncertainty quantification. The team resides at three NASA centers: Langley Research Center, Marshall Space Flight Center, and Ames Research Center. The Panel has developed strategies to synergistically combine both the wind tunnel efforts and the computational efforts with the goal of validating the computations. Selected examples highlight key flow physics and, where possible, the fidelity of the comparisons between wind tunnel results and the computations. Lessons learned summarize what has been gleaned during the project and can be useful for other vehicle development projects.

  12. Characterization of structural connections using free and forced response test data

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Huckelbridge, Arthur A.

    1989-01-01

    The accurate prediction of system dynamic response often has been limited by deficiencies in existing capabilities to characterize connections adequately. Connections between structural components often are mechanically complex and difficult to model accurately by analysis. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified using a system having both a linear and a nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree-of-freedom systems and does not require that the test data be measured directly at the connection locations.

  13. Analyses of GPR signals for characterization of ground conditions in urban areas

    NASA Astrophysics Data System (ADS)

    Hong, Won-Taek; Kang, Seonghun; Lee, Sung Jin; Lee, Jong-Sub

    2018-05-01

    Ground penetrating radar (GPR) is applied for the characterization of ground conditions in urban areas. In addition, time domain reflectometry (TDR) and dynamic cone penetrometer (DCP) tests are conducted for accurate analyses of the GPR images. The GPR images are acquired near a ground excavation site where a ground subsidence occurred and was repaired. Moreover, the relative permittivity and dynamic cone penetration index (DCPI) are profiled through the TDR and DCP tests, respectively. As the ground in the urban area is kept under a low-moisture condition, the relative permittivity, which is inversely related to the electromagnetic impedance, is mainly affected by the dry density and is inversely proportional to the DCPI value. Because the first strong signal in the GPR image is shifted 180° from the emitted signal, the polarity of the electromagnetic wave reflected at the dense layer, where the reflection coefficient is negative, is identical to that of the first strong signal. The temporal-scaled GPR images can be accurately converted into spatial-scaled GPR images using the relative permittivity determined by the TDR test. The distribution of the loose layer can be accurately estimated by using the spatial-scaled GPR images and the reflection characteristics of the electromagnetic wave. Note that the loose layer distribution estimated in this study matches well with the DCPI profile and is visually verified from the endoscopic images. This study demonstrates that the GPR survey, complemented by the TDR and DCP tests, may be an effective method for the characterization of ground conditions in urban areas.
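
    The time-to-depth conversion that turns the TDR-measured permittivity into spatial scaling is the standard one (c is the speed of light in vacuum, t the two-way travel time):

    ```latex
    v = \frac{c}{\sqrt{\varepsilon_r}},
    \qquad
    d = \frac{v\, t}{2}
    ```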

  14. GPU Accelerated Browser for Neuroimaging Genomics.

    PubMed

    Zigon, Bob; Li, Huang; Yao, Xiaohui; Fang, Shiaofen; Hasan, Mohammad Al; Yan, Jingwen; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2018-04-25

    Neuroimaging genomics is an emerging field that provides exciting opportunities to understand the genetic basis of brain structure and function. The unprecedented scale and complexity of the imaging and genomics data, however, have presented critical computational bottlenecks. In this work we present our initial efforts towards building an interactive visual exploratory system for mining big data in neuroimaging genomics. A GPU-accelerated browsing tool for neuroimaging genomics is created that implements the ANOVA algorithm for single nucleotide polymorphism (SNP) based analysis and the VEGAS algorithm for gene-based analysis, and executes them at interactive rates. The ANOVA algorithm is 110 times faster than the 4-core OpenMP version, while the VEGAS algorithm is 375 times faster than its 4-core OpenMP counterpart. This approach lays a solid foundation for researchers to address the challenges of mining large-scale imaging genomics datasets via interactive visual exploration.

  15. Accurately measuring the height of (real) forest trees

    Treesearch

    Don C. Bragg

    2014-01-01

    Quick and accurate tree height measurement has always been a goal of foresters. The techniques and technology to measure height were developed long ago—even the earliest textbooks on mensuration showcased hypsometers (e.g., Schlich 1895, Mlodziansky 1898, Schenck 1905, Graves 1906), and approaches to refine these sometimes remarkable tools appeared in the first issues...

  16. Determining site index accurately in even-aged stands

    Treesearch

    Gayne G. Erdmann; Ralph M., Jr. Peterson

    1992-01-01

    Good site index estimates are necessary for intensive forest management. To get tree age used in determining site index, increment cores are commonly used. The diffuse-porous rings of northern hardwoods, though, are difficult to count in cores, so many site index estimates are imprecise. Also, measuring the height of standing trees is more difficult and less accurate...

  17. Design Concepts, Fabrication and Advanced Characterization Methods of Innovative Piezoelectric Sensors Based on ZnO Nanowires.

    PubMed

    Araneo, Rodolfo; Rinaldi, Antonio; Notargiacomo, Andrea; Bini, Fabiano; Pea, Marialilia; Celozzi, Salvatore; Marinozzi, Franco; Lovat, Giampiero

    2014-12-08

    Micro- and nano-scale materials and systems based on zinc oxide are expected to find rapidly expanding applications in electronics and photonics, including nano-arrays of addressable optoelectronic devices and sensors, due to their outstanding properties, including semiconductivity and the presence of a direct bandgap, piezoelectricity, pyroelectricity and biocompatibility. Most applications are based on the cooperative and average response of a large number of ZnO micro/nanostructures. However, in order to assess the quality of the materials and their performance, it is fundamental to characterize and then accurately model the specific electrical and piezoelectric properties of single ZnO structures. In this paper, we report on focused ion beam machined, high-aspect-ratio nanowires and their mechanical and electrical (by means of conductive atomic force microscopy) characterization. Then, we investigate the suitability of new power-law design concepts to accurately model the relevant electrical and mechanical size effects, whose existence has been emphasized in recent reviews.

  18. Design Concepts, Fabrication and Advanced Characterization Methods of Innovative Piezoelectric Sensors Based on ZnO Nanowires

    PubMed Central

    Araneo, Rodolfo; Rinaldi, Antonio; Notargiacomo, Andrea; Bini, Fabiano; Pea, Marialilia; Celozzi, Salvatore; Marinozzi, Franco; Lovat, Giampiero

    2014-01-01

    Micro- and nano-scale materials and systems based on zinc oxide are expected to find rapidly expanding applications in electronics and photonics, including nano-arrays of addressable optoelectronic devices and sensors, due to their outstanding properties, including semiconductivity and the presence of a direct bandgap, piezoelectricity, pyroelectricity and biocompatibility. Most applications are based on the cooperative and average response of a large number of ZnO micro/nanostructures. However, in order to assess the quality of the materials and their performance, it is fundamental to characterize and then accurately model the specific electrical and piezoelectric properties of single ZnO structures. In this paper, we report on focused ion beam machined, high-aspect-ratio nanowires and their mechanical and electrical (by means of conductive atomic force microscopy) characterization. Then, we investigate the suitability of new power-law design concepts to accurately model the relevant electrical and mechanical size effects, whose existence has been emphasized in recent reviews. PMID:25494351

  19. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    PubMed

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  20. Fast and Accurate Resonance Assignment of Small-to-Large Proteins by Combining Automated and Manual Approaches

    PubMed Central

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A.; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available. PMID:25569628

  1. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  2. Accurate determination of selected pesticides in soya beans by liquid chromatography coupled to isotope dilution mass spectrometry.

    PubMed

    Huertas Pérez, J F; Sejerøe-Olsen, B; Fernández Alba, A R; Schimmel, H; Dabrio, M

    2015-05-01

    A sensitive, accurate and simple liquid chromatography coupled with mass spectrometry method for the determination of 10 selected pesticides in soya beans has been developed and validated. The method is intended for use during the characterization of selected pesticides in a reference material. In this process, high accuracy and appropriate uncertainty levels associated with the analytical measurements are of utmost importance. The analytical procedure is based on sample extraction by the use of a modified QuEChERS (quick, easy, cheap, effective, rugged, safe) extraction and subsequent clean-up of the extract with C18, PSA and Florisil. Analytes were separated on a C18 column using gradient elution with water-methanol/2.5 mM ammonium acetate mobile phase, and finally identified and quantified by triple quadrupole mass spectrometry in the multiple reaction monitoring mode (MRM). Reliable and accurate quantification of the analytes was achieved by means of stable isotope-labelled analogues employed as internal standards (IS) and calibration with pure substance solutions containing both the isotopically labelled and the native compounds. Exceptions were made for thiodicarb and malaoxon, for which the isotopically labelled congeners were not commercially available at the time of analysis. For the quantification of those compounds, methomyl-(13)C2(15)N and malathion-D10 were used, respectively. The method was validated according to the general principles covered by the DG SANCO guidelines; however, validation criteria were set more stringently. Mean recoveries were in the range of 86-103% with RSDs lower than 8.1%. Repeatability and intermediate precision were in the range of 3.9-7.6% and 1.9-8.7%, respectively. LODs were theoretically estimated and experimentally confirmed to be in the range 0.001-0.005 mg kg(-1) in the matrix, while LOQs, established as the lowest spiked mass fraction level, were in the range 0.01-0.05 mg kg(-1). The method reliably identifies and quantifies the

  3. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    PubMed

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  4. Learning a Weighted Sequence Model of the Nucleosome Core and Linker Yields More Accurate Predictions in Saccharomyces cerevisiae and Homo sapiens

    PubMed Central

    Reynolds, Sheila M.; Bilmes, Jeff A.; Noble, William Stafford

    2010-01-01

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence—301 base pairs, centered at the position to be scored—with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  5. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied to a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis shows that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  6. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP so that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient for attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65x better performance for the triples part of the CCSD(T), due in large part to the fact that the limited on-card memory restricts the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix construction when compared with the best MPI implementations running multiple processes per card.
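
    The record does not reproduce the NWChem source; a minimal sketch of the "straightforward application of OpenMP to the deep loop nests" pattern it describes (loop bounds, array shapes, and names are hypothetical, not NWChem's actual code) is:

        /* Collapse the outer loops of a tensor-contraction-style nest so that
           the many threads of a manycore chip all receive work. */
        void triples_like(int no, int nv, const double *t2,
                          const double *v, double *t3)
        {
            #pragma omp parallel for collapse(3) schedule(static)
            for (int a = 0; a < nv; ++a)
                for (int b = 0; b < nv; ++b)
                    for (int c = 0; c < nv; ++c) {
                        double acc = 0.0;
                        for (int i = 0; i < no; ++i)   /* contraction index */
                            acc += t2[(a * nv + b) * no + i] * v[c * no + i];
                        t3[(a * nv + b) * nv + c] = acc;
                    }
        }

    The collapse(3) clause exposes nv^3 iterations to the runtime rather than nv, which matters when the thread count approaches or exceeds the trip count of any single loop.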

  7. Hierarchical resilience with lightweight threads.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Kyle Bruce

    2011-10-01

    This paper proposes a methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One of the trends in high-performance computing codes is a move toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).
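
    No code accompanies the record; a minimal sketch of the design pattern it describes - copy well-defined inputs, work on local memory, copy the outputs back - under assumed names might look like:

        #include <stdlib.h>
        #include <string.h>

        /* A task with well-defined inputs and outputs: everything read is
           copied in, everything written is copied out, so a runtime could
           restart or duplicate the task after a failure. Illustrative only. */
        typedef struct { size_t n; double *data; } buffer_t;

        int run_task(const buffer_t *in, buffer_t *out)
        {
            double *local = malloc(in->n * sizeof *local);
            if (!local) return -1;
            memcpy(local, in->data, in->n * sizeof *local);  /* barrier: inputs in */

            for (size_t i = 0; i < in->n; ++i)   /* side-effect-free work */
                local[i] = local[i] * local[i];

            memcpy(out->data, local, in->n * sizeof *local); /* barrier: outputs out */
            free(local);
            return 0;  /* on failure, the runtime can simply run the task again */
        }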

  8. Can blind persons accurately assess body size from the voice?

    PubMed

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  9. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  10. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  11. Can blind persons accurately assess body size from the voice?

    PubMed Central

    Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-01-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20–65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  12. The diagnostic capability of laser induced fluorescence in the characterization of excised breast tissues

    NASA Astrophysics Data System (ADS)

    Galmed, A. H.; Elshemey, Wael M.

    2017-08-01

    Differentiating between normal, benign and malignant excised breast tissues is a major worldwide challenge that needs a quantitative, fast and reliable technique in order to avoid personal errors in diagnosis. Laser induced fluorescence (LIF) is a promising technique that has been applied for the characterization of biological tissues, including breast tissue. Unfortunately, only a few studies have adopted a quantitative approach that can be directly applied to breast tissue characterization. This work provides a quantitative means for such characterization via the introduction of several LIF characterization parameters and the determination of the diagnostic accuracy of each parameter in differentiating between normal, benign and malignant excised breast tissues. Extensive analysis of 41 lyophilized breast samples using scatter diagrams, cut-off values, diagnostic indices and receiver operating characteristic (ROC) curves shows that some spectral parameters (peak height and area under the peak) are superior for the characterization of normal, benign and malignant breast tissues, with high sensitivity (up to 0.91), specificity (up to 0.91) and accuracy ranking (highly accurate).

  13. Evaluating ASTER satellite imagery and gradient modeling for mapping and characterizing wildland fire fuels

    Treesearch

    Michael J. Falkowski; Paul Gessler; Penelope Morgan; Alistair M. S. Smith; Andrew T. Hudak

    2004-01-01

    Land managers need cost-effective methods for mapping and characterizing fire fuels quickly and accurately. The advent of sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the Advanced Spaceborne Thermal Emission and Reflection...

  14. Evaluating the ASTER sensor for mapping and characterizing forest fire fuels in northern Idaho

    Treesearch

    Michael J. Falkowski; Paul Gessler; Penelope Morgan; Alistair M. S. Smith; Andrew T. Hudak

    2004-01-01

    Land managers need cost-effective methods for mapping and characterizing fire fuels quickly and accurately. The advent of sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the Advanced Spaceborne Thermal Emission and Reflection...

  15. Experimental Identification and Characterization of Multirotor UAV Propulsion

    NASA Astrophysics Data System (ADS)

    Kotarski, Denis; Krznar, Matija; Piljek, Petar; Simunic, Nikola

    2017-07-01

    In this paper, an experimental procedure for the identification and characterization of multirotor Unmanned Aerial Vehicle (UAV) propulsion is presented. The propulsion configuration needs to be defined precisely in order to achieve the required flight performance. Based on an accurate dynamic model and empirical measurements of the physical parameters of multirotor propulsion, it is possible to design diverse configurations with different characteristics for various purposes. As a case study, we investigated design considerations for a micro indoor multirotor suitable for control algorithm implementation in a structured environment. It consists of an open-source autopilot, sensors for indoor flight, off-the-shelf propulsion components and a frame. A series of experiments was conducted to show the process of parameter identification and the procedure for analysis and propulsion characterization. Additionally, we explore battery performance in terms of mass and specific energy. Experimental results show the identified and estimated propulsion parameters, through which blade element theory is verified.

  16. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  17. Accurate Identification of Fear Facial Expressions Predicts Prosocial Behavior

    PubMed Central

    Marsh, Abigail A.; Kozak, Megan N.; Ambady, Nalini

    2009-01-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants’ ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale. PMID:17516803

  18. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  19. Accurate identification of fear facial expressions predicts prosocial behavior.

    PubMed

    Marsh, Abigail A; Kozak, Megan N; Ambady, Nalini

    2007-05-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants' ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale.

  20. The CC/DFT Route towards Accurate Structures and Spectroscopic Features for Observed and Elusive Conformers of Flexible Molecules: Pyruvic Acid as Case Study

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Cimino, Paola; Penocchio, Emanuele; Puzzarini, Cristina

    2018-01-01

    The structures, relative stabilities as well as the rotational and vibrational spectra of the three low-energy conformers of pyruvic acid (PA) have been characterized using a state-of-the-art quantum-mechanical approach designed for flexible molecules. By making use of the available experimental rotational constants for several isotopologues of the most stable PA conformer, Tc-PA, the semi-experimental equilibrium structure has been derived. The latter provides a reference for the purely theoretical determination of the equilibrium geometries of all conformers, thus confirming for these structures an accuracy of 0.001 Å and 0.1 deg. for bond lengths and angles, respectively. Highly accurate relative energies of all conformers (Tc-, Tt- and Ct-PA) and of the transition states connecting them are provided along with the thermodynamic properties at low and high temperatures, thus leading to conformational enthalpies accurate to 1 kJ mol−1. Concerning microwave spectroscopy, rotational constants accurate to about 20 MHz are provided for the Tt- and Ct-PA conformers, together with the computed centrifugal-distortion constants and dipole moments required to simulate their rotational spectra. For Ct-PA, vibrational frequencies in the mid-infrared region accurate to 10 cm−1 are reported along with theoretical estimates for the transitions in the near-infrared range, and the corresponding infrared spectrum including fundamental transitions, overtones and combination bands has been simulated. In addition to the new data described above, theoretical results for the Tc- and Tt-PA conformers are compared with all available experimental data to further confirm the accuracy of the hybrid coupled-cluster/density functional theory (CC/DFT) protocol applied in the present study. Finally, we discuss in detail the accuracy of computational models fully based on double-hybrid DFT functionals (mainly at the B2PLYP/aug-cc-pVTZ level) that avoid the use of very expensive CC

  1. A photogrammetric technique for generation of an accurate multispectral optical flow dataset

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2017-06-01

    The presence of an accurate dataset is the key requirement for successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of these datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for generation of an accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.

  2. Modeling and characterization of partially inserted electrical connector faults

    NASA Astrophysics Data System (ADS)

    Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.

    2016-03-01

    Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.
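
    As a rough aid to the resonance-shift argument (a textbook simplification, not the paper's actual two-level model), an ideal transmission-line section of physical length $L$ filled with a dielectric of relative permittivity $\varepsilon_r$ resonates at

        f_n = \frac{n\,c}{2L\sqrt{\varepsilon_r}}, \qquad \frac{\Delta f_n}{f_n} \approx -\frac{\Delta L}{L}

    so a small change $\Delta L$ in the effective contact length shifts every resonance by the same relative amount, which is consistent with the paper's ability to capture insertion depth by varying only two length parameters.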

  3. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

    Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other as yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e. "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin, with a non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  4. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

    Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin, with a non-covalent but non-exchangeable CN− group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  5. Data Race Benchmark Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Chunhua; Lin, Pei-Hung; Asplund, Joshua

    2017-03-21

    This project is a benchmark suite of OpenMP parallel codes that have been checked for data races. The programs are marked to show which do and do not have races. This allows them to be leveraged while testing and developing race detection tools.
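
    The suite's programs are not reproduced in this record; a minimal example of the kind of labeled pair such a collection contains - one racy loop and one race-free counterpart - could look like:

        #include <stdio.h>

        int main(void)
        {
            double sum_racy = 0.0, sum_fixed = 0.0;

            /* DATA RACE: every thread updates sum_racy without synchronization. */
            #pragma omp parallel for
            for (int i = 0; i < 1000000; ++i)
                sum_racy += 1.0;

            /* RACE-FREE: reduction gives each thread a private partial sum. */
            #pragma omp parallel for reduction(+ : sum_fixed)
            for (int i = 0; i < 1000000; ++i)
                sum_fixed += 1.0;

            printf("racy=%.0f fixed=%.0f\n", sum_racy, sum_fixed);
            return 0;
        }

    A race detector run over such a pair should flag the first loop and stay silent on the second, which is exactly the ground truth a marked suite provides.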

  6. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has globally advanced by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions in future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
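
    As a concrete illustration of the one-phase family described above (a generic thermal-time model with illustrative parameter names, not any specific published fit):

        /* One-phase ("thermal time") model: accumulate daily forcing above a
           base temperature from a fixed start day; budburst is predicted on
           the first day the accumulated forcing reaches a critical value. */
        int predict_budburst_day(const double *t_mean, int n_days,
                                 int start_day, double t_base, double f_crit)
        {
            double forcing = 0.0;
            for (int d = start_day; d < n_days; ++d) {
                if (t_mean[d] > t_base)
                    forcing += t_mean[d] - t_base;   /* degree days above base */
                if (forcing >= f_crit)
                    return d;                        /* predicted budburst day */
            }
            return -1;                               /* threshold never reached */
        }

    A two-phase model would precede this loop with an analogous accumulation of chilling units that determines the dormancy-break day from which forcing starts, which is precisely the component the abstract argues becomes critical under winter warming.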

  7. Tissue resonance interaction accurately detects colon lesions: A double-blind pilot study.

    PubMed

    Dore, Maria P; Tufano, Marcello O; Pes, Giovanni M; Cuccu, Marianna; Farina, Valentina; Manca, Alessandra; Graham, David Y

    2015-07-07

    To investigate the performance of the tissue resonance interaction method (TRIM) for the non-invasive detection of colon lesions. We performed a prospective single-center blinded pilot study of consecutive adults undergoing colonoscopy at the University Hospital in Sassari, Italy. Before patients underwent colonoscopy, they were examined with the TRIMprob, which detects differences in electromagnetic properties between pathological and normal tissues. All patients had completed the polyethylene glycol-containing bowel prep for the colonoscopy procedure before being screened. During the procedure the subjects remained fully dressed. A hand-held probe was moved over the abdomen and variations in electromagnetic signals were recorded for 3 spectral lines (462-465 MHz, 930 MHz, and 1395 MHz). A single investigator, blind to any clinical information, performed the test using the TRIMprob system. Abnormal signals were identified and recorded as malignant or benign (adenoma or hyperplastic polyps). Findings were compared with those from colonoscopy with histologic confirmation. Statistical analysis was performed by χ2 test. A total of 305 consecutive patients fulfilling the inclusion criteria were enrolled over a period of 12 months. The most frequent indication for colonoscopy was abdominal pain (33%). The TRIMprob was well accepted by all patients; none spontaneously complained about the procedure, and no adverse effects were observed. TRIM proved inaccurate for polyp detection in patients with inflammatory bowel disease (IBD), and they were excluded, leaving 281 subjects (mean age 59 ± 13 years; 107 males). The TRIM detected and accurately characterized all 12 adenocarcinomas and 135/137 polyps (98.5%), including all 64 adenomatous polyps found (100%). The method identified cancers and polyps with 98.7% sensitivity, 96.2% specificity, and 97.5% diagnostic accuracy, compared to colonoscopy and histology analyses. The positive predictive value was 96.7% and the negative predictive

  8. Tissue resonance interaction accurately detects colon lesions: A double-blind pilot study

    PubMed Central

    Dore, Maria P; Tufano, Marcello O; Pes, Giovanni M; Cuccu, Marianna; Farina, Valentina; Manca, Alessandra; Graham, David Y

    2015-01-01

    AIM: To investigate the performance of the tissue resonance interaction method (TRIM) for the non-invasive detection of colon lesions. METHODS: We performed a prospective single-center blinded pilot study of consecutive adults undergoing colonoscopy at the University Hospital in Sassari, Italy. Before patients underwent colonoscopy, they were examined with the TRIMprob, which detects differences in electromagnetic properties between pathological and normal tissues. All patients had completed the polyethylene glycol-containing bowel prep for the colonoscopy procedure before being screened. During the procedure the subjects remained fully dressed. A hand-held probe was moved over the abdomen and variations in electromagnetic signals were recorded for 3 spectral lines (462-465 MHz, 930 MHz, and 1395 MHz). A single investigator, blind to any clinical information, performed the test using the TRIMprob system. Abnormal signals were identified and recorded as malignant or benign (adenoma or hyperplastic polyps). Findings were compared with those from colonoscopy with histologic confirmation. Statistical analysis was performed by χ2 test. RESULTS: A total of 305 consecutive patients fulfilling the inclusion criteria were enrolled over a period of 12 months. The most frequent indication for colonoscopy was abdominal pain (33%). The TRIMprob was well accepted by all patients; none spontaneously complained about the procedure, and no adverse effects were observed. TRIM proved inaccurate for polyp detection in patients with inflammatory bowel disease (IBD), and they were excluded, leaving 281 subjects (mean age 59 ± 13 years; 107 males). The TRIM detected and accurately characterized all 12 adenocarcinomas and 135/137 polyps (98.5%), including all 64 adenomatous polyps found (100%). The method identified cancers and polyps with 98.7% sensitivity, 96.2% specificity, and 97.5% diagnostic accuracy, compared to colonoscopy and histology analyses. The positive predictive value was 96.7% and the

  9. Second-order accurate nonoscillatory schemes for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.
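
    As a concrete statement of the problem class (the report's specific schemes are not reproduced here), a scalar conservation law and a conservative update read

        u_t + f(u)_x = 0, \qquad u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\left(\hat{F}_{j+1/2} - \hat{F}_{j-1/2}\right)

    where a typical second-order nonoscillatory construction builds the numerical flux $\hat{F}_{j+1/2}$ from cell values reconstructed with limited slopes, e.g.

        s_j = \operatorname{minmod}\!\left(u_{j+1}^n - u_j^n,\; u_j^n - u_{j-1}^n\right), \qquad \operatorname{minmod}(a, b) = \begin{cases} \operatorname{sign}(a)\,\min(|a|, |b|) & \text{if } ab > 0 \\ 0 & \text{otherwise} \end{cases}

    Limiting the slopes is what prevents the discrete solution from developing new extrema while retaining second-order accuracy away from them.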

  10. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  11. A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.

    PubMed

    Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei

    2017-12-01

    The measurement of electroosmotic flow (EOF) is important in a capillary electrophoresis (CE) experiment in terms of performance optimization and stability improvement. Although several methods exist, there is a pressing need to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially the ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates were calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure drive was introduced to accelerate the migration and the DC voltage was reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method. Then the new method was applied in the study of coated capillaries. Results show that the new method is not only faster but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
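
    A plausible reading of the arithmetic behind the two runs (the paper's exact expressions are not given in this record): with the same pressure drive in both runs and the voltage polarity reversed between them, the apparent marker velocities are

        \frac{L_d}{t_1} = v_p + v_{\mathrm{EOF}}, \qquad \frac{L_d}{t_2} = v_p - v_{\mathrm{EOF}}

    so the pressure contribution $v_p$ cancels and

        v_{\mathrm{EOF}} = \frac{L_d}{2}\left(\frac{1}{t_1} - \frac{1}{t_2}\right)

    where $L_d$ is the capillary length to the detector and $t_1$, $t_2$ are the two migration times. Because only a difference of measured times enters, very small EOF velocities in coated capillaries remain resolvable.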

  12. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files at run time and recreated the muscle surface. The modeling function applied constant-volume limitations to the muscle and constant-geometry limitations to the tendons.

  13. Methodological Guidelines for Accurate Detection of Viruses in Wild Plant Species

    PubMed Central

    Renner, Kurra; Cole, Ellen; Seabloom, Eric W.; Borer, Elizabeth T.; Malmstrom, Carolyn M.

    2016-01-01

    Ecological understanding of disease risk, emergence, and dynamics and of the efficacy of control strategies relies heavily on efficient tools for microorganism identification and characterization. Misdetection, such as the misclassification of infected hosts as healthy, can strongly bias estimates of disease prevalence and lead to inaccurate conclusions. In natural plant ecosystems, interest in assessing microbial dynamics is increasing exponentially, but guidelines for detection of microorganisms in wild plants remain limited, particularly so for plant viruses. To address this gap, we explored issues and solutions associated with virus detection by serological and molecular methods in noncrop plant species as applied to the globally important Barley yellow dwarf virus PAV (Luteoviridae), which infects wild native plants as well as crops. With enzyme-linked immunosorbent assays (ELISA), we demonstrate how virus detection in a perennial wild plant species may be much greater in stems than in leaves, although leaves are most commonly sampled, and may also vary among tillers within an individual, thereby highlighting the importance of designing effective sampling strategies. With reverse transcription-PCR (RT-PCR), we demonstrate how inhibitors in tissues of perennial wild hosts can suppress virus detection but can be overcome with methods and products that improve isolation and amplification of nucleic acids. These examples demonstrate the paramount importance of testing and validating survey designs and virus detection methods for noncrop plant communities to ensure accurate ecological surveys and reliable assumptions about virus dynamics in wild hosts. PMID:26773088

  14. Incorporation of Fiber Bragg Sensors for Shape Memory Polyurethanes Characterization.

    PubMed

    Alberto, Nélia; Fonseca, Maria A; Neto, Victor; Nogueira, Rogério; Oliveira, Mónica; Moreira, Rui

    2017-11-11

    Shape memory polyurethanes (SMPUs) are thermally activated shape memory materials, which can be used as actuators or sensors in applications including aerospace, aeronautics, automobiles or the biomedical industry. The accurate characterization of the memory effect of these materials is therefore mandatory for the technology's success. The shape memory characterization is normally accomplished using mechanical testing coupled with a heat source, where a detailed knowledge of the heat cycle and its influence on the material properties is paramount but difficult to monitor. In this work, fiber Bragg grating (FBG) sensors were embedded into SMPU samples to study and characterize their shape memory effect. The samples were obtained by injection molding, and the entire processing cycle was successfully monitored, providing a global quality signature of the process. Moreover, the integrity and functionality of the FBG sensors were maintained during and after the embedding process, demonstrating the feasibility of the technology chosen for the purpose envisaged. The results of the shape memory effect characterization demonstrate a good correlation between the reflected FBG peak and the temperature and induced strain, proving that this technology is suitable for this particular application.

  15. Modeling and characterization of supercapacitors for wireless sensor network applications

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Yang, Hengzhao

    A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values of a supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22 F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining performance similar to other models during the charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution profile of voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
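
    A circuit description of this kind maps directly onto a small simulation. The sketch below integrates the two branches and a leakage path under open-circuit (self-discharge) conditions; all parameter values, and the leakage resistance being held fixed, are illustrative assumptions rather than the values identified for the 22 F device.

      // Hedged sketch of a two-branch RC supercapacitor model with leakage.
      // Parameter values are illustrative, not the paper's identified values.
      #include <cstdio>

      int main() {
          const double R1 = 0.05, C1 = 20.0;   // fast (charging) branch [ohm, F]
          const double R2 = 50.0, C2 = 2.0;    // slow (redistribution) branch [ohm, F]
          const double Rleak = 9000.0;         // leakage resistance, held fixed here [ohm]
          double v1 = 2.5, v2 = 0.0;           // branch voltages just after charging [V]
          const double dt = 0.1;               // integration step [s]

          for (int k = 0; k <= 36000; ++k) {   // one hour of open-circuit decay
              double i12 = (v1 - v2) / (R1 + R2);  // charge redistribution current
              double ileak = v1 / Rleak;           // self-discharge current
              v1 += dt * (-(i12 + ileak) / C1);
              v2 += dt * (i12 / C2);
              if (k % 6000 == 0)
                  std::printf("t = %6.0f s, terminal voltage v1 = %.4f V\n", k * dt, v1);
          }
          return 0;
      }

    The fast initial drop in v1 comes from redistribution into the slow branch; the long tail is governed by the leakage resistance, which is exactly the regime the variable-leakage element is meant to capture.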

  16. Incorporation of Fiber Bragg Sensors for Shape Memory Polyurethanes Characterization

    PubMed Central

    Nogueira, Rogério; Moreira, Rui

    2017-01-01

    Shape memory polyurethanes (SMPUs) are thermally activated shape memory materials, which can be used as actuators or sensors in applications including aerospace, aeronautics, automobiles or the biomedical industry. The accurate characterization of the memory effect of these materials is therefore mandatory for the technology's success. The shape memory characterization is normally accomplished using mechanical testing coupled with a heat source, where a detailed knowledge of the heat cycle and its influence on the material properties is paramount but difficult to monitor. In this work, fiber Bragg grating (FBG) sensors were embedded into SMPU samples to study and characterize their shape memory effect. The samples were obtained by injection molding, and the entire processing cycle was successfully monitored, providing a global quality signature of the process. Moreover, the integrity and functionality of the FBG sensors were maintained during and after the embedding process, demonstrating the feasibility of the technology chosen for the purpose envisaged. The results of the shape memory effect characterization demonstrate a good correlation between the reflected FBG peak and the temperature and induced strain, proving that this technology is suitable for this particular application. PMID:29137136

  17. Improved Phase Characterization of Far-Regional Body Wave Arrivals in Central Asia

    DTIC Science & Technology

    2008-09-30

    developing array-based methods that can more accurately characterize far-regional (14°-29°) seismic wavefield structure. Far-regional (14°-29°) seismograms...arrivals with the primary arrivals. These complexities can be region- and earthquake-specific. The regional seismic arrays that have been built in the last fifteen years should be a rich data source for the study of far-regional phase behavior. The arrays are composed of high-quality borehole seismometers

  18. A Parallel Stochastic Framework for Reservoir Characterization and History Matching

    DOE PAGES

    Thomas, Sunil G.; Klie, Hector M.; Rodriguez, Adolfo A.; ...

    2011-01-01

    The spatial distribution of parameters that characterize the subsurface is never known to any reasonable level of accuracy required to solve the governing PDEs of multiphase flow or species transport through porous media. This paper presents a numerically cheap, yet efficient, accurate and parallel framework to estimate reservoir parameters, for example, medium permeability, using sensor information from measurements of the solution variables such as phase pressures, phase concentrations, fluxes, and seismic and well log data. Numerical results are presented to demonstrate the method.

  19. XPS Protocol for the Characterization of Pristine and Functionalized Single Wall Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Sosa, E. D.; Allada, R.; Huffman, C. B.; Arepalli, S.

    2009-01-01

    Recent interest in developing new applications for carbon nanotubes (CNT) has fueled the need to use accurate macroscopic and nanoscopic techniques to characterize and understand their chemistry. X-ray photoelectron spectroscopy (XPS) has proved to be a useful analytical tool for nanoscale surface characterization of materials including carbon nanotubes. Recent nanotechnology research at NASA Johnson Space Center (NASA-JSC) helped to establish a characterization protocol for quality assessment for single wall carbon nanotubes (SWCNTs). Here, a review of some of the major factors of the XPS technique that can influence the quality of analytical data, suggestions for methods to maximize the quality of data obtained by XPS, and the development of a protocol for XPS characterization as a complementary technique for analyzing the purity and surface characteristics of SWCNTs are presented. The XPS protocol is then applied to a number of experiments, including impurity analysis and the study of chemical modifications for SWCNTs.

  20. Accurate van der Waals coefficients from density functional theory

    PubMed Central

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
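
    The dipole-dipole coefficient in such schemes typically follows the standard Casimir-Polder relation (quoted here as textbook background, not necessarily the authors' exact working expression):

      $C_6^{AB} = \frac{3}{\pi}\int_0^{\infty} \alpha_1^A(iu)\,\alpha_1^B(iu)\,du$

    where $\alpha_1(iu)$ is the dipole polarizability at imaginary frequency $iu$; the higher-order coefficients $C_8$ and $C_{10}$ involve the quadrupole and octupole polarizabilities analogously, which is why accurate dynamic multipole polarizabilities translate into accurate vdW coefficients of all orders.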

  1. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
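
    A minimal model of the predicted pattern, assuming an ideal unbalanced Michelson with a fixed optical path difference $\Delta L$ and negligible dispersion:

      $I(\lambda) \approx \frac{I_0(\lambda)}{2}\left[1 + \cos\left(\frac{2\pi\,\Delta L}{\lambda}\right)\right]$

    Because the fringe positions depend only on $\Delta L$, comparing the measured fringe peaks against this prediction exposes per-pixel errors in the spectrometer's wavelength assignment.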

  2. Fast and accurate edge orientation processing during object manipulation

    PubMed Central

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  3. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  4. Automatic Fault Characterization via Abnormality-Enhanced Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  5. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction from a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs of 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies and across a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
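
    At its core the quantity being estimated is simple, which makes the method's contribution, removing the subjectivity from exposure and thresholding, easier to see. A toy sketch follows; the fixed classification threshold below is an illustrative assumption, not the paper's algorithm, which corrects the raw image for canopy scattering before classification.

      // Hedged sketch: gap fraction as the sky-pixel fraction of a classified
      // canopy image. The paper's scattering correction and sky reconstruction
      // are more involved; the threshold here is an assumption.
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      double gapFraction(const std::vector<unsigned char>& img,
                         unsigned char skyThreshold) {
          std::size_t sky = 0;
          for (unsigned char px : img)
              if (px >= skyThreshold) ++sky;   // pixel classified as visible sky
          return static_cast<double>(sky) / static_cast<double>(img.size());
      }

      int main() {
          // Toy 2x3 "image": bright pixels are sky, dark pixels are canopy.
          std::vector<unsigned char> img = {10, 250, 240, 30, 255, 20};
          std::printf("gap fraction = %.3f\n", gapFraction(img, 200));
          return 0;
      }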

  6. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag, an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag-of-fragments", a vector that counts the number of occurrences of each fragment, and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine of the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions only of parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than the three other filter methods: SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.
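
    A compact sketch of the bag-of-fragments representation described above. The fragment-assignment step, matching each backbone segment to its nearest library fragment, is assumed to have produced integer labels already, and cosine similarity is used as one reasonable vector-similarity choice; neither detail is taken from the paper.

      // Hedged sketch of FragBag's vector representation and comparison.
      // fragmentLabels[i] is the library fragment assigned to segment i
      // (the nearest-fragment assignment itself is elided here).
      #include <cmath>
      #include <cstddef>
      #include <vector>

      std::vector<double> bagOfFragments(const std::vector<int>& fragmentLabels,
                                         std::size_t librarySize) {
          std::vector<double> counts(librarySize, 0.0);
          for (int label : fragmentLabels)
              counts[label] += 1.0;            // one count per backbone segment
          return counts;
      }

      double cosineSimilarity(const std::vector<double>& a,
                              const std::vector<double>& b) {
          double dot = 0.0, na = 0.0, nb = 0.0;
          for (std::size_t i = 0; i < a.size(); ++i) {
              dot += a[i] * b[i];
              na  += a[i] * a[i];
              nb  += b[i] * b[i];
          }
          return dot / (std::sqrt(na) * std::sqrt(nb));
      }

    Because each structure reduces to a sparse count vector, the same vectors can back an inverted index keyed by fragment identity, which is what makes whole-PDB search fast.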

  7. Toward an accurate taxonomic interpretation of Carex fossil fruits (Cyperaceae): a case study in section Phacocystis in the Western Palearctic.

    PubMed

    Jiménez-Mejías, Pedro; Martinetto, Edoardo

    2013-08-01

    Despite growing interest in the systematics and evolution of the hyperdiverse genus Carex, few studies have focused on its evolution using an absolute time framework. This is partly due to the limited knowledge of the fossil record. However, Carex fruits are not rare in certain sediments. We analyzed carpological features of modern materials from Carex sect. Phacocystis to characterize the fossil record taxonomically. We studied 374 achenes from modern materials (18 extant species), as well as representatives from related groups, to establish the main traits within and among species. We also studied 99 achenes from sediments of living populations to assess their modification process after decay. Additionally, we characterized 145 fossil achenes from 10 different locations (from 4-0.02 mya), whose taxonomic assignment we discuss. Five main characters were identified for establishing morphological groups of species (epidermis morphology, achene-utricle attachment, achene base, style robustness, and pericarp section). Eleven additional characters allowed the discrimination at species level of most of the taxa. Fossil samples were assigned to two extant species and one unknown, possibly extinct species. The analysis of fruit characters allows the distinction of groups, even up to species level. Carpology is revealed as an accurate tool in Carex paleotaxonomy, which could allow the characterization of Carex fossil fruits and assign them to subgeneric or sectional categories, or to certain species. Our conclusions could be crucial for including a temporal framework in the study of the evolution of Carex.

  8. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    NASA Astrophysics Data System (ADS)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  9. White Light Used to Enable Enhanced Surface Topography, Geometry, and Wear Characterization of Oil-Free Bearings

    NASA Technical Reports Server (NTRS)

    Lucero, John M.

    2003-01-01

    A new optically based measuring capability that characterizes surface topography, geometry, and wear has been employed by NASA Glenn Research Center's Tribology and Surface Science Branch. To characterize complex parts in more detail, we are using a three-dimensional surface structure analyzer, the NewView5000, manufactured by Zygo Corporation (Middlefield, CT). This system provides graphical images and high-resolution numerical analyses to accurately characterize surfaces. Because of the inherent complexity of the various analyzed assemblies, the machine has been pushed to its limits. For example, special hardware fixtures and measuring techniques were developed specifically to characterize Oil-Free thrust bearings. We performed a more detailed wear analysis using scanning white light interferometry to image and measure the bearing structure and topography, enabling a further understanding of bearing failure causes.

  10. Improvements to III-nitride light-emitting diodes through characterization and material growth

    NASA Astrophysics Data System (ADS)

    Getty, Amorette Rose Klug

    A variety of experiments were conducted to improve, or to aid the improvement of, the efficiency of III-nitride light-emitting diodes (LEDs), which are a critical area of research for multiple applications, including high-efficiency solid state lighting. To enhance the light extraction in ultraviolet LEDs grown on SiC substrates, a distributed Bragg reflector (DBR) optimized for operation in the range from 250 to 280 nm has been developed using MBE growth techniques. The best devices had a peak reflectivity of 80% with 19.5 periods, which is acceptable for the intended application. DBR surfaces were sufficiently smooth for subsequent epitaxy of the LED device. During the course of this work, pros and cons of AlGaN growth techniques, including analog versus digital alloying, were examined. This work highlighted a need for more accurate values of the refractive index of high-Al-content AlxGa1-xN in the UV wavelength range. We present refractive index results for a wide variety of materials pertinent to the fabrication of optical III-nitride devices. Characterization was done using Variable-Angle Spectroscopic Ellipsometry. The three binary nitrides, and all three ternaries, have been characterized to a greater or lesser extent depending on material compositions available. Semi-transparent p-contact materials and other thin metals for reflecting contacts have been examined to allow optimization of deposition conditions and to allow highly accurate modeling of the behavior of light within these devices. Standard substrate materials have also been characterized for completeness and as an indicator of the accuracy of our modeling technique. We have demonstrated a new technique for estimating the internal quantum efficiency (IQE) of nitride light-emitting diodes. This method is advantageous over the standard low-temperature photoluminescence-based method of estimating IQE, as the new method is conducted under the same conditions as normal device operation. We have developed

  11. Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources

    DOEpatents

    Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA

    2007-03-13

    A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
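
    In outline, with the electromagnetic-sensor excitation signal x(t) and the measured acoustic output y(t), a transfer function can be estimated in the standard cross-spectral way (a textbook formulation, not necessarily the patent's exact procedure):

      $H(\omega) = \frac{S_{xy}(\omega)}{S_{xx}(\omega)}$

    where $S_{xx}$ and $S_{xy}$ are the auto- and cross-power spectra. Synthesis then amounts to applying H to a new excitation, and cancellation to driving a secondary source with the phase-inverted prediction.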

  12. Nonlinear characterization of elasticity using quantitative optical coherence elastography.

    PubMed

    Qiu, Yi; Zaki, Farzana R; Chandra, Namas; Chester, Shawn A; Liu, Xuan

    2016-11-01

    Optical coherence elastography (OCE) has been used to perform mechanical characterization on biological tissue at the microscopic scale. In this work, we used quantitative optical coherence elastography (qOCE), a novel technology we recently developed, to study the nonlinear elastic behavior of biological tissue. The qOCE system had a fiber-optic probe to exert a compressive force to deform tissue under the tip of the probe. Using the space-division multiplexed optical coherence tomography (OCT) signal detected by a spectral domain OCT engine, we were able to simultaneously quantify the probe deformation that was proportional to the force applied, and to quantify the tissue deformation. In other words, our qOCE system allowed us to establish the relationship between mechanical stimulus and tissue response to characterize the stiffness of biological tissue. Most biological tissues have nonlinear elastic behavior, and the apparent stress-strain relationship characterized by our qOCE system was nonlinear over an extended range of strain, for a tissue-mimicking phantom as well as biological tissues. Our experimental results suggested that the quantification of force in OCE was critical for accurate characterization of tissue mechanical properties, and that the qOCE technique was capable of differentiating biological tissues based on tissue elasticity, which is generally nonlinear.

  13. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190

  14. Characterization of Visual Scanning Patterns in Air Traffic Control.

    PubMed

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.

  15. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    NASA Astrophysics Data System (ADS)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is accurate to the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use the inputs from the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimation than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimation. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed and memory required will be evaluated in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.
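
    A one-dimensional sketch of the Kalman measurement update at the heart of both the on-board filter and an adapted OGA version. The real problem is multi-axis with star-field measurements; all quantities below are illustrative assumptions.

      // Hedged 1-D sketch of a Kalman measurement update for an attitude error.
      // The actual Gaia OGA filter is multi-dimensional; values are illustrative.
      #include <cstdio>

      int main() {
          double x = 0.0;                       // attitude-error estimate [arcsec]
          double P = 25.0;                      // estimate variance [arcsec^2]
          const double R = 4.0;                 // measurement noise variance [arcsec^2]
          const double z[] = {3.1, 2.6, 2.9, 3.3};  // attitude-star residuals [arcsec]

          for (double meas : z) {
              double K = P / (P + R);           // Kalman gain
              x += K * (meas - x);              // state update toward the measurement
              P *= (1.0 - K);                   // covariance shrinks with each update
              std::printf("x = %.3f arcsec, P = %.3f\n", x, P);
          }
          return 0;
      }

    A batch least squares variant would instead fit all residuals at once, trading memory and latency for a potentially more accurate smoothed solution, which is the comparison the paper sets up.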

  16. Using Virtual Testing for Characterization of Composite Materials

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph

    Composite materials are finally seeing use in structural systems applications hitherto reserved for metals: airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. They have high strength-to-weight ratios, are durable and resistant to environmental effects, have high impact strength, and can be manufactured in a variety of shapes. Generalized constitutive models are being developed to accurately model composite systems so they can be used in implicit and explicit finite element analysis. These models require extensive characterization of the composite material as input. The particular constitutive model of interest for this research is a three-dimensional orthotropic elasto-plastic composite material model that requires a total of 12 experimental stress-strain curves, yield stresses, and Young's Modulus and Poisson's ratio in the material directions as input. Sometimes it is not possible to carry out reliable experimental tests needed to characterize the composite material. One solution is using virtual testing to fill the gaps in available experimental data. A Virtual Testing Software System (VTSS) has been developed to address the need for a less restrictive method to characterize a three-dimensional orthotropic composite material. The system takes in the material properties of the constituents and completes all 12 of the necessary characterization tests using finite element (FE) models. Verification and validation test cases demonstrate the capabilities of the VTSS.

  17. Optical characterization of tissue mimicking phantoms by a vertical double integrating sphere system

    NASA Astrophysics Data System (ADS)

    Han, Yilin; Jia, Qiumin; Shen, Shuwei; Liu, Guangli; Guo, Yuwei; Zhou, Ximing; Chu, Jiaru; Zhao, Gang; Dong, Erbao; Allen, David W.; Lemaillet, Paul; Xu, Ronald

    2016-03-01

    Accurate characterization of absorption and scattering properties for biologic tissue and tissue-simulating materials enables 3D printing of traceable tissue-simulating phantoms for medical spectral device calibration and standardized medical optical imaging. Conventional double integrating sphere systems have several limitations and are suboptimal for optical characterization of the liquid and soft materials used in 3D printing. We propose a vertical double integrating sphere system and the associated reconstruction algorithms for optical characterization of phantom materials that simulate different human tissue components. The system characterizes absorption and scattering properties of liquid and solid phantom materials in an operating wavelength range from 400 nm to 1100 nm. Scattering and absorption properties of the phantoms are adjusted by adding titanium dioxide powder and India ink, respectively. Different material compositions are added to the phantoms and characterized by the vertical double integrating sphere system in order to simulate the human tissue properties. Our test results suggest that the vertical integrating sphere system is able to characterize the optical properties of tissue-simulating phantoms without precipitation effects in the liquid samples or wrinkling effects in the soft phantoms during the optical measurement.

  18. Accurate prediction of collapse temperature using optical coherence tomography-based freeze-drying microscopy.

    PubMed

    Greco, Kristyn; Mujat, Mircea; Galbally-Kinney, Kristin L; Hammer, Daniel X; Ferguson, R Daniel; Iftimia, Nicusor; Mulhall, Phillip; Sharma, Puneet; Kessler, William J; Pikal, Michael J

    2013-06-01

    The objective of this study was to assess the feasibility of developing and applying a laboratory tool that can provide three-dimensional product structural information during freeze-drying and which can accurately characterize the collapse temperature (Tc) of pharmaceutical formulations designed for freeze-drying. A single-vial freeze dryer coupled with optical coherence tomography freeze-drying microscopy (OCT-FDM) was developed to investigate the structure and Tc of formulations in pharmaceutically relevant product containers (i.e., freeze-drying in vials). OCT-FDM was used to measure the Tc and eutectic melt of three formulations in freeze-drying vials. The Tc as measured by OCT-FDM was found to be predictive of freeze-drying with a batch of vials in a conventional laboratory freeze dryer. The freeze-drying cycles developed using OCT-FDM data, as compared with traditional light transmission freeze-drying microscopy (LT-FDM), resulted in a significant reduction in primary drying time, which could result in a substantial reduction of manufacturing costs while maintaining product quality. OCT-FDM provides quantitative data to justify freeze-drying at temperatures higher than the Tc measured by LT-FDM and provides a reliable upper limit for setting a product temperature in primary drying. Copyright © 2013 Wiley Periodicals, Inc.

  19. Accurate sub-millimetre rest frequencies for HOCO+ and DOCO+ ions

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Lattanzi, V.; Laas, J.; Spezzano, S.; Giuliano, B. M.; Prudenzano, D.; Endres, C.; Sipilä, O.; Caselli, P.

    2017-06-01

    Context. HOCO+ is a polar molecule that represents a useful proxy for its parent molecule CO2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO2 and its deuterated variant, DOCO+, still lack an accurate spectroscopic characterisation. Aims: The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO+ and DOCO+ well into the sub-millimetre region. Methods: Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO+ and DOCO+ have been investigated in the 213-967 GHz frequency range; 94 new rotational transitions have been detected. Additionally, 46 line positions taken from the literature have been accurately remeasured. Results: The newly measured lines have significantly enlarged the available data sets for HOCO+ and DOCO+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis shows that all HOCO+ lines with Ka ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v5 = 1 vibrationally excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. Conclusions: The improved sets of spectroscopic parameters provide enhanced lists of very accurate sub-millimetre rest frequencies of HOCO+ and DOCO+ for astrophysical applications. These new data challenge a recent tentative identification of DOCO+ towards a pre-stellar core. Supplementary tables are only available at the CDS via anonymous ftp to http

  20. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
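
    For reference, the equation being solved has the standard form (generic notation, not necessarily the paper's):

      $\frac{dp(\mathbf{x},t)}{dt} = \sum_{k}\left[a_k(\mathbf{x}-\mathbf{s}_k)\,p(\mathbf{x}-\mathbf{s}_k,t) - a_k(\mathbf{x})\,p(\mathbf{x},t)\right]$

    where $p(\mathbf{x},t)$ is the probability of copy-number state $\mathbf{x}$, $a_k$ is the propensity of reaction $k$ and $\mathbf{s}_k$ its stoichiometry vector. The multi-finite buffer construction bounds the set of reachable states $\mathbf{x}$, so this otherwise exploding linear system stays finite with a quantifiable truncation error.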

  1. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  2. Accurate chemical master equation solution using multi-finite buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  3. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  4. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  5. Initial characterization of unidentified Armillaria isolate from Serbia using LSU-IGS1 and TEF-1a genes

    Treesearch

    N. Keca; N. B. Klopfenstein; M.-S. Kim; H. Solheim; S. Woodward

    2014-01-01

    Armillaria species have a global distribution and play variable ecological roles, including causing root disease of diverse forest, ornamental and horticultural trees. Accurate identification of Armillaria species is critical to understand their distribution and ecological roles. This work focused on characterizing an unidentified Armillaria isolate from a Serbian...

  6. Stable and Spectrally Accurate Schemes for the Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Jun; Liu, Jie

    2011-01-01

    In this paper, we present an accurate, efficient and stable numerical method for the incompressible Navier-Stokes equations (NSEs). The method is based on (1) an equivalent pressure Poisson equation formulation of the NSE with proper pressure boundary conditions, which facilitates the design of high-order and stable numerical methods, and (2) the Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T), which is very stable, efficient, and of arbitrary order in time. Numerical tests with known exact solutions in three dimensions show that the new method is spectrally accurate in time, and a numerical order of convergence of 9 was observed. Two-dimensional computational results of flow past a cylinder and flow in a bifurcated tube are also reported.

  7. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than compiler-managed loop-level parallelism implemented with OpenMP.
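
    A minimal sketch of the two strategies applied to one smoothing stage of a Canny pipeline. For brevity both are expressed with OpenMP here; the paper's programmer-managed decomposition on the Tile64 and its data-management variants are not reproduced.

      // Hedged sketch contrasting loop-level parallelism and domain
      // decomposition on a 3x3 binomial smoothing stage (weights sum to 1).
      #include <omp.h>
      #include <vector>

      // Shared per-row kernel used by both strategies.
      static void smoothRow(const std::vector<float>& in, std::vector<float>& out,
                            int w, int y) {
          for (int x = 1; x < w - 1; ++x)
              out[y * w + x] =
                  0.25f   *  in[y * w + x] +
                  0.125f  * (in[y * w + x - 1] + in[y * w + x + 1] +
                             in[(y - 1) * w + x] + in[(y + 1) * w + x]) +
                  0.0625f * (in[(y - 1) * w + x - 1] + in[(y - 1) * w + x + 1] +
                             in[(y + 1) * w + x - 1] + in[(y + 1) * w + x + 1]);
      }

      // Strategy 1, loop-level parallelism: OpenMP schedules the row loop.
      void smoothLoopLevel(const std::vector<float>& in, std::vector<float>& out,
                           int w, int h) {
          #pragma omp parallel for
          for (int y = 1; y < h - 1; ++y)
              smoothRow(in, out, w, y);
      }

      // Strategy 2, domain decomposition: each thread owns one horizontal
      // strip (subimage) and processes it end to end.
      void smoothDomainDecomp(const std::vector<float>& in, std::vector<float>& out,
                              int w, int h) {
          #pragma omp parallel
          {
              int t = omp_get_thread_num(), n = omp_get_num_threads();
              int y0 = 1 + (h - 2) * t / n;       // this thread's first row
              int y1 = 1 + (h - 2) * (t + 1) / n; // one past its last row
              for (int y = y0; y < y1; ++y)
                  smoothRow(in, out, w, y);
          }
      }

    The decomposition variant tends to win on tiled architectures because each tile keeps a strip-sized working set local and incurs no per-loop scheduling overhead.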

  8. Accurate HLA type inference using a weighted similarity graph.

    PubMed

    Xie, Minzhu; Li, Jing; Jiang, Tao

    2010-12-14

    The human leukocyte antigen system (HLA) contains many highly variable genes. HLA genes play an important role in the human immune system, and HLA gene matching is crucial for the success of human organ transplantations. Numerous studies have demonstrated that variation in HLA genes is associated with many autoimmune, inflammatory and infectious diseases. However, typing HLA genes by serology or PCR is time-consuming and expensive, which limits large-scale studies involving HLA genes. Since it is much easier and cheaper to obtain single nucleotide polymorphism (SNP) genotype data, accurate computational algorithms to infer HLA gene types from SNP genotype data are needed. To infer HLA types from SNP genotypes, the first step is to infer SNP haplotypes from genotypes. However, for the same SNP genotype data set, the haplotype configurations inferred by different methods are usually inconsistent, and it is often difficult to decide which one is true. In this paper, we design an accurate HLA gene type inference algorithm by utilizing SNP genotype data from pedigrees, known HLA gene types of some individuals and the relationship between inferred SNP haplotypes and HLA gene types. Given a set of haplotypes inferred from the genotypes of a population consisting of many pedigrees, the algorithm first constructs a weighted similarity graph based on a new haplotype similarity measure and derives constraint edges from known HLA gene types. Based on the principle that different HLA gene alleles should have different background haplotypes, the algorithm searches for an optimal labeling of all the haplotypes with unknown HLA gene types such that the total weight among the same HLA gene types is maximized. To deal with ambiguous haplotype solutions, we use a genetic algorithm to select haplotype configurations that tend to maximize the same optimization criterion. Our experiments on a previously typed subset of the HapMap data show that the algorithm is highly accurate
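
    A greatly simplified sketch of labeling over the weighted similarity graph: each unlabeled haplotype is scored against the labels of its graph neighbors. The paper optimizes a global objective, with a genetic algorithm for ambiguous haplotypes; this greedy local pass only illustrates the data structure, and all names below are assumptions.

      // Hedged, simplified sketch: pick the HLA label whose already-labeled
      // neighbors a haplotype node is most similar to in total edge weight.
      // (Not the paper's global optimization; constraint edges are omitted.)
      #include <map>
      #include <string>
      #include <vector>

      struct Edge { int a, b; double weight; };   // haplotype similarity edge

      std::string greedyLabel(int node, const std::vector<Edge>& edges,
                              const std::map<int, std::string>& knownLabels) {
          std::map<std::string, double> score;    // label -> summed edge weight
          for (const Edge& e : edges) {
              int other = (e.a == node) ? e.b : (e.b == node) ? e.a : -1;
              if (other < 0) continue;             // edge does not touch this node
              auto it = knownLabels.find(other);
              if (it != knownLabels.end()) score[it->second] += e.weight;
          }
          std::string best;
          double bestScore = -1.0;
          for (const auto& [label, s] : score)
              if (s > bestScore) { bestScore = s; best = label; }
          return best;                             // empty if no labeled neighbor
      }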

  9. Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.

    2012-01-01

    The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
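
    The Monitor Capacitor method referenced here has a standard energy interpretation (given as background, not as this paper's derivation): with a monitor capacitor $C_m$ in series with the actuator, the measured voltage gives the transferred charge $Q = C_m V_m$, and the charge-voltage Lissajous loop traced over one cycle of a drive at frequency $f$ encloses the energy consumed per cycle, so

      $P = f \oint V\,dQ$

    The loop area integrates over the fast micro-discharge spikes, which is why this method is less sensitive to the noise and dynamic-range problems that afflict direct $v(t)\,i(t)$ integration with a shunt resistor.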

  10. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  11. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USDA-ARS?s Scientific Manuscript database

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  12. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version that assumes independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
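
    For context, a nonretarded sphere-plane van der Waals interaction has the inverse-square form (a standard result; the paper's closed-form estimator itself is not reproduced in the abstract):

      $F(D) = -\frac{A\,R}{6\,D^2}$

    where $A$ is the Hamaker constant, $R$ the probe radius and $D$ the separation. Extracting $A$ from measured $(D, F)$ pairs is an errors-in-variables fit: treating the separations as error-free, as the standard least squares approach does, biases the estimate.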

  13. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  14. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    DOE PAGES

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; ...

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  15. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory.

    PubMed

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  16. Current status of accurate prognostic awareness in advanced/terminally ill cancer patients: Systematic review and meta-regression analysis.

    PubMed

    Chen, Chen Hsiu; Kuo, Su Ching; Tang, Siew Tzuh

    2017-05-01

    No systematic meta-analysis is available on the prevalence of cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. To examine the prevalence of advanced/terminal cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. Systematic review and meta-analysis. MEDLINE, Embase, The Cochrane Library, CINAHL, and PsycINFO were systematically searched on accurate prognostic awareness in adult patients with advanced/terminal cancer (1990-2014). Pooled prevalences were calculated for accurate prognostic awareness by a random-effects model. Differences in weighted estimates of accurate prognostic awareness were compared by meta-regression. In total, 34 articles were retrieved for systematic review and meta-analysis. At best, only about half of advanced/terminal cancer patients accurately understood their prognosis (49.1%; 95% confidence interval: 42.7%-55.5%; range: 5.4%-85.7%). Accurate prognostic awareness was independent of service received and publication year, but highest in Australia, followed by East Asia, North America, and southern Europe and the United Kingdom (67.7%, 60.7%, 52.8%, and 36.0%, respectively; p = 0.019). Accurate prognostic awareness was higher by clinician assessment than by patient report (63.2% vs 44.5%, p < 0.001). Less than half of advanced/terminal cancer patients accurately understood their prognosis, with significant variations by region and assessment method. Healthcare professionals should thoroughly assess advanced/terminal cancer patients' preferences for prognostic information and engage them in prognostic discussion early in the cancer trajectory, thus facilitating their accurate prognostic awareness and the quality of end-of-life care decision-making.
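
    For context, pooled prevalence under a generic random-effects model (the exact estimator used is an assumption here; the abstract does not specify it) weights each study's prevalence p_i by the inverse of its within-study variance v_i plus the between-study variance \tau^{2}:

      \hat{p} = \frac{\sum_i w_i \, p_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \tau^{2}}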

  17. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
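
    One of the common mass bias correction models such reviews discuss is the exponential (Russell) law; stated here as general background rather than as this paper's own derivation, it corrects a measured ratio R_obs of isotopes with masses m_i and m_j as

      R_{\mathrm{true}} = R_{\mathrm{obs}} \left( \frac{m_i}{m_j} \right)^{f}, \qquad
      f = \frac{\ln\left( R^{\mathrm{ref}}_{\mathrm{true}} / R^{\mathrm{ref}}_{\mathrm{obs}} \right)}{\ln\left( m^{\mathrm{ref}}_i / m^{\mathrm{ref}}_j \right)}

    with the mass bias factor f determined from a reference isotope pair of known ratio measured under the same conditions.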

  18. Methodology for Formulating Diesel Surrogate Fuels with Accurate Compositional, Ignition-Quality, and Volatility Characteristics

    DOE PAGES

    Mueller, Charles J.; Cannella, William J.; Bruno, Thomas J.; ...

    2012-05-22

    In this study, a novel approach was developed to formulate surrogate fuels having characteristics that are representative of diesel fuels produced from real-world refinery streams. Because diesel fuels typically consist of hundreds of compounds, it is difficult to conclusively determine the effects of fuel composition on combustion properties. Surrogate fuels, being simpler representations of these practical fuels, are of interest because they can provide a better understanding of fundamental fuel-composition and property effects on combustion and emissions-formation processes in internal-combustion engines. In addition, the application of surrogate fuels in numerical simulations with accurate vaporization, mixing, and combustion models could revolutionize future engine designs by enabling computational optimization for evolving real fuels. Dependable computational design would not only improve engine function, it would do so at significant cost savings relative to current optimization strategies that rely on physical testing of hardware prototypes. The approach in this study utilized the state-of-the-art techniques of 13C and 1H nuclear magnetic resonance spectroscopy and the advanced distillation curve to characterize fuel composition and volatility, respectively. The ignition quality was quantified by the derived cetane number. Two well-characterized, ultra-low-sulfur #2 diesel reference fuels produced from refinery streams were used as target fuels: a 2007 emissions certification fuel and a Coordinating Research Council (CRC) Fuels for Advanced Combustion Engines (FACE) diesel fuel. A surrogate was created for each target fuel by blending eight pure compounds. The known carbon bond types within the pure compounds, as well as models for the ignition qualities and volatilities of their mixtures, were used in a multiproperty regression algorithm to determine optimal surrogate formulations. The predicted and measured surrogate-fuel properties were ...

  19. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  20. FRACTURING FLUID CHARACTERIZATION FACILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subhash Shah

    2000-08-01

    Hydraulic fracturing technology has been successfully applied for well stimulation of low and high permeability reservoirs for numerous years. Treatment optimization and improved economics have always been the key to the success and it is more so when the reservoirs under consideration are marginal. Fluids are widely used for the stimulation of wells. The Fracturing Fluid Characterization Facility (FFCF) has been established to provide the accurate prediction of the behavior of complex fracturing fluids under downhole conditions. The primary focus of the facility is to provide valuable insight into the various mechanisms that govern the flow of fracturing fluids and slurries through hydraulically created fractures. During the time between September 30, 1992, and March 31, 2000, the research efforts were devoted to the areas of fluid rheology, proppant transport, proppant flowback, dynamic fluid loss, perforation pressure losses, and frictional pressure losses. In this regard, a unique above-the-ground fracture simulator was designed and constructed at the FFCF, labeled ''The High Pressure Simulator'' (HPS). The FFCF is now available to industry for characterizing and understanding the behavior of complex fluid systems. To better reflect and encompass the broad spectrum of the petroleum industry, the FFCF now operates under a new name of ''The Well Construction Technology Center'' (WCTC). This report documents the summary of the activities performed during 1992-2000 at the FFCF.

  1. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  2. Accelerated viscoelastic characterization of T300-5208 graphite-epoxy laminates

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1985-01-01

    A viscoelastic response scheme for the accelerated characterization of polymer-based composite laminates is applied to T300/5208 graphite/epoxy. The response of unidirectional specimens is modeled. The transient component of the viscoelastic creep compliance is assumed to follow a power law approximation. A recursive relationship is developed, based upon the Schapery single-integral equation, which allows approximation of a continuous time-varying uniaxial load using discrete steps in stress. The viscoelastic response of T300/5208 to transverse normal and shear stresses is determined using 90 deg and 10 deg off-axis tensile specimens. In each case the seven viscoelastic material parameters required in the analysis are determined experimentally using short-term creep and creep recovery tests. It is shown that an accurate measure of the power law exponent is crucial for accurate long-term prediction. A short-term test cycle selection procedure is proposed, which should provide useful guidelines for the evaluation of other viscoelastic materials.

  3. The need for accurate long-term measurements of water vapor in the upper troposphere and lower stratosphere with global coverage.

    PubMed

    Müller, Rolf; Kunz, Anne; Hurst, Dale F; Rolf, Christian; Krämer, Martina; Riese, Martin

    2016-02-01

    Water vapor is the most important greenhouse gas in the atmosphere, although changes in carbon dioxide constitute the "control knob" for surface temperatures. While the latter fact is well recognized, resulting in extensive space-borne and ground-based measurement programs for carbon dioxide as detailed in the studies by Keeling et al. (1996), Kuze et al. (2009), and Liu et al. (2014), the need for an accurate characterization of the long-term changes in upper tropospheric and lower stratospheric (UTLS) water vapor has not yet resulted in sufficiently extensive long-term international measurement programs (although first steps have been taken). Here, we argue for the implementation of a long-term balloon-borne measurement program for UTLS water vapor covering the entire globe that will likely have to be sustained for hundreds of years.

  4. Characterizing the Background Corona with SDO/AIA

    NASA Technical Reports Server (NTRS)

    Napier, Kate; Alexander, Caroline; Winebarger, Amy

    2014-01-01

    Characterizing the nature of the solar coronal background would enable scientists to more accurately determine plasma parameters, and may lead to a better understanding of the coronal heating problem. Because scientists study the 3D structure of the Sun in 2D, any line-of-sight includes both foreground and background material, and thus, the issue of background subtraction arises. By investigating the intensity values in and around an active region, using multiple wavelengths collected from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) over an eight-hour period, this project aims to characterize the background as smooth or structured. Different methods were employed to measure the true coronal background and create minimum intensity images. These were then investigated for the presence of structure. The background images created were found to contain long-lived structures, including coronal loops, that were still present in all of the wavelengths, 131, 171, 193, 211, and 335 Å. The intensity profiles across the active region indicate that the background is much more structured than previously thought.
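
    The minimum-intensity background images described above admit a very simple implementation; the sketch below is a generic illustration (the array layout and names are assumptions, not details of this project) that keeps the darkest value each pixel attains over a time series of co-aligned frames:

      #include <float.h>

      /* frames: nt co-aligned images of ny*nx pixels, frame-major layout. */
      void min_intensity_image(const float *frames, int nt, int ny, int nx,
                               float *background)
      {
          long npix = (long)ny * nx;
          for (long p = 0; p < npix; ++p) {
              float m = FLT_MAX;
              for (int t = 0; t < nt; ++t) {
                  float v = frames[(long)t * npix + p];
                  if (v < m)
                      m = v;          /* darkest value seen at this pixel */
              }
              background[p] = m;
          }
      }

    Because transient bright features (loops, brightenings) rarely occupy the same pixel in every frame, the per-pixel minimum approximates the slowly varying background; long-lived structure that survives this operation is exactly what the study reports.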

  5. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  6. Device accurately measures and records low gas-flow rates

    NASA Technical Reports Server (NTRS)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  7. Toward more accurate loss tangent measurements in reentrant cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moyer, R. D.

    1980-05-01

    Karpova has described an absolute method for measurement of dielectric properties of a solid in a coaxial reentrant cavity. His cavity resonance equation yields very accurate results for dielectric constants. However, he presented only approximate expressions for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.

  8. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    DTIC Science & Technology

    2009-03-31

    AFRL-RV-HA-TR-2009-1055, "Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter" (Final Scientific Report, dates covered 02-08-2006 to 31-12-2008). Only fragments of the abstract survive extraction: "...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable..."

  9. Carbon Contamination During Ion Irradiation - Accurate Detection and Characterization of its Effect on Microstructure of Ferritic/Martensitic Steels

    DOE PAGES

    Wang, Jing; Toloczko, Mychailo B.; Kruska, Karen; ...

    2017-11-17

    Accelerator-based ion beam irradiation techniques have been used to study radiation effects in materials for decades. Although carbon contamination induced by ion beams in target materials is a well-known issue in some material systems, it has been neither fully characterized nor quantified for studies in ferritic/martensitic (F/M) steels that are candidate materials for applications such as core structural components in advanced nuclear reactors. It is an especially important issue for this class of material because of the strong effect of carbon level on precipitate formation. In this paper, the ability to quantify carbon contamination using three common techniques, namely time-of-flight secondary ion mass spectroscopy (ToF-SIMS), atom probe tomography (APT), and transmission electron microscopy (TEM), is compared. Their effectiveness and shortcomings in determining carbon contamination are presented and discussed. The corresponding microstructural changes related to carbon contamination in ion irradiated F/M steels are also presented and briefly discussed.

  10. A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Stamer, Torsten; Inutsuka, Shu-ichiro

    2018-06-01

    We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.

  11. Cell-accurate optical mapping across the entire developing heart.

    PubMed

    Weber, Michael; Scherf, Nico; Meyer, Alexander M; Panáková, Daniela; Kohl, Peter; Huisken, Jan

    2017-12-29

    Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snap-shot observations of either microscopic or macroscopic parameters to simultaneous and, ideally continuous, cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs.

  12. Cell-accurate optical mapping across the entire developing heart

    PubMed Central

    Meyer, Alexander M; Panáková, Daniela; Kohl, Peter

    2017-01-01

    Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snap-shot observations of either microscopic or macroscopic parameters to simultaneous and, ideally continuous, cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs. PMID:29286002

  13. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not possible before because the time-to-solution fell short by a factor of over 10 for completing one physics case in less than 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for non-linear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are also utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.
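
    Nested OpenMP parallelism, one of the techniques listed, can be illustrated with a minimal self-contained sketch (thread counts and loop bodies here are illustrative only; XGC's real kernels are far more involved):

      #include <omp.h>
      #include <stdio.h>

      int main(void)
      {
          omp_set_max_active_levels(2);            /* allow two nested parallel levels */

          #pragma omp parallel num_threads(4)      /* outer level, e.g. blocks of particles */
          {
              int outer = omp_get_thread_num();

              #pragma omp parallel num_threads(2)  /* inner level, e.g. grid-related work */
              {
                  printf("outer %d, inner %d\n", outer, omp_get_thread_num());
              }
          }
          return 0;
      }

    Compiled with OpenMP enabled (e.g. -fopenmp), this spawns 4 outer threads, each of which forks 2 inner threads, the basic mechanism for mapping hierarchical work onto a heterogeneous node.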

  14. Background Characterization for Thermal Ion Release Experiments with 224Ra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwong, H.; /Stanford U., Phys. Dept.; Rowson, P.

    The Enriched Xenon Observatory for neutrinoless double beta decay uses ¹³⁶Ba identification as a means for verifying the decay's occurrence in ¹³⁶Xe. A current challenge is the release of Ba ions from the Ba extraction probe, and one possible solution is to heat the probe to high temperatures to release the ions. The investigation of this method requires a characterization of the alpha decay background in our test apparatus, which uses a ²²⁸Th source that produces ²²⁴Ra daughters, the ionization energies of which are similar to those of Ba. For this purpose, we ran a background count with our apparatus maintained at a vacuum, and then three counts with the apparatus filled with Xe gas. We were able to match up our alpha spectrum in vacuum with the known decay scheme of ²²⁸Th, while the spectrum in xenon gas had too many unresolved ambiguities for an accurate characterization. We also found that the alpha decays occurred at a near-zero rate both in vacuum and in xenon gas, which indicates that the rate was determined by ²²⁸Th decays. With these background measurements, we can in the future make a more accurate measurement of the temperature dependency of the ratio of ions to neutral atoms released from the hot surface of the probe, which may lead to a successful method of Ba ion release.

  15. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  16. A feasibility study on porting the community land model onto accelerators using OpenACC

    DOE PAGES

    Wang, Dali; Wu, Wei; Winkler, Frank; ...

    2014-01-01

    As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), the Arctic Terrestrial Simulator (ATS), etc.) become more and more complicated, we face enormous challenges in porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; therefore, we have conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel out of CLM, applied this kernel in the actual CLM dataflow procedure, and investigated strategies for data parallelization and the benefit of the data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the OpenMP implementation using 16 threads. On multiple nodes, the MPI+OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for looking into the potential benefits of the “deep copy” capability and “routine” feature of the OpenACC standard. In conclusion, we believe that our experience with this environmental model, CLM, can benefit many other scientific research programs interested in porting their large-scale scientific codes onto high-end computers, empowered by hybrid computing architectures, using OpenACC.
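
    The OpenMP-versus-OpenACC comparison in the abstract can be made concrete with a toy kernel; the function names and arithmetic below are hypothetical stand-ins for the extracted CLM kernel, which is not reproduced in the abstract:

      /* CPU threading with OpenMP (compile with -fopenmp or equivalent). */
      void kernel_omp(int n, const float *in, float *out)
      {
          #pragma omp parallel for
          for (int i = 0; i < n; ++i)
              out[i] = 2.0f * in[i] + 1.0f;   /* placeholder arithmetic */
      }

      /* GPU offload with OpenACC; the data clauses make explicit the
         host-device movement whose cost the study investigates. */
      void kernel_acc(int n, const float *restrict in, float *restrict out)
      {
          #pragma acc parallel loop copyin(in[0:n]) copyout(out[0:n])
          for (int i = 0; i < n; ++i)
              out[i] = 2.0f * in[i] + 1.0f;
      }

    For a light kernel like this, the copyin/copyout traffic can easily dominate the GPU speedup, which is consistent with the single-node result reported above.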

  17. Using In-Service and Coaching to Increase Teachers' Accurate Use of Research-Based Strategies

    ERIC Educational Resources Information Center

    Kretlow, Allison G.; Cooke, Nancy L.; Wood, Charles L.

    2012-01-01

    Increasing the accurate use of research-based practices in classrooms is a critical issue. Professional development is one of the most practical ways to provide practicing teachers with training related to research-based practices. This study examined the effects of in-service plus follow-up coaching on first grade teachers' accurate delivery of…

  18. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, the crack-tip plastic zone defect region (rp), and yield strength (σys), all of which can be determined from load and deflection data. Polymer matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms ...
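
    For reference, the classical Griffith-type relations the abstract builds on (standard fracture mechanics, not results of this paper) connect the critical stress intensity factor to applied stress, crack length, and strain energy release rate:

      K_{Ic} = Y \, \sigma \sqrt{\pi a}, \qquad
      \mathcal{G}_{Ic} = \frac{K_{Ic}^{2}}{E} \quad \text{(plane stress)}

    The paper's argument is that the energy-based quantities on the right can be obtained directly by numerical integration of the load/deflection curve, making the empirically corrected K_{Ic} on the left redundant.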

  19. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-09-01

    Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s⁻¹ were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely the GA-SVR. A notable property of GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across repeated attempts on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. Comparison results show that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models rank as follows in ascending order: the mathematical regression model < ANN < GA-SVR. The stress-strain data outside experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as inferring work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
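
    For context, the ε-insensitive support vector regression problem that a GA-SVR scheme tunes (a textbook formulation, not specific to this work) is

      \min_{w,\,b,\,\xi,\,\xi^{*}} \;\; \tfrac{1}{2}\lVert w \rVert^{2} + C \sum_i (\xi_i + \xi_i^{*})
      \quad \text{s.t.} \quad
      y_i - w^{\top}\phi(x_i) - b \le \varepsilon + \xi_i, \;\;
      w^{\top}\phi(x_i) + b - y_i \le \varepsilon + \xi_i^{*}, \;\;
      \xi_i, \xi_i^{*} \ge 0

    with the genetic algorithm searching over hyperparameters such as C, ε, and the kernel width, which is what makes repeated runs with identical settings reproducible.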

  20. Nanoparticle surface characterization and clustering through concentration-dependent surface adsorption modeling.

    PubMed

    Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E

    2014-09-23

    Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.

  1. Methodological Guidelines for Accurate Detection of Viruses in Wild Plant Species.

    PubMed

    Lacroix, Christelle; Renner, Kurra; Cole, Ellen; Seabloom, Eric W; Borer, Elizabeth T; Malmstrom, Carolyn M

    2016-01-15

    Ecological understanding of disease risk, emergence, and dynamics and of the efficacy of control strategies relies heavily on efficient tools for microorganism identification and characterization. Misdetection, such as the misclassification of infected hosts as healthy, can strongly bias estimates of disease prevalence and lead to inaccurate conclusions. In natural plant ecosystems, interest in assessing microbial dynamics is increasing exponentially, but guidelines for detection of microorganisms in wild plants remain limited, particularly so for plant viruses. To address this gap, we explored issues and solutions associated with virus detection by serological and molecular methods in noncrop plant species as applied to the globally important Barley yellow dwarf virus PAV (Luteoviridae), which infects wild native plants as well as crops. With enzyme-linked immunosorbent assays (ELISA), we demonstrate how virus detection in a perennial wild plant species may be much greater in stems than in leaves, although leaves are most commonly sampled, and may also vary among tillers within an individual, thereby highlighting the importance of designing effective sampling strategies. With reverse transcription-PCR (RT-PCR), we demonstrate how inhibitors in tissues of perennial wild hosts can suppress virus detection but can be overcome with methods and products that improve isolation and amplification of nucleic acids. These examples demonstrate the paramount importance of testing and validating survey designs and virus detection methods for noncrop plant communities to ensure accurate ecological surveys and reliable assumptions about virus dynamics in wild hosts. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  2. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS to 0.1 per cent accuracy or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, "rescaled power spectra for initial conditions with massive neutrinos"; https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.

  3. Vision drives accurate approach behavior during prey capture in laboratory mice

    PubMed Central

    Hoy, Jennifer L.; Yavorska, Iryna; Wehr, Michael; Niell, Cristopher M.

    2016-01-01

    The ability to genetically identify and manipulate neural circuits in the mouse is rapidly advancing our understanding of visual processing in the mammalian brain [1,2]. However, studies investigating the circuitry that underlies complex ethologically-relevant visual behaviors in the mouse have been primarily restricted to fear responses [3–5]. Here, we show that a laboratory strain of mouse (Mus musculus, C57BL/6J) robustly pursues, captures and consumes live insect prey, and that vision is necessary for mice to perform the accurate orienting and approach behaviors leading to capture. Specifically, we differentially perturbed visual or auditory input in mice and determined that visual input is required for accurate approach, allowing maintenance of bearing to within 11 degrees of the target on average during pursuit. While mice were able to capture prey without vision, the accuracy of their approaches and capture rate dramatically declined. To better explore the contribution of vision to this behavior, we developed a simple assay that isolated visual cues and simplified analysis of the visually guided approach. Together, our results demonstrate that laboratory mice are capable of exhibiting dynamic and accurate visually-guided approach behaviors, and provide a means to estimate the visual features that drive behavior within an ethological context. PMID:27773567

  4. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
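
    The Uehling–Uhlenbeck equation augments the classical Boltzmann collision integral with quantum statistical factors; in one common notation (the conventions here are an assumption, and the paper's own may differ):

      \left( \frac{\partial f}{\partial t} \right)_{\mathrm{coll}}
      = \int \mathrm{d}^{3}p_{1} \, \mathrm{d}\Omega \; g \, \sigma(g, \Omega)
        \left[ f' f_{1}' (1 + \theta f)(1 + \theta f_{1})
             - f f_{1} (1 + \theta f')(1 + \theta f_{1}') \right]

    where θ > 0 gives Bose enhancement, θ < 0 gives Pauli blocking, and θ = 0 recovers the classical gas.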

  5. High Resolution Characterization of Engineered Nanomaterial Dispersions in Complex Media Using Tunable Resistive Pulse Sensing Technology

    PubMed Central

    2015-01-01

    In vitro toxicity assessment of engineered nanomaterials (ENM), the most common testing platform for ENM, requires prior ENM dispersion, stabilization, and characterization in cell culture media. Dispersion inefficiencies and active aggregation of particles often result in polydisperse and multimodal particle size distributions. Accurate characterization of important properties of such polydisperse distributions (size distribution, effective density, charge, mobility, aggregation kinetics, etc.) is critical for understanding differences in the effective dose delivered to cells as a function of time and dispersion conditions, as well as for nano–bio interactions. Here we have investigated the utility of tunable nanopore resistive pulse sensing (TRPS) technology for characterization of four industry relevant ENMs (oxidized single-walled carbon nanohorns, carbon black, cerium oxide and nickel nanoparticles) in cell culture media containing serum. Harvard dispersion and dosimetry platform was used for preparing ENM dispersions and estimating delivered dose to cells based on dispersion characterization input from dynamic light scattering (DLS) and TRPS. The slopes of cell death vs administered and delivered ENM dose were then derived and compared. We investigated the impact of serum protein content, ENM concentration, and cell medium on the size distributions. The TRPS technology offers higher resolution and sensitivity compared to DLS and unique insights into ENM size distribution and concentration, as well as particle behavior and morphology in complex media. The in vitro dose–response slopes changed significantly for certain nanomaterials when delivered dose to cells was taken into consideration, highlighting the importance of accurate dispersion and dosimetry in in vitro nanotoxicology. PMID:25093451

  6. Validation of reference genes aiming accurate normalization of qRT-PCR data in Dendrocalamus latiflorus Munro.

    PubMed

    Liu, Mingying; Jiang, Jing; Han, Xiaojiao; Qiao, Guirong; Zhuo, Renying

    2014-01-01

    Dendrocalamus latiflorus Munro is widely distributed in subtropical areas and plays a vital role as a valuable natural resource. Transcriptome sequencing for D. latiflorus Munro has been performed, and numerous genes, especially those predicted to be unique to D. latiflorus Munro, were revealed. qRT-PCR has become a feasible approach to uncover gene expression profiling, and the accuracy and reliability of the results obtained depend upon the proper selection of stable reference genes for accurate normalization. Therefore, a set of suitable internal controls should be validated for D. latiflorus Munro. In this report, twelve candidate reference genes were selected and the assessment of gene expression stability was performed in ten tissue samples and four leaf samples from seedlings and anther-regenerated plants of different ploidy. The PCR amplification efficiency was estimated, and the candidate genes were ranked according to their expression stability using three software packages: geNorm, NormFinder and BestKeeper. GAPDH and EF1α were characterized as the most stable genes among different tissues or in all the sample pools, while CYP showed low expression stability. RPL3 had the optimal performance among the four leaf samples. The application of verified reference genes was illustrated by analyzing ferritin and laccase expression profiles among different experimental sets. The analysis revealed the biological variation in ferritin and laccase transcript expression among the tissues studied and the individual plants. geNorm, NormFinder, and BestKeeper analyses recommended different suitable reference gene(s) for normalization according to the experimental sets. GAPDH and EF1α had the highest expression stability across different tissues, and RPL3 for the other sample set. This study emphasizes the importance of validating superior reference genes for qRT-PCR analysis to accurately normalize gene expression of D. latiflorus Munro.

  7. Colorimetric characterization models based on colorimetric characteristics evaluation for active matrix organic light emitting diode panels.

    PubMed

    Gong, Rui; Xu, Haisong; Tong, Qingfen

    2012-10-20

    The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on the colorimetric characteristics evaluation of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely, the polynomial compensation model (PC model) considering channel interactions was proposed for AMOLED panels. In this model, polynomial expressions are employed to calculate the relationship between the prediction errors of XYZ tristimulus values and the digital inputs to compensate the XYZ prediction errors of the conventional piecewise linear interpolation assuming the variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays and for the professional liquid crystal display monitor as well.
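
    Schematically, and inferring the model's structure from the abstract alone (the polynomial orders and fitting details below are assumptions, not taken from the paper), the polynomial compensation model predicts tristimulus values as

      \mathbf{t}_{\mathrm{PC}}(d_r, d_g, d_b)
      = \mathbf{t}_{\mathrm{PLVC}}(d_r, d_g, d_b) + P(d_r, d_g, d_b)

    where t = (X, Y, Z)ᵀ, t_PLVC is the conventional piecewise linear interpolation prediction, and P is a low-order polynomial in the digital inputs fitted to the PLVC model's XYZ prediction errors, thereby absorbing the channel interactions that PLVC ignores.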

  8. Methods and apparatus for non-acoustic speech characterization and recognition

    DOEpatents

    Holzrichter, John F.

    1999-01-01

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  9. Methods and apparatus for non-acoustic speech characterization and recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  10. Solid State Characterizations of Long-Term Leached Cast Stone Monoliths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asmussen, Robert M.; Pearce, Carolyn I.; Parker, Kent E.

    This report describes the results from the solid phase characterization of six Cast Stone monoliths from the extended leach tests recently reported on (Serne et al. 2016), which were selected for characterization using multiple state-of-the-art approaches. The Cast Stone samples investigated were leached for >590 d in the EPA Method 1315 test and then archived for >390 d in their final leachate. After reporting the long-term leach behavior of the monoliths (containing radioactive 99Tc and stable 127I spikes and, for the original monoliths fabricated by Westsik et al. (2013), 238U), it was suggested that physical changes to the waste forms and a depleting inventory of contaminants of potential concern may mean that effective diffusivity calculations past 63 d should not be used to accurately represent long-term waste form behavior. These novel investigations, in both length of leaching time and application of solid state techniques, provide an initial arsenal of techniques that can be utilized to perform such Cast Stone solid phase characterization work, which in turn can support upcoming performance assessment maintenance. The work was performed at Pacific Northwest National Laboratory (PNNL) for Washington River Protection Solutions (WRPS) to characterize several properties of the long-term leached Cast Stone monolith samples.

  11. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  12. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  13. Rapid glucosinolate detection and identification using accurate mass MS-MS

    USDA-ARS?s Scientific Manuscript database

    Currently, there is a demand for accurate evaluation of brassica plant species for their glucosinolate content. An optimized method has been developed for detecting and identifying glucosinolates in plant extracts using MS-MS fragmentation with ion trap collision induced dissociation (CID) and higher...

  14. Characterizing Drainage Multiphase Flow in Heterogeneous Sandstones

    NASA Astrophysics Data System (ADS)

    Jackson, Samuel J.; Agada, Simeon; Reynolds, Catriona A.; Krevor, Samuel

    2018-04-01

    In this work, we analyze the characterization of drainage multiphase flow properties on heterogeneous rock cores using a rich experimental data set and mm-m scale numerical simulations. Along with routine multiphase flow properties, 3-D submeter scale capillary pressure heterogeneity is characterized by combining experimental observations and numerical calibration, resulting in a 3-D numerical model of the rock core. The uniqueness and predictive capability of the numerical models are evaluated by accurately predicting the experimentally measured relative permeability of N2—DI water and CO2—brine systems in two distinct sandstone rock cores across multiple fractional flow regimes and total flow rates. The numerical models are used to derive equivalent relative permeabilities, which are upscaled functions incorporating the effects of submeter scale capillary pressure. The functions are obtained across capillary numbers which span four orders of magnitude, representative of the range of flow regimes that occur in subsurface CO2 injection. Removal of experimental boundary artifacts allows the derivation of equivalent functions which are characteristic of the continuous subsurface. We also demonstrate how heterogeneities can be reorientated and restructured to efficiently estimate flow properties in rock orientations differing from the original core sample. This analysis shows how combined experimental and numerical characterization of rock samples can be used to derive equivalent flow properties from heterogeneous rocks.

  15. Method and apparatus for characterizing reflected ultrasonic pulses

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Cantrell, John H., Jr. (Inventor)

    1991-01-01

    The invention is a method of, and apparatus for, characterizing the amplitudes of a sequence of reflected pulses R1, R2, and R3 by converting them into corresponding electric signals E1, E2, and E3 of substantially the same value during each sequence, thereby restoring the reflected pulses R1, R2, and R3 to their initial reflection values by timing means, an exponential generator, and a time gain compensator. Envelope and baseline reject circuits permit the display and accurate location of the time-spaced sequence of electric signals having substantially the same amplitude on a measurement scale on a suitable video display or oscilloscope.

  16. An accurate metric for the spacetime around rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate appropriately parametrized metric, i.e. a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  17. Method for accurate sizing of pulmonary vessels from 3D medical images

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2015-03-01

    Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many "On" pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and utilizing the size of the best matching filter as the estimate of vessel diameter. However, both of these approaches have significant accuracy limitations, including the mismatch between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines and the tubular branch surfaces represented as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient's chest X-ray computed tomography (CT) scan to generate estimates for branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan, and on 2D simulated test cases.
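
    The cubic Hermite spline representation of branch trajectories mentioned above is easy to state in code; this is a generic Hermite evaluator (not the authors' implementation), applied per coordinate to trace a 3-D centerline:

      /* Cubic Hermite interpolation between endpoint values p0, p1 with
         tangents m0, m1, for parameter t in [0, 1]. */
      float hermite(float p0, float m0, float p1, float m1, float t)
      {
          float t2 = t * t, t3 = t2 * t;
          return (2.0f*t3 - 3.0f*t2 + 1.0f) * p0 + (t3 - 2.0f*t2 + t) * m0
               + (-2.0f*t3 + 3.0f*t2) * p1 + (t3 - t2) * m1;
      }

    Evaluating hermite separately for x, y, and z yields points along one branch segment; the tubular surface mesh is then constructed around this centerline and adjusted iteratively against the CT data.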

  18. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    PubMed

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.
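
    A schematic of the replicated-subsampling workflow in Python; estimate_species_tree() is a hypothetical wrapper around a species-tree method such as STEM that returns a topology string, and the data layout is an assumption made for the sketch:

    ```python
    import random
    from collections import Counter

    # Replicated subsampling: draw a few alleles per species for a few
    # loci, estimate a species tree per replicate, and report the
    # majority-rule winner. `data` maps locus -> {species: [alleles]}.
    def consensus_topology(data, n_alleles=3, n_loci=8, reps=100):
        topologies = []
        for _ in range(reps):
            loci = random.sample(sorted(data), n_loci)
            sub = {loc: {sp: random.sample(seqs, min(n_alleles, len(seqs)))
                         for sp, seqs in data[loc].items()}
                   for loc in loci}
            topologies.append(estimate_species_tree(sub))  # hypothetical
        best, count = Counter(topologies).most_common(1)[0]
        return best, count / reps
    ```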

  19. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and their first gradients with superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  20. Learning fast accurate movements requires intact frontostriatal circuits

    PubMed Central

    Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro

    2013-01-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037

  1. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Fast, P.; Kraus, M.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
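
    The kind of inference described above can be sketched with a simple Metropolis sampler; the log-normal incubation model, toy onset times, and proposal scales below are illustrative assumptions, not the paper's dose-dependent outbreak model:

    ```python
    import numpy as np

    # Toy Metropolis sampler: infer the infection time t0 and the
    # incubation-period parameters (mu, sigma of a log-normal) from a
    # short series of symptom-onset times. Priors are implicitly flat.
    rng = np.random.default_rng(0)
    onsets = np.array([2.1, 2.9, 3.4, 3.8, 4.5])   # days; invented data

    def log_like(theta):
        t0, mu, sigma = theta
        dt = onsets - t0
        if sigma <= 0 or np.any(dt <= 0):
            return -np.inf
        return np.sum(-np.log(dt * sigma * np.sqrt(2 * np.pi))
                      - (np.log(dt) - mu) ** 2 / (2 * sigma ** 2))

    theta, chain = np.array([1.0, 0.5, 0.5]), []
    for _ in range(20000):
        prop = theta + rng.normal(scale=[0.1, 0.1, 0.05])
        if np.log(rng.random()) < log_like(prop) - log_like(theta):
            theta = prop
        chain.append(theta)
    posterior = np.array(chain)[5000:]             # discard burn-in
    ```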

  2. Accurate atom-mapping computation for biochemical reactions.

    PubMed

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
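
    A toy MILP in the spirit of the MWED formulation, using the PuLP modeling library: map each reactant atom to one product atom so that a bond-edit cost is minimized. The 3x3 cost matrix is invented, all atoms are treated as the same element for simplicity, and the real MWED objective (bond propensities) is not reproduced:

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    # w[i][j]: illustrative cost of mapping reactant atom i to product
    # atom j; a real model would derive these from bond edits.
    w = [[0, 2, 3],
         [2, 0, 1],
         [4, 1, 0]]
    n = len(w)
    x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n)]
         for i in range(n)]

    prob = LpProblem("atom_mapping", LpMinimize)
    prob += lpSum(w[i][j] * x[i][j] for i in range(n) for j in range(n))
    for i in range(n):                  # each reactant atom mapped once
        prob += lpSum(x[i][j] for j in range(n)) == 1
    for j in range(n):                  # each product atom used once
        prob += lpSum(x[i][j] for i in range(n)) == 1
    prob.solve()

    mapping = {i: next(j for j in range(n) if x[i][j].value() == 1)
               for i in range(n)}
    print(mapping)                      # bijection reactant -> product
    ```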

  3. Accurate reconstruction of the jV-characteristic of organic solar cells from measurements of the external quantum efficiency

    NASA Astrophysics Data System (ADS)

    Meyer, Toni; Körner, Christian; Vandewal, Koen; Leo, Karl

    2018-04-01

    In two-terminal tandem solar cells, the current density-voltage (jV) characteristic of the individual subcells is typically not directly measurable, but often required for a rigorous device characterization. In this work, we reconstruct the jV-characteristic of organic solar cells from measurements of the external quantum efficiency under applied bias voltages and illumination. We show that it is necessary to perform a bias irradiance variation at each voltage and subsequently conduct a mathematical correction of the differential to the absolute external quantum efficiency to obtain an accurate jV-characteristic. Furthermore, we show that measuring the external quantum efficiency as a function of voltage for a single bias irradiance of 0.36 AM1.5g equivalent sun provides a good approximation of the photocurrent density over voltage curve. The method is tested on a selection of efficient, common single-junction devices. The obtained conclusions can easily be transferred to multi-junction devices with serially connected subcells.
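
    The reconstruction rests on the standard relation between external quantum efficiency and photocurrent density, j_ph(V) = -q * integral of EQE(lambda, V) times the spectral photon flux. A numerical sketch in Python, where am15g_photon_flux() is a hypothetical lookup of the AM1.5g spectral photon flux and the toy EQE curve is illustrative:

    ```python
    import numpy as np

    q = 1.602176634e-19                     # elementary charge, C
    wl = np.linspace(350e-9, 850e-9, 501)   # wavelength grid, m

    # Toy absolute EQE(lambda) at one bias voltage (illustrative shape).
    eqe = 0.65 * np.exp(-((wl - 600e-9) / 120e-9) ** 2)

    # Hypothetical helper: AM1.5g photon flux, photons m^-2 s^-1 m^-1.
    phi = am15g_photon_flux(wl)

    # Trapezoidal integration of EQE * photon flux gives the photocurrent
    # density (A m^-2) at this bias; repeating over voltages builds j(V).
    f = eqe * phi
    j_ph = -q * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wl))
    ```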

  4. Characterization of the Mysteriously Cool Brown Dwarf HD 4113

    NASA Astrophysics Data System (ADS)

    Ednie, Michaela; Follette, Katherine; Ward-Duong, Kimberly

    2018-01-01

    Characterizing the physical properties of brown dwarfs is necessary to expand and improve our understanding of low mass companions, including exoplanets. Systems with both close radial velocity companions and distant directly imaged companions are particularly powerful in understanding planet formation mechanisms. Early in 2017, members of the SPHERE team discovered a companion brown dwarf in the HD 4113 system, which also contains a known RV planet. Atmospheric model fits to the Y and J-band spectra and H2/H3 photometry of the brown dwarf suggested it is unusually cool. We obtained new Magellan data in the Z and K’ bands in mid-2017. This data will help us to complete a more detailed atmospheric and astrometric characterization of this unusually cool companion. Broader wavelength coverage will help in accurate spectral typing and estimations of luminosity, temperature, surface gravity, radius, and composition. Additionally, a second astrometric epoch will help constrain the architecture of the system.

  5. Multi-Sensor Characterization of the Boreal Forest: Initial Findings

    NASA Technical Reports Server (NTRS)

    Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    Results are presented from an initial a priori knowledge approach toward using complementary multi-sensor multi-temporal imagery in characterizing vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single scene technique, but 8% less than the original input. Black spruce was evaluated to be particularly undermapped in the final map, possibly because it was also contained within the jack pine and muskeg land coverages.

  6. Small-Molecule Binding Aptamers: Selection Strategies, Characterization, and Applications

    PubMed Central

    Ruscito, Annamaria; DeRosa, Maria C.

    2016-01-01

    Aptamers are single-stranded, synthetic oligonucleotides that fold into 3-dimensional shapes capable of binding non-covalently with high affinity and specificity to a target molecule. They are generated via an in vitro process known as the Systematic Evolution of Ligands by EXponential enrichment, from which candidates are screened and characterized, and then used in various applications. These applications range from therapeutic uses to biosensors for target detection. Aptamers for small molecule targets such as toxins, antibiotics, molecular markers, drugs, and heavy metals will be the focus of this review. Their accurate detection is needed for the protection and wellbeing of humans and animals. However, the small molecular weights of these targets, and the drastic size difference between the target and the oligonucleotides, make it challenging to select, characterize, and apply aptamers for their detection. Thus, notable recent (since 2012) advances in small molecule aptamers, which have overcome some of these challenges, are presented here, and the challenges that still remain are discussed. PMID:27242994

  7. Detection and characterization of pulses in broadband seismometers

    USGS Publications Warehouse

    Wilson, David; Ringler, Adam; Hutt, Charles R.

    2017-01-01

    Pulsing - caused either by mechanical or electrical glitches, or by microtilt local to a seismometer - can significantly compromise the long-period noise performance of broadband seismometers. High-fidelity long-period recordings are needed for accurate calculation of quantities such as moment tensors, fault-slip models, and normal-mode measurements. Such pulses have long been recognized in accelerometers, and methods have been developed to correct these acceleration steps, but considerable work remains to be done in order to detect and correct similar pulses in broadband seismic data. We present a method for detecting and characterizing the pulses using data from a range of broadband sensor types installed in the Global Seismographic Network. The technique relies on accurate instrument response removal and employs a moving-window approach looking for acceleration baseline shifts. We find that pulses are present at varying levels in all sensor types studied. Pulse-detection results compared with average daily station noise values are consistent with predicted noise levels of acceleration steps. This indicates that we can calculate the maximum pulse amplitude per time window that would be acceptable without compromising long-period data analysis.
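
    A minimal sketch of the moving-window baseline-shift search in Python, assuming acc is a trace already corrected for instrument response and converted to acceleration; the window length and threshold are illustrative choices:

    ```python
    import numpy as np

    # Flag samples where the mean acceleration after the sample differs
    # from the mean before it by more than a threshold (a step in the
    # acceleration baseline, i.e. a candidate pulse).
    def detect_steps(acc, half_win=200, thresh=5e-9):
        acc = np.asarray(acc, dtype=float)
        steps = []
        for k in range(half_win, len(acc) - half_win):
            shift = acc[k:k + half_win].mean() - acc[k - half_win:k].mean()
            if abs(shift) > thresh:
                steps.append((k, shift))
        return steps
    ```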

  8. Anatomical brain images alone can accurately diagnose chronic neuropsychiatric illnesses.

    PubMed

    Bansal, Ravi; Staib, Lawrence H; Laine, Andrew F; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S

    2012-01-01

    Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these methods are likely to provide biomarkers

  9. Highly accurate quantitative spectroscopy of massive stars in the Galaxy

    NASA Astrophysics Data System (ADS)

    Nieva, María-Fernanda; Przybilla, Norbert

    2017-11-01

    Achieving high accuracy and precision in stellar parameter and chemical composition determinations is challenging in massive star spectroscopy. On one hand, the target selection for an unbiased sample build-up is complicated by several types of peculiarities that can occur in individual objects. On the other hand, composite spectra are often not recognized as such even at medium-high spectral resolution and typical signal-to-noise ratios, even though multiplicity among massive stars is widespread. In particular, surveys that produce large amounts of automatically reduced data are prone to overlook details that become hazardous for analysis techniques developed under a set of standard assumptions applicable to the spectrum of a single star. Much larger systematic errors than anticipated may therefore result when the true nature of the investigated objects goes unrecognized, or, when it is recognized, the sample available for analysis may be much smaller than initially planned. Further factors to be taken care of are the multiple steps from the choice of instrument, through the details of the data reduction chain, to the choice of modelling code, input data, analysis technique and the selection of the spectral lines to be analyzed. Only when all of these pitfalls are avoided can a precise and accurate characterization of the stars in terms of fundamental parameters and chemical fingerprints be achieved, forming the basis for further investigations regarding e.g. stellar structure and evolution or the chemical evolution of the Galaxy. The scope of the present work is to provide the massive star and other astrophysical communities with criteria to evaluate the quality of spectroscopic investigations of massive stars before interpreting them in a broader context. The discussion is guided by our experience gained over more than a decade of studies of massive star spectroscopy ranging from the simplest single objects to multiple systems.

  10. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
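
    The random-sampling segmentation step can be sketched RANSAC-style in Python: repeatedly draw two skeleton points, fit the line through them, and keep the fit with the most inliers. The tolerance and iteration budget are illustrative, and points is assumed to be an N x 2 NumPy array of 2D skeleton coordinates:

    ```python
    import numpy as np

    # Keep the line hypothesis with the largest inlier set among random
    # two-point samples; `tol` is the inlier distance in pixels.
    def ransac_line(points, tol=1.5, iters=500):
        rng = np.random.default_rng(0)
        best_inliers = None
        for _ in range(iters):
            p, q = points[rng.choice(len(points), 2, replace=False)]
            d = q - p
            n = np.array([-d[1], d[0]]) / np.hypot(*d)   # unit normal
            dist = np.abs((points - p) @ n)              # point-line distance
            inliers = points[dist < tol]
            if best_inliers is None or len(inliers) > len(best_inliers):
                best_inliers = inliers
        return best_inliers
    ```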

  11. Arbitrarily accurate twin composite π-pulse sequences

    NASA Astrophysics Data System (ADS)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.

  12. Ab initio study of the CO-N2 complex: a new highly accurate intermolecular potential energy surface and rovibrational spectrum.

    PubMed

    Cybulski, Hubert; Henriksen, Christian; Dawes, Richard; Wang, Xiao-Gang; Bora, Neha; Avila, Gustavo; Carrington, Tucker; Fernández, Berta

    2018-05-09

    A new, highly accurate ab initio ground-state intermolecular potential-energy surface (IPES) for the CO-N2 complex is presented. Thousands of interaction energies calculated with the CCSD(T) method and Dunning's aug-cc-pVQZ basis set extended with midbond functions were fitted to an analytical function. The global minimum of the potential is characterized by an almost T-shaped structure and has an energy of -118.2 cm-1. The symmetry-adapted Lanczos algorithm was used to compute rovibrational energies (up to J = 20) on the new IPES. The RMSE with respect to experiment was found to be on the order of 0.038 cm-1 which confirms the very high accuracy of the potential. This level of agreement is among the best reported in the literature for weakly bound systems and considerably improves on those of previously published potentials.

  13. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even though a low-resolution facial image size (64 × 64 pixels) was used. An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low resolution images such as web-based images or faces captured by a monitoring camera.

  14. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  15. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper introduces an improved method of reverse ray tracing, which we call Reverse Radiance, that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  16. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L ∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
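
    For orientation, one common second-order pressure-increment projection step has the form below (a standard textbook variant, not the paper's improved boundary treatment or pressure-update formula):

    ```latex
    \frac{u^{*}-u^{n}}{\Delta t} + \left[(u\cdot\nabla)u\right]^{n+1/2}
      = -\nabla p^{\,n-1/2} + \tfrac{\nu}{2}\,\nabla^{2}\!\left(u^{*}+u^{n}\right),
    \qquad
    \nabla^{2}\phi = \frac{\nabla\cdot u^{*}}{\Delta t},
    \qquad
    u^{n+1} = u^{*} - \Delta t\,\nabla\phi,
    \qquad
    p^{\,n+1/2} = p^{\,n-1/2} + \phi .
    ```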

  17. Machine Learning of Accurate Energy-Conserving Molecular Force Fields

    NASA Astrophysics Data System (ADS)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert; GDML Collaboration

    Efficient and accurate access to the Born-Oppenheimer potential energy surface (PES) is essential for long time scale molecular dynamics (MD) simulations. Using conservation of energy - a fundamental property of closed classical and quantum mechanical systems - we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio MD trajectories (AIMD). The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, malonaldehyde, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative MD simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.

  18. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR avoids long revisit times and low-resolution imagery, and its flexibility, accuracy, and speed in acquiring abundant information make it valuable for monitoring shallow ground deformation. To obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from master and slave images, accurate co-registration must be guaranteed. Because of the side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. As a result, pixels located at identical coordinates in the master and slave images correspond to ground resolution cells of different sizes, and the mismatch is most pronounced in the long-slant-range parts of the two images. To resolve these differing pixel sizes and achieve accurate co-registration, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiments, and the results show that the proposed method yields superior co-registration accuracy.

  19. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yield a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber, which limits conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using the simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.

  20. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  1. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
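
    A sketch of a conditional-reception estimate of link correlation in Python, blending a long-term average with a short-term window in the spirit of the abstract's long/short-term combination; the traces, window length, and blending weight are illustrative, and this is not the LACE algorithm itself:

    ```python
    import numpy as np

    # P(link B receives | link A receives) from aligned 0/1 traces.
    def link_corr(rx_a, rx_b):
        rx_a, rx_b = np.asarray(rx_a), np.asarray(rx_b)
        if not rx_a.any():
            return 0.0
        return rx_b[rx_a == 1].mean()

    # Blend the full-history estimate with a recent-window estimate.
    def blended_corr(rx_a, rx_b, short=32, w=0.5):
        return (w * link_corr(rx_a, rx_b)
                + (1 - w) * link_corr(rx_a[-short:], rx_b[-short:]))

    rng = np.random.default_rng(1)
    rx_a = rng.integers(0, 2, 200)                     # toy traces
    rx_b = (rx_a ^ (rng.random(200) < 0.2)).astype(int)
    print(blended_corr(rx_a, rx_b))
    ```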

  2. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  3. Using a Novel Optical Sensor to Characterize Methane Ebullition Processes

    NASA Astrophysics Data System (ADS)

    Delwiche, K.; Hemond, H.; Senft-Grupp, S.

    2015-12-01

    We have built a novel bubble size sensor that is rugged, economical to build, and capable of accurately measuring methane bubble sizes in aquatic environments over long deployment periods. Accurate knowledge of methane bubble size is important for calculating atmospheric methane emissions from in-land waters. By routing bubbles past pairs of optical detectors, the sensor accurately measures bubble sizes between 0.01 mL and 1 mL, with slightly reduced accuracy for bubbles from 1 mL to 1.5 mL. The sensor can handle flow rates up to approximately 3 bubbles per second. Optional sensor attachments include a gas collection chamber for methane sampling and volume verification, and a detachable extension funnel to customize the quantity of intercepted bubbles. Additional features include a data cable running from the deployed sensor to a custom surface buoy, allowing us to download data without disturbing on-going bubble measurements. We have successfully deployed numerous sensors in Upper Mystic Lake at depths down to 18 m, 1 m above the sediment. The resulting data give us bubble size distributions and the precise timing of bubbling events over a period of several months. In addition to allowing us to characterize typical bubble size distributions, these data allow us to draw important conclusions about temporal variations in bubble sizes, as well as bubble dissolution rates within the water column.
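
    Back-of-envelope arithmetic for dual-gate optical bubble sizing: rise speed from the time of flight between two detector pairs, bubble length from the occlusion time of one gate, and a cylindrical volume approximation. All geometry and timings below are invented for the illustration; a real sensor would calibrate the volume relationship empirically:

    ```python
    import math

    gate_spacing = 0.01      # m between the two optical detector pairs
    tube_radius = 0.004      # m, inner radius of the bubble channel
    t_flight = 0.025         # s between gate A and gate B triggers
    t_occlusion = 0.012      # s that gate A stays blocked

    v = gate_spacing / t_flight                   # rise speed, m/s
    length = v * t_occlusion                      # bubble length, m
    volume = math.pi * tube_radius**2 * length    # cylinder approx., m^3
    print(f"{volume * 1e6:.3f} mL")               # ~0.24 mL, in sensor range
    ```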

  4. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  5. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in the curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  6. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average oxygen uptake (VO2; ml min(-1)) and expired ventilation (VE; l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' (10 W either side of average power) in the false negative feedback condition (fastest) than in the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average VO2 and VE scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower VO2 and VE scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.

  7. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. The high order schemes have now been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with much less computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.

  8. The Rényi divergence enables accurate and precise cluster analysis for localisation microscopy.

    PubMed

    Staszowska, Adela D; Fox-Roberts, Patrick; Hirvonen, Liisa M; Peddie, Christopher J; Collinson, Lucy M; Jones, Gareth E; Cox, Susan

    2018-06-01

    Clustering analysis is a key technique for quantitatively characterising structures in localisation microscopy images. To build up accurate information about biological structures, it is critical that the quantification is both accurate (close to the ground truth) and precise (has small scatter and is reproducible). Here we describe how the Rényi divergence can be used for cluster radius measurements in localisation microscopy data. We demonstrate that the Rényi divergence can operate with high levels of background and provides results which are more accurate than Ripley's functions, Voronoi tessellation or DBSCAN. Data supporting this research will be made accessible via a web link. Software codes developed for this work can be accessed via http://coxphysics.com/Renyi_divergence_software.zip. Implemented in C++. Correspondence and requests for materials can also be addressed to the corresponding author: adela.staszowska@gmail.com or susan.cox@kcl.ac.uk. Supplementary data are available at Bioinformatics online.
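
    For reference, the Rényi divergence of order alpha between discrete distributions P and Q, the quantity the analysis is built on, takes the standard form

    ```latex
    D_{\alpha}(P\,\|\,Q) \;=\; \frac{1}{\alpha-1}\,
      \log \sum_{i} p_i^{\alpha}\, q_i^{1-\alpha},
    \qquad \alpha>0,\ \alpha\neq 1,
    ```

    which recovers the Kullback-Leibler divergence in the limit as alpha approaches 1.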

  9. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  10. Metabolite identification of triptolide by data-dependent accurate mass spectrometric analysis in combination with online hydrogen/deuterium exchange and multiple data-mining techniques.

    PubMed

    Du, Fuying; Liu, Ting; Liu, Tian; Wang, Yongwei; Wan, Yakun; Xing, Jie

    2011-10-30

    Triptolide (TP), the primary active component of the herbal medicine Tripterygium wilfordii Hook F, has shown promising antileukemic and anti-inflammatory activity. The pharmacokinetic profile of TP indicates an extensive metabolic elimination in vivo; however, its metabolic data is rarely available partly because of the difficulty in identifying it due to the absence of appropriate ultraviolet chromophores in the structure and the presence of endogenous interferences in biological samples. In the present study, the biotransformation of TP was investigated by improved data-dependent accurate mass spectrometric analysis, using an LTQ/Orbitrap hybrid mass spectrometer in conjunction with the online hydrogen (H)/deuterium (D) exchange technique for rapid structural characterization. Accurate full-scan MS and MS/MS data were processed with multiple post-acquisition data-mining techniques, which were complementary and effective in detecting both common and uncommon metabolites from biological matrices. As a result, 38 phase I, 9 phase II and 8 N-acetylcysteine (NAC) metabolites of TP were found in rat urine. Accurate MS/MS data were used to support assignments of metabolite structures, and online H/D exchange experiments provided additional evidence for exchangeable hydrogen atoms in the structure. The results showed the main phase I metabolic pathways of TP are hydroxylation, hydrolysis and desaturation, and the resulting metabolites subsequently undergo phase II processes. The presence of NAC conjugates indicated the capability of TP to form reactive intermediate species. This study also demonstrated the effectiveness of LC/HR-MS(n) in combination with multiple post-acquisition data-mining methods and the online H/D exchange technique for the rapid identification of drug metabolites. Copyright © 2011 John Wiley & Sons, Ltd.

  11. An All-Fragments Grammar for Simple and Accurate Parsing

    DTIC Science & Technology

    2012-03-21


  12. Many participants in inpatient rehabilitation can quantify their exercise dosage accurately: an observational study.

    PubMed

    Scrivener, Katharine; Sherrington, Catherine; Schurr, Karl; Treacy, Daniel

    2011-01-01

    Are inpatients undergoing rehabilitation who appear able to count exercises able to quantify accurately the amount of exercise they undertake? Observational study. Inpatients in an aged care rehabilitation unit and a neurological rehabilitation unit, who appeared able to count their exercises during a 1-2 min observation by their treating physiotherapist. Participants were observed for 30 min by an external observer while they exercised in the physiotherapy gymnasium. Both the participants and the observer counted exercise repetitions with a hand-held tally counter and the two tallies were compared. Of the 60 people admitted for aged care rehabilitation during the study period, 49 (82%) were judged by their treating therapist to be able to count their own exercise repetitions accurately. Of the 30 people admitted for neurological rehabilitation during the study period, 20 (67%) were judged by their treating therapist to be able to count their repetitions accurately. Of the 69 people judged to be accurate, 40 underwent observation while exercising. There was excellent agreement between these participants' counts of their exercise repetitions and the observers' counts, ICC (3,1) of 0.99 (95% CI 0.98 to 0.99). Eleven participants (28%) were in complete agreement with the observer. A further 19 participants (48%) varied from the observer by less than 10%. Therapists were able to identify a group of rehabilitation participants who were accurate in counting their exercise repetitions. Counting of exercise repetitions by therapist-selected patients is a valid means of quantifying exercise dosage during inpatient rehabilitation. Copyright © 2011 Australian Physiotherapy Association. All rights reserved.

  13. Ultrathin conformal devices for precise and continuous thermal characterization of human skin

    NASA Astrophysics Data System (ADS)

    Webb, R. Chad; Bonifas, Andrew P.; Behnaz, Alex; Zhang, Yihui; Yu, Ki Jun; Cheng, Huanyu; Shi, Mingxing; Bian, Zuguang; Liu, Zhuangjian; Kim, Yun-Soung; Yeo, Woon-Hong; Park, Jae Suk; Song, Jizhou; Li, Yuhang; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.

    2013-10-01

    Precision thermometry of the skin can, together with other measurements, provide clinically relevant information about cardiovascular health, cognitive state, malignancy and many other important aspects of human physiology. Here, we introduce an ultrathin, compliant skin-like sensor/actuator technology that can pliably laminate onto the epidermis to provide continuous, accurate thermal characterizations that are unavailable with other methods. Examples include non-invasive spatial mapping of skin temperature with millikelvin precision, and simultaneous quantitative assessment of tissue thermal conductivity. Such devices can also be implemented in ways that reveal the time-dynamic influence of blood flow and perfusion on these properties. Experimental and theoretical studies establish the underlying principles of operation, and define engineering guidelines for device design. Evaluation of subtle variations in skin temperature associated with mental activity, physical stimulation and vasoconstriction/dilation along with accurate determination of skin hydration through measurements of thermal conductivity represent some important operational examples.

  14. Ab Initio Potential Energy Surfaces and the Calculation of Accurate Vibrational Frequencies

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Dateo, Christopher E.; Martin, Jan M. L.; Taylor, Peter R.; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    Due to advances in quantum mechanical methods over the last few years, it is now possible to determine ab initio potential energy surfaces in which fundamental vibrational frequencies are accurate to within plus or minus 8 cm(exp -1) on average, and molecular bond distances are accurate to within plus or minus 0.001-0.003 Angstroms, depending on the nature of the bond. That is, the potential energy surfaces have not been scaled or empirically adjusted in any way, showing that theoretical methods have progressed to the point of being useful in analyzing spectra that are not from a tightly controlled laboratory environment, such as vibrational spectra from the interstellar medium. Some recent examples demonstrating this accuracy will be presented and discussed. These include the HNO, CH4, C2H4, and ClCN molecules. The HNO molecule is interesting due to the very large H-N anharmonicity, while ClCN has a very large Fermi resonance. The ab initio studies for the CH4 and C2H4 molecules present the first accurate full quartic force fields of any kind (i.e., whether theoretical or empirical) for a five-atom and six-atom system, respectively.

  15. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.

  16. How to Build MCNP 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Jeffrey S.

    This presentation describes how to build MCNP 6.2. MCNP® 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.

  17. Cognitive learning: a machine learning approach for automatic process characterization from design

    NASA Astrophysics Data System (ADS)

    Foucher, J.; Baderot, J.; Martinez, S.; Dervillé, A.; Bernard, G.

    2018-03-01

    Cutting-edge innovation requires accurate and fast process control to achieve rapid learning rates and industry adoption. The tools currently available for this task are largely manual and user-dependent. In this paper we present cognitive learning, a new machine-learning-based technique that facilitates and speeds up complex characterization by using the design as input, providing fast training and detection times. We focus on the machine-learning framework that enables object detection, defect traceability and automatic measurement.
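
    The paper's framework itself is not public. As a loose illustration of using the design as input, the sketch below treats a design clip as a matching template and flags image locations whose normalized cross-correlation with it exceeds a threshold; this brute-force matcher is a stand-in for the actual detection model, not a reconstruction of it:

```python
import numpy as np

def ncc_detect(image, template, thresh=0.8):
    """Return (row, col, score) for windows whose normalized
    cross-correlation with the design-derived template exceeds thresh.

    Brute-force double loop for clarity; real pipelines would use an
    FFT-based correlation or a learned detector instead.
    """
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = float(np.mean(t * wn))       # Pearson correlation of the window
            if score > thresh:
                hits.append((i, j, score))
    return hits
```

    Each hit can then seed a measurement (e.g. a critical-dimension estimate) anchored to the known design geometry, which is what makes design-driven detection traceable.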

  18. Morphological characterization of coral reefs by combining lidar and MBES data: A case study from Yuanzhi Island, South China Sea

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yang, Fanlin; Zhang, Hande; Su, Dianpeng; Li, QianQian

    2017-06-01

    The correlation between seafloor morphological features and biological complexity has been identified in numerous recent studies. This research focused on the potential for accurate characterization of coral reefs based on high-resolution bathymetry from multiple sources. A standard deviation (STD) based method for quantitatively characterizing terrain complexity was developed that includes robust estimation to correct for irregular bathymetry and a calibration for the depth-dependent variability of measurement noise. Airborne lidar and shipborne sonar bathymetry measurements from Yuanzhi Island, South China Sea, were merged to generate seamless high-resolution coverage of coral bathymetry from the shoreline to deep water. The new algorithm was applied to the Yuanzhi Island surveys to generate maps of quantitative terrain complexity, which were then compared to in situ video observations of coral abundance. The terrain complexity parameter is significantly correlated with seafloor coral abundance, demonstrating the potential for accurately and efficiently mapping coral abundance through seafloor surveys, including combinations of surveys using different sensors.
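
    A minimal sketch of an STD-based complexity map in this spirit: robustly detrend each window, take the standard deviation of the residual, and subtract a depth-dependent noise floor. The window size and the linear noise model below are assumptions, not the calibrated values from the paper:

```python
import numpy as np
from scipy.ndimage import generic_filter, median_filter

def terrain_complexity(bathy, size=5, noise_per_depth=0.002):
    """STD-based terrain complexity for a 2D bathymetry grid (meters).

    Window size and the linear depth-dependent noise coefficient are
    illustrative placeholders, not the paper's calibrated values.
    """
    trend = median_filter(bathy, size=size)            # robust local trend (detrend)
    resid = bathy - trend                              # residual relief
    rough = generic_filter(resid, np.std, size=size)   # local standard deviation
    noise = noise_per_depth * np.abs(bathy)            # assumed depth-dependent noise floor
    return np.clip(rough - noise, 0.0, None)           # complexity above the noise
```

    Subtracting the depth-dependent floor keeps deeper, noisier soundings from masquerading as rough terrain, which is the point of the paper's noise calibration.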

  19. Characterization of turbulence stability through the identification of multifractional Brownian motions

    NASA Astrophysics Data System (ADS)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models for describing real-life signals with high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and the variance, which relates to an energy level, are the two parameters that characterize multifractional Brownian motions. This research suggests a combined method for estimating the time-changing Hurst exponent and variance using the local variation of sampled signal paths. The method consists of two phases: first estimating the global variance and then accurately estimating the time-changing Hurst exponent. A simulation study demonstrates the method's performance in estimating both parameters. The proposed method is applied to the characterization of atmospheric stability, in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmospheric flows from unstable ones.
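
    A minimal sketch of the second phase under simplifying assumptions: for a (multi)fractional Brownian motion, E[(X(t+d) − X(t))²] = σ²·d^(2H), so the ratio of local lag-2 to lag-1 quadratic variations gives a pointwise Hurst estimate, H = ½·log₂(v₂/v₁); with unit sampling, v₁ itself estimates the local variance. The window length below is an assumption, not a value from the paper:

```python
import numpy as np

def local_hurst(x, window=256):
    """Time-changing Hurst estimate from local quadratic variations.

    Uses E[(X(t+d)-X(t))^2] = sigma^2 * d^(2H): the lag-2 / lag-1
    variation ratio yields H = 0.5 * log2(v2 / v1) in each window.
    The window length is an illustrative choice.
    """
    H = np.full(x.size, np.nan)
    for i in range(window, x.size - window):
        seg = x[i - window:i + window]
        v1 = np.mean(np.diff(seg) ** 2)            # lag-1 quadratic variation
        v2 = np.mean((seg[2:] - seg[:-2]) ** 2)    # lag-2 quadratic variation
        H[i] = 0.5 * np.log2(v2 / v1)
    return H

# Sanity check on ordinary Brownian motion, where H should hover near 0.5:
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(5000))
print(np.nanmean(local_hurst(bm)))
```

    In the stability application, persistently high estimated H (smoother paths) and low variance would flag stable flow regimes, while low H and high variance would flag unstable ones.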