Sample records for postprocessing procedures developed

  1. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
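
    Privacy amplification, the last of the steps listed above, is commonly realized with Toeplitz hashing. The sketch below is a generic, minimal illustration of that single step (not the authors' industrial implementation); the key length, output length, and seed handling are illustrative assumptions.

    ```python
    # Minimal sketch (not the authors' implementation): privacy amplification
    # realized as universal hashing with a random Toeplitz-type matrix.
    import numpy as np

    def toeplitz_hash(key_bits: np.ndarray, out_len: int, seed: int = 0) -> np.ndarray:
        """Compress an error-corrected key to `out_len` bits with a random Toeplitz-type matrix."""
        rng = np.random.default_rng(seed)    # the seed would be shared over the authenticated channel
        n = key_bits.size
        diag = rng.integers(0, 2, size=out_len + n - 1)   # one random bit sequence generates the matrix
        rows = np.stack([diag[i:i + n] for i in range(out_len)])  # rows are shifted windows of that sequence
        return rows.dot(key_bits) % 2        # matrix-vector product over GF(2)

    sifted = np.random.default_rng(1).integers(0, 2, size=2048)   # stand-in for a corrected key
    final_key = toeplitz_hash(sifted, out_len=512)
    print(final_key.shape)
    ```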

  2. Additive manufacturing of reflective optics: evaluating finishing methods

    NASA Astrophysics Data System (ADS)

    Leuteritz, G.; Lachmayer, R.

    2018-02-01

    Individually shaped light distributions are becoming more and more important in lighting technologies, and the importance of additively manufactured reflectors is therefore increasing significantly. The vast field of applications, ranging from automotive lighting to medical imaging, underscores this trend. However, the surfaces of additively manufactured reflectors suffer from insufficient optical properties even when manufactured using optimized process parameters for the Selective Laser Melting (SLM) process. Therefore, post-process treatments of reflectors are necessary in order to further enhance their optical quality. This work concentrates on the effectiveness of post-process procedures for reflective optics. Based on already optimized aluminum reflectors, which are manufactured with an SLM machine, the parts are machined in different ways after the SLM process. Selected finishing methods such as laser polishing, sputtering, or sand blasting are applied and their effects quantified and compared. The post-process procedures are investigated with respect to their impact on surface roughness and reflectance as well as geometrical precision. For each finishing method, a demonstrator will be created and compared to a fully milled sample and to the other demonstrators. Ultimately, guidelines are developed to determine the optimal treatment of additively manufactured reflectors regarding their optical and geometrical properties. Simulations of the light distributions will be validated with the developed demonstrators.

  3. Computer animation of NASTRAN displacements on IRIS 4D-series workstations: CANDI/ANIMATE postprocessing of NASHUA results

    NASA Technical Reports Server (NTRS)

    Fales, Janine L.

    1991-01-01

    The capabilities of the postprocessing program CANDI (Color Animation of Nastran DIsplacements) were expanded to accept results from axisymmetric analysis. An auxiliary program, ANIMATE, was developed to allow color display of CANDI output on the IRIS 4D-series workstations. The user can interactively manipulate the graphics display by three-dimensional rotations, translations, and scaling through the use of the keyboard and/or dials box. The user can also specify what portion of the model is displayed. These developments are limited to the display of complex displacements calculated with the NASHUA/NASTRAN procedure for structural acoustics analysis.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Alfonsi; C. Rabiti; D. Mandelli

    The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that provides several functionalities: it derives and actuates the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; it performs both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and it facilitates input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  5. Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2017-03-01

    Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed based on a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e., the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized in order to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated with the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, the COP-EPP demonstrates considerable improvement in the ensemble forecast in both deterministic and probabilistic verification, in particular in characterizing the extreme events in wet seasons.
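
    As a rough illustration of the copula idea described above (not the authors' COP-EPP implementation), the sketch below fits a bivariate Gaussian copula to historical observation-forecast pairs and draws a conditional ensemble for a new raw forecast; the marginal transforms, member count, and synthetic data are assumptions.

    ```python
    # Hedged sketch of a Gaussian-copula ensemble post-processor: condition the
    # observation climatology on the raw forecast through a copula fitted to
    # historical (obs, forecast) pairs.
    import numpy as np
    from scipy import stats

    def fit_copula(obs_hist, fcst_hist):
        """Return the correlation of the Gaussian copula linking obs and forecast."""
        u_obs = stats.rankdata(obs_hist) / (len(obs_hist) + 1)     # empirical CDF values
        u_fcst = stats.rankdata(fcst_hist) / (len(fcst_hist) + 1)
        z_obs, z_fcst = stats.norm.ppf(u_obs), stats.norm.ppf(u_fcst)
        return np.corrcoef(z_obs, z_fcst)[0, 1]

    def postprocess(raw_fcst, obs_hist, fcst_hist, rho, n_members=20, rng=None):
        """Draw an ensemble from the conditional copula given one raw forecast value."""
        rng = rng or np.random.default_rng(0)
        u_f = (np.searchsorted(np.sort(fcst_hist), raw_fcst) + 0.5) / (len(fcst_hist) + 1)
        z_f = stats.norm.ppf(u_f)
        z_draw = rng.normal(rho * z_f, np.sqrt(1 - rho**2), size=n_members)  # conditional Gaussian
        u_draw = stats.norm.cdf(z_draw)
        return np.quantile(obs_hist, u_draw)    # back-transform through the obs climatology

    # toy usage with synthetic monthly precipitation
    rng = np.random.default_rng(42)
    obs = rng.gamma(2.0, 30.0, size=300)
    fcst = obs * 0.8 + rng.normal(0, 15, size=300)   # biased, noisy "model"
    rho = fit_copula(obs, fcst)
    ensemble = postprocess(60.0, obs, fcst, rho)
    ```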

  6. Practical issues in quantum-key-distribution postprocessing

    NASA Astrophysics Data System (ADS)

    Fung, Chi-Hang Fred; Ma, Xiongfeng; Chau, H. F.

    2010-01-01

    Quantum key distribution (QKD) is a secure key generation method between two distant parties by wisely exploiting properties of quantum mechanics. In QKD, experimental measurement outcomes on quantum states are transformed by the two parties to a secret key. This transformation is composed of many logical steps (as guided by security proofs), which together will ultimately determine the length of the final secret key and its security. We detail the procedure for performing such classical postprocessing taking into account practical concerns (including the finite-size effect and authentication and encryption for classical communications). This procedure is directly applicable to realistic QKD experiments and thus serves as a recipe that specifies what postprocessing operations are needed and what the security level is for certain lengths of the keys. Our result is applicable to the BB84 protocol with a single or entangled photon source.

  7. Development and application of structural dynamics analysis capabilities

    NASA Technical Reports Server (NTRS)

    Heinemann, Klaus W.; Hozaki, Shig

    1994-01-01

    Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.

  8. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  9. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  10. Current use of imaging and electromagnetic source localization procedures in epilepsy surgery centers across Europe.

    PubMed

    Mouthaan, Brian E; Rados, Matea; Barsi, Péter; Boon, Paul; Carmichael, David W; Carrette, Evelien; Craiu, Dana; Cross, J Helen; Diehl, Beate; Dimova, Petia; Fabo, Daniel; Francione, Stefano; Gaskin, Vladislav; Gil-Nagel, Antonio; Grigoreva, Elena; Guekht, Alla; Hirsch, Edouard; Hecimovic, Hrvoje; Helmstaedter, Christoph; Jung, Julien; Kalviainen, Reetta; Kelemen, Anna; Kimiskidis, Vasilios; Kobulashvili, Teia; Krsek, Pavel; Kuchukhidze, Giorgi; Larsson, Pål G; Leitinger, Markus; Lossius, Morten I; Luzin, Roman; Malmgren, Kristina; Mameniskiene, Ruta; Marusic, Petr; Metin, Baris; Özkara, Cigdem; Pecina, Hrvoje; Quesada, Carlos M; Rugg-Gunn, Fergus; Rydenhag, Bertil; Ryvlin, Philippe; Scholly, Julia; Seeck, Margitta; Staack, Anke M; Steinhoff, Bernhard J; Stepanov, Valentin; Tarta-Arsene, Oana; Trinka, Eugen; Uzan, Mustafa; Vogt, Viola L; Vos, Sjoerd B; Vulliémoz, Serge; Huiskamp, Geertjan; Leijten, Frans S S; Van Eijsden, Pieter; Braun, Kees P J

    2016-05-01

    In 2014 the European Union-funded E-PILEPSY project was launched to improve awareness of, and accessibility to, epilepsy surgery across Europe. We aimed to investigate the current use of neuroimaging, electromagnetic source localization, and imaging postprocessing procedures in participating centers. A survey on the clinical use of imaging, electromagnetic source localization, and postprocessing methods in epilepsy surgery candidates was distributed among the 25 centers of the consortium. A descriptive analysis was performed, and results were compared to existing guidelines and recommendations. Response rate was 96%. Standard epilepsy magnetic resonance imaging (MRI) protocols are acquired at 3 Tesla by 15 centers and at 1.5 Tesla by 9 centers. Three centers perform 3T MRI only if indicated. Twenty-six different MRI sequences were reported. Six centers follow all guideline-recommended MRI sequences with the proposed slice orientation and slice thickness or voxel size. Additional sequences are used by 22 centers. MRI postprocessing methods are used in 16 centers. Interictal positron emission tomography (PET) is available in 22 centers; all using 18F-fluorodeoxyglucose (FDG). Seventeen centers perform PET postprocessing. Single-photon emission computed tomography (SPECT) is used by 19 centers, of which 15 perform postprocessing. Four centers perform neither PET nor SPECT in children. Seven centers apply magnetoencephalography (MEG) source localization, and nine apply electroencephalography (EEG) source localization. Fourteen combinations of inverse methods and volume conduction models are used. We report a large variation in the presurgical diagnostic workup among epilepsy surgery centers across Europe. This diversity underscores the need for high-quality systematic reviews, evidence-based recommendations, and harmonization of available diagnostic presurgical methods. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  11. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeastern United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
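
    The Bayesian linear regression step for a single grid point and variable can be sketched as below; this is a generic conjugate-prior formulation (not the study's GBLR code), and the prior variance, noise variance, and synthetic training data are assumptions.

    ```python
    # Minimal sketch of Bayesian linear regression for forecast correction:
    # posterior over regression coefficients with a Gaussian prior and known
    # observation-error variance.
    import numpy as np

    def blr_posterior(X, y, prior_var=10.0, noise_var=1.0):
        """Return posterior mean and covariance of coefficients for y ~ X @ beta + noise."""
        n_feat = X.shape[1]
        prior_prec = np.eye(n_feat) / prior_var          # prior precision (zero prior mean)
        post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
        post_mean = post_cov @ (X.T @ y / noise_var)
        return post_mean, post_cov

    # toy training set: column of ones (bias) plus the raw ensemble-mean forecast
    rng = np.random.default_rng(0)
    raw = rng.uniform(0, 25, size=200)                    # e.g. raw wind-speed forecasts
    obs = 1.5 + 0.8 * raw + rng.normal(0, 2, size=200)    # synthetic "observations"
    X = np.column_stack([np.ones_like(raw), raw])
    beta, cov = blr_posterior(X, obs)
    corrected = X @ beta                                  # bias-corrected forecast
    ```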

  12. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    PubMed

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone, semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a 54.8% reduction in images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
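
    The kind of background-subtraction rule described above can be sketched as follows; this is not the authors' program, and the thresholds, categories, and synthetic images are illustrative assumptions.

    ```python
    # Hedged sketch: flag a camera-trap frame as a likely crossing event when enough
    # pixels depart from a background image built from the surrounding sequence.
    import numpy as np

    def classify_frame(frame, background, diff_thresh=25, frac_thresh=0.02):
        """Return (category, changed_fraction) for one grayscale frame."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        changed = (diff > diff_thresh).mean()            # fraction of pixels that changed
        if changed > 5 * frac_thresh:
            return "high_confidence_event", changed
        if changed > frac_thresh:
            return "possible_event", changed
        return "likely_empty", changed

    # toy usage: background = median of a short image sequence
    rng = np.random.default_rng(3)
    stack = rng.integers(90, 110, size=(5, 480, 640)).astype(np.uint8)
    background = np.median(stack, axis=0)
    frame = stack[0].copy()
    frame[200:260, 300:380] += 60                         # synthetic "animal" blob
    print(classify_frame(frame, background))
    ```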

  13. Post-processing techniques to enhance reliability of assignment algorithm based performance measures.

    DOT National Transportation Integrated Search

    2011-01-01

    This study develops an enhanced transportation planning framework by augmenting the sequential four-step planning process with post-processing techniques. The post-processing techniques are incorporated through a feedback mechanism and aim to imp...

  14. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
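
    The two-stage idea can be sketched roughly as below. The paper formulates both stages as linear programs; here nonnegative least squares stands in to keep the example short, and the dose-influence matrix, target dose, and minimum-MU value are synthetic assumptions rather than the published model.

    ```python
    # Hedged sketch of a two-stage deliverable-MU spot optimization: stage 1 fits spot
    # weights without MU limits; stage 2 drops spots below a deliverable minimum MU
    # and re-fits the surviving spots.
    import numpy as np
    from scipy.optimize import nnls

    def two_stage_dsio(D, target_dose, mu_min=0.02):
        """D: dose-influence matrix (voxels x spots); returns deliverable spot weights."""
        w1, _ = nnls(D, target_dose)                 # stage 1: nonnegative least squares
        keep = w1 >= mu_min                          # spots that can actually be delivered
        w = np.zeros_like(w1)
        if keep.any():
            w_keep, _ = nnls(D[:, keep], target_dose)    # stage 2: re-optimize surviving spots
            w[keep] = np.maximum(w_keep, mu_min)         # enforce the minimum deliverable MU
        return w

    # toy problem: random influence matrix, uniform target dose
    rng = np.random.default_rng(7)
    D = rng.random((120, 40))
    target = np.full(120, 2.0)
    weights = two_stage_dsio(D, target)
    print((weights[weights > 0] >= 0.02).all())      # all delivered spots respect the MU floor
    ```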

  15. Evaluation of Transverse Thermal Stresses in Composite Plates Based on First-Order Shear Deformation Theory

    NASA Technical Reports Server (NTRS)

    Rolfes, R.; Noor, A. K.; Sparr, H.

    1998-01-01

    A postprocessing procedure is presented for the evaluation of the transverse thermal stresses in laminated plates. The analytical formulation is based on the first-order shear deformation theory and the plate is discretized by using a single-field displacement finite element model. The procedure is based on neglecting the derivatives of the in-plane forces and the twisting moments, as well as the mixed derivatives of the bending moments, with respect to the in-plane coordinates. The calculated transverse shear stiffnesses reflect the actual stacking sequence of the composite plate. The distributions of the transverse stresses through-the-thickness are evaluated by using only the transverse shear forces and the thermal effects resulting from the finite element analysis. The procedure is implemented into a postprocessing routine which can be easily incorporated into existing commercial finite element codes. Numerical results are presented for four- and ten-layer cross-ply laminates subjected to mechanical and thermal loads.

  16. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

    This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing, and adjustment method and a procedure for a large GPS network like the state level HRAN project. The research also developed an efficient and accurate methodology to join soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations achieving a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established in Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter level (10-20 cm) to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
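
    For context, a Kalman filter of the simplest (scalar, random-walk) form is sketched below; the actual kinematic geoid model, its state vector, and its noise settings are not described in the abstract, so everything here is illustrative.

    ```python
    # Generic scalar Kalman filter sketch (illustrative only, not the thesis model).
    import numpy as np

    def kalman_1d(measurements, process_var=1e-4, meas_var=4e-4, x0=0.0, p0=1.0):
        """Filter a noisy scalar series (e.g., a geoid-height component along a track)."""
        x, p, estimates = x0, p0, []
        for z in measurements:
            p = p + process_var                     # predict (random-walk state model)
            k = p / (p + meas_var)                  # Kalman gain
            x = x + k * (z - x)                     # update with the new measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(0)
    true_signal = np.cumsum(rng.normal(0, 0.01, size=200))     # slowly varying quantity
    noisy = true_signal + rng.normal(0, 0.02, size=200)
    smoothed = kalman_1d(noisy)
    ```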

  17. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on this data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing of a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing of the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
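
    One common form of spatial post-processing of a classified image is a majority (mode) filter; the sketch below shows that generic operation and is not necessarily the 1976 procedure. The class map, window size, and noise level are illustrative.

    ```python
    # Hedged sketch: relabel each pixel by the most common class in its neighborhood,
    # which typically removes isolated (salt-and-pepper) misclassifications.
    import numpy as np
    from scipy import ndimage

    def majority_filter(class_map, size=3):
        """Relabel each pixel with the modal class of its size x size neighborhood."""
        classes = np.unique(class_map)
        # count votes for each class with a uniform filter, then take the argmax
        votes = np.stack([ndimage.uniform_filter((class_map == c).astype(float), size=size)
                          for c in classes])
        return classes[np.argmax(votes, axis=0)]

    # toy classified image with isolated misclassified pixels
    rng = np.random.default_rng(0)
    labels = np.zeros((50, 50), dtype=int)
    labels[:, 25:] = 1                                    # two timber classes
    noise = rng.random(labels.shape) < 0.05
    labels[noise] = 1 - labels[noise]                     # 5% salt-and-pepper error
    cleaned = majority_filter(labels)
    ```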

  18. Recent Developments in Advanced Automated Post-Processing at AMOS

    DTIC Science & Technology

    2014-09-01

    Borelli, K. (KJS Consulting); Thompson, Lisa (Air Force Research Laboratory)

    A new automated post-processing system has been developed to...the existing algorithms in addition to the development of new data processing features.

  19. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  20. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information-theoretic security with practical complexity is of great interest for continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  1. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    NASA Astrophysics Data System (ADS)

    khajehei, sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from uncertainty in the hydrologic model as well as the epistemic or aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multivariate post-processing method for historical simulations and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is addressed for the historical period of 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods, Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16 degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observations and simulations in the historical period. Results show that application of the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.

  2. On Fast Post-Processing of Global Positioning System Simulator Truth Data and Receiver Measurements and Solutions Data

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Day, John H. (Technical Monitor)

    2000-01-01

    Post-processing of data related to a Global Positioning System (GPS) simulation is an important activity in qualification of a GPS receiver for space flight. Because a GPS simulator is a critical resource, it is desirable to move the pertinent simulation data off the simulator as soon as a test is completed. The simulator data files are usually moved to a Personal Computer (PC), where the post-processing of the receiver-logged measurements and solutions data and the simulated data is performed. Typically, post-processing is accomplished using PC-based commercial software languages and tools. Because of the generality of commercial software systems, their general-purpose functions are notoriously slow and more often than not are the bottleneck, even for short-duration experiments. For example, it may take 8 hours to post-process data from a 6-hour simulation. There is a need to do post-processing faster, especially in order to use the previous test results as feedback for the next simulation setup. This paper demonstrates that a fast software linear interpolation algorithm is applicable to a large class of engineering problems, like GPS simulation data post-processing, where computational time is a critical resource and is one of the most important considerations. An approach is developed that speeds up post-processing by an order of magnitude. It is based on improving the bottleneck interpolation algorithm of the post-processing using a priori information that is specific to the GPS simulation application. The presented post-processing scheme was used in support of a few successful space flight missions carrying GPS receivers. A future approach to solving the post-processing performance problem using Field Programmable Gate Array (FPGA) technology is described.
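
    The kind of speed-up described above can be illustrated as follows (this is not the original code): when the truth data are known a priori to be uniformly sampled, the per-point search of a generic interpolator can be replaced by direct index arithmetic. The sampling rate and data are assumptions.

    ```python
    # Hedged sketch: linear interpolation that exploits a known uniform sample spacing.
    import numpy as np

    def interp_uniform(t_query, t0, dt, values):
        """Linear interpolation onto t_query for samples at t0, t0+dt, t0+2*dt, ..."""
        idx = np.clip(((t_query - t0) / dt).astype(int), 0, len(values) - 2)
        frac = (t_query - (t0 + idx * dt)) / dt
        return values[idx] * (1.0 - frac) + values[idx + 1] * frac

    # 6 hours of 1 Hz truth data interpolated onto receiver epochs
    t_truth = np.arange(0.0, 21600.0, 1.0)
    truth = np.sin(t_truth / 500.0)                       # stand-in for a truth trajectory component
    t_rx = np.sort(np.random.default_rng(0).uniform(0, 21599, size=100000))
    fast = interp_uniform(t_rx, t0=0.0, dt=1.0, values=truth)
    slow = np.interp(t_rx, t_truth, truth)                # generic search-based interpolation
    assert np.allclose(fast, slow)                        # same result without per-point searching
    ```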

  3. Pre- and Post-Processing Tools to Streamline the CFD Process

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne Miller

    2002-01-01

    This viewgraph presentation provides information on software development tools to facilitate the use of CFD (Computational Fluid Dynamics) codes. The specific CFD codes FDNS and CORSAIR are profiled, and uses for software development tools with these codes during pre-processing, interim-processing, and post-processing are explained.

  4. Towards effective interactive three-dimensional colour postprocessing

    NASA Technical Reports Server (NTRS)

    Bailey, B. C.; Hajjar, J. F.; Abel, J. F.

    1986-01-01

    Recommendations for the development of effective three-dimensional, graphical color postprocessing are made. First, the evaluation of large, complex numerical models demands that a postprocessor be highly interactive. A menu of available functions should be provided and these operations should be performed quickly so that a sense of continuity and spontaneity exists during the post-processing session. Second, an agenda for three-dimensional color postprocessing is proposed. A postprocessor must be versatile with respect to application and basic algorithms must be designed so that they are flexible. A complete selection of tools is necessary to allow arbitrary specification of views, extraction of qualitative information, and access to detailed quantitative and problem information. Finally, full use of advanced display hardware is necessary if interactivity is to be maximized and effective postprocessing of today's numerical simulations is to be achieved.

  5. MRI Post-processing in Pre-surgical Evaluation

    PubMed Central

    Wang, Z. Irene; Alexopoulos, Andreas V.

    2016-01-01

    Purpose of Review Advanced MRI post-processing techniques are increasingly used to complement visual analysis and elucidate structural epileptogenic lesions. This review summarizes recent developments in MRI post-processing in the context of epilepsy pre-surgical evaluation, with the focus on patients with unremarkable MRI by visual analysis (i.e., “nonlesional” MRI). Recent Findings Various methods of MRI post-processing have been reported to add clinical value in the following areas: (1) lesion detection on an individual level; (2) lesion confirmation, reducing the risk of over-reading the MRI; (3) detection of sulcal/gyral morphologic changes that are particularly difficult for visual analysis; and (4) delineation of cortical abnormalities extending beyond the visible lesion. Future directions to improve the performance of MRI post-processing include using higher magnetic field strength for better signal- and contrast-to-noise ratio, adopting a multi-contrast framework, and integration with other noninvasive modalities. Summary MRI post-processing can provide essential value to increase the yield of structural MRI and should be included as part of the presurgical evaluation of nonlesional epilepsies. MRI post-processing allows for more accurate identification/delineation of cortical abnormalities, which should then be more confidently targeted and mapped. PMID:26900745

  6. Post processing of protein-compound docking for fragment-based drug discovery (FBDD): in-silico structure-based drug screening and ligand-binding pose prediction.

    PubMed

    Fukunishi, Yoshifumi

    2010-01-01

    For fragment-based drug development, both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.

  7. Development of upwind schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Chakravarthy, Sukumar R.

    1987-01-01

    Described are many algorithmic and computational aspects of upwind schemes and their second-order accurate formulations based on Total-Variation-Diminishing (TVD) approaches. An operational unification of the underlying first-order scheme is first presented encompassing Godunov's, Roe's, Osher's, and Split-Flux methods. For higher order versions, the preprocessing and postprocessing approaches to constructing TVD discretizations are considered. TVD formulations can be used to construct relaxation methods for unfactored implicit upwind schemes, which in turn can be exploited to construct space-marching procedures for even the unsteady Euler equations. A major part of the report describes time- and space-marching procedures for solving the Euler equations in 2-D, 3-D, Cartesian, and curvilinear coordinates. Along with many illustrative examples, several results of efficient computations on 3-D supersonic flows with subsonic pockets are presented.
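
    For context, one standard building block of such TVD formulations is a limited linear ("MUSCL"-type) reconstruction. The sketch below shows a minmod-limited reconstruction in one dimension; it is a generic illustration, not one of the specific schemes of the report.

    ```python
    # Hedged sketch of a minmod-limited, second-order reconstruction: the limiter
    # prevents the reconstructed interface states from creating new extrema.
    import numpy as np

    def minmod(a, b):
        """Slope limiter: return the smaller-magnitude slope when signs agree, else 0."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def limited_faces(u):
        """Left/right interface states for interior cells from a limited linear reconstruction."""
        du_minus = u[1:-1] - u[:-2]                 # backward differences
        du_plus = u[2:] - u[1:-1]                   # forward differences
        slope = minmod(du_minus, du_plus)
        return u[1:-1] - 0.5 * slope, u[1:-1] + 0.5 * slope   # states at the two cell faces

    u = np.where(np.linspace(0, 1, 50) < 0.5, 1.0, 0.0)       # discontinuous test profile
    uL, uR = limited_faces(u)
    assert uL.max() <= 1.0 and uR.min() >= 0.0                 # no overshoots at the jump
    ```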

  8. On Fast Post-Processing of Global Positioning System Simulator Truth Data and Receiver Measurements and Solutions Data

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Day, John H. (Technical Monitor)

    2000-01-01

    Post-processing of data related to a GPS receiver test in a GPS simulator and test facility is an important step towards qualifying a receiver for space flight. Although the GPS simulator provides all the parameters needed to analyze a simulation, as well as excellent analysis tools on the simulator workstation, post-processing is not a GPS simulator or receiver function alone, and it must be planned as a separate pre-flight test program requirement. A GPS simulator is a critical resource, and it is desirable to move the pertinent test data off the simulator as soon as a test is completed. The receiver and simulator databases are used to extract the test data files for post-processing. These files are then usually moved from the simulator and receiver systems to a personal computer (PC) platform, where post-processing is done, typically using PC-based commercial software languages and tools. Because of the generality of commercial software systems, their functions are notoriously slow and more often than not are the bottleneck, even for short-duration simulator-based tests. There is a need to do post-processing faster, within an hour after test completion, including all required operations on the simulator and receiver to prepare and move off the post-processing files. This is especially significant in order to use the previous test's feedback for the next simulation setup or to run near back-to-back simulation scenarios. Solving the post-processing timing problem is critical for the success of a pre-flight test program. Towards this goal, an approach was developed that speeds up post-processing by an order of magnitude. It is based on improving the algorithm of the post-processing bottleneck function, using a priori information that is specific to a GPS simulation application and using only the necessary volume of truth data. The presented post-processing scheme was used in support of a few successful space flight missions carrying GPS receivers.

  9. Pre-operative Simulation of the Appropriate C-arm Position Using Computed Tomography Post-processing Software Reduces Radiation and Contrast Medium Exposure During EVAR Procedures.

    PubMed

    Stahlberg, E; Planert, M; Panagiotopoulos, N; Horn, M; Wiedner, M; Kleemann, M; Barkhausen, J; Goltz, J P

    2017-02-01

    The aim was to evaluate the feasibility and efficacy of a new method for pre-operative calculation of an appropriate C-arm position for iliac bifurcation visualisation during endovascular aortic repair (EVAR) procedures by using three dimensional computed tomography angiography (CTA) post-processing software. Post-processing software was used to simulate C-arm angulations in two dimensions (oblique, cranial/caudal) for appropriate visualisation of distal landing zones at the iliac bifurcation during EVAR. Retrospectively, 27 consecutive EVAR patients (25 men, mean ± SD age 73 ± 7 years) were identified; one group of patients (NEW; n = 12 [23 iliac bifurcations]) was compared after implementation of the new method with a group of patients who received a historic method (OLD; n = 15 [23 iliac bifurcations]), treated with EVAR before the method was applied. In the OLD group, a median of 2.0 (interquartile range [IQR] 1-3) digital subtraction angiography runs were needed per iliac bifurcation versus 1.0 (IQR 1-1) runs in the NEW group (p = .007). The median dose area products per iliac bifurcation were 11951 mGy·cm² (IQR 7308-16663 mGy·cm²) for the NEW and 39394 mGy·cm² (IQR 19066-53702 mGy·cm²) for the OLD group, respectively (p = .001). The median volume of contrast per iliac bifurcation was 13.0 mL (IQR 13-13 mL) in the NEW and 26 mL (IQR 13-39 mL) in the OLD group (p = .007). Pre-operative simulation of the appropriate C-arm angulation in two dimensions using dedicated computed tomography angiography post-processing software is feasible and significantly reduces radiation and contrast medium exposure. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  10. Automated removal of spurious intermediate cerebral blood flow volumes improves image quality among older patients: A clinical arterial spin labeling investigation.

    PubMed

    Shirzadi, Zahra; Crane, David E; Robertson, Andrew D; Maralani, Pejman J; Aviv, Richard I; Chappell, Michael A; Goldstein, Benjamin I; Black, Sandra E; MacIntosh, Bradley J

    2015-11-01

    To evaluate the impact of rejecting intermediate cerebral blood flow (CBF) images that are adversely affected by head motion during an arterial spin labeling (ASL) acquisition. Eighty participants were recruited, representing a wide age range (14-90 years) and heterogeneous cerebrovascular health conditions including bipolar disorder, chronic stroke, and moderate to severe white matter hyperintensities of presumed vascular origin. Pseudocontinuous ASL and T1-weighted anatomical images were acquired on a 3T scanner. ASL intermediate CBF images were included based on their contribution to the mean estimate, with the goal of maximizing CBF detectability in gray matter (GM). Simulations were conducted to evaluate the performance of the proposed optimization procedure relative to other ASL postprocessing approaches. Clinical CBF images were also assessed visually by two experienced neuroradiologists. Optimized CBF images (CBFopt) had significantly greater agreement with a synthetic ground truth CBF image and greater CBF detectability relative to the other ASL analysis methods (P < 0.05). Moreover, empirical CBFopt images showed a significantly improved signal-to-noise ratio relative to CBF images obtained from other postprocessing approaches (mean: 12.6%; range 1% to 56%; P < 0.001), and this improvement was age-dependent (P = 0.03). Differences between CBF images from different analysis procedures were not perceptible by visual inspection, while there was moderate agreement between the ratings (κ = 0.44, P < 0.001). This study developed an automated, head-motion-threshold-free procedure to improve the detection of CBF in GM. The improvement in CBF image quality was larger for older participants. © 2015 Wiley Periodicals, Inc.
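
    The general idea of keeping only the intermediate CBF volumes that improve the gray-matter mean can be sketched very loosely as below; this is a toy illustration, not the published algorithm, and the greedy rule, SNR proxy, GM mask, and synthetic data are all assumptions.

    ```python
    # Toy sketch: greedily include intermediate CBF volumes while a simple GM
    # signal-to-noise proxy of the mean image keeps improving.
    import numpy as np

    def select_volumes(cbf_volumes, gm_mask):
        """Greedily add volumes (best first) while the GM SNR proxy of the mean improves."""
        def gm_snr(subset):
            mean_img = cbf_volumes[subset].mean(axis=0)
            gm = mean_img[gm_mask]
            return gm.mean() / (gm.std() + 1e-9)
        order = np.argsort([-gm_snr([i]) for i in range(len(cbf_volumes))])
        selected, best = [order[0]], gm_snr([order[0]])
        for i in order[1:]:
            trial = gm_snr(selected + [i])
            if trial > best:
                selected.append(i)
                best = trial
        return sorted(selected)

    rng = np.random.default_rng(1)
    vols = rng.normal(50, 10, size=(40, 32, 32))          # 40 intermediate CBF images
    vols[5] += rng.normal(0, 80, size=(32, 32))           # one volume corrupted by motion
    mask = np.zeros((32, 32), dtype=bool)
    mask[8:24, 8:24] = True                               # stand-in gray-matter mask
    kept = select_volumes(vols, mask)
    ```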

  11. Correlation of enterohemorrhagic Escherichia coli O157 prevalence in feces, hides, and carcasses of beef cattle during processing.

    PubMed

    Elder, R O; Keen, J E; Siragusa, G R; Barkocy-Gallagher, G A; Koohmaraie, M; Laegreid, W W

    2000-03-28

    A survey was performed to estimate the frequency of enterohemorrhagic Escherichia coli O157:H7 or O157:nonmotile (EHEC O157) in feces and on hides within groups of fed cattle from single sources (lots) presented for slaughter at meat processing plants in the Midwestern United States, as well as the frequency of carcass contamination during processing of cattle within the same lots. Of 29 lots sampled, 72% had at least one EHEC O157-positive fecal sample and 38% had positive hide samples. Overall, EHEC O157 prevalence in feces and on hides was 28% (91 of 327) and 11% (38 of 355), respectively. Carcass samples were taken at three points during processing: preevisceration, postevisceration before antimicrobial intervention, and postprocessing after carcasses entered the cooler. Of 30 lots sampled, 87% had at least one EHEC O157-positive preevisceration sample, 57% of lots were positive postevisceration, and 17% had positive postprocessing samples. Prevalence of EHEC O157 at these three processing points was 43% (148 of 341), 18% (59 of 332), and 2% (6 of 330), respectively. The reduction in carcass prevalence from preevisceration to postprocessing suggests that sanitary procedures within the processing plants were effective. Fecal and hide prevalence were significantly correlated with carcass contamination (P = 0.001), indicating a role for control of EHEC O157 in live cattle.

  12. Folding free energy surfaces of three small proteins under crowding: validation of the postprocessing method by direct simulation

    NASA Astrophysics Data System (ADS)

    Qin, Sanbo; Mittal, Jeetain; Zhou, Huan-Xiang

    2013-08-01

    We have developed a ‘postprocessing’ method for modeling biochemical processes such as protein folding under crowded conditions (Qin and Zhou 2009 Biophys. J. 97 12-19). In contrast to the direct simulation approach, in which the protein undergoing folding is simulated along with crowders, the postprocessing method requires only the folding simulation without crowders. The influence of the crowders is then obtained by taking conformations from the crowder-free simulation and calculating the free energies of transferring to the crowders. This postprocessing yields the folding free energy surface of the protein under crowding. Here the postprocessing results for the folding of three small proteins under ‘repulsive’ crowding are validated by those obtained previously by the direct simulation approach (Mittal and Best 2010 Biophys. J. 98 315-20). This validation confirms the accuracy of the postprocessing approach and highlights its distinct advantages in modeling biochemical processes under cell-like crowded conditions, such as enabling an atomistic representation of the test proteins.

  13. Medical three-dimensional printing opens up new opportunities in cardiology and cardiac surgery.

    PubMed

    Bartel, Thomas; Rivard, Andrew; Jimenez, Alejandro; Mestres, Carlos A; Müller, Silvana

    2018-04-14

    Advanced percutaneous and surgical procedures in structural and congenital heart disease require precise pre-procedural planning and continuous quality control. Although current imaging modalities and post-processing software assist with peri-procedural guidance, their capabilities for spatial conceptualization remain limited in two- and three-dimensional representations. In contrast, 3D printing offers not only improved visualization for procedural planning but also provides substantial information on the accuracy of surgical reconstruction and device implantations. Peri-procedural 3D printing has the potential to set standards for quality assurance and individualized healthcare in cardiovascular medicine and surgery. Nowadays, a variety of clinical applications demonstrate how accurate 3D computer reformatting and physical 3D printouts of native anatomy, embedded pathology, and implants are, and how they may assist in the development of innovative therapies. Accurate imaging of the pathology, including the target region for intervention, its anatomic features, and its spatial relation to the surrounding structures, is critical for selecting the optimal approach and evaluating procedural results. This review describes clinical applications of 3D printing, outlines current limitations, and highlights future implications for quality control, advanced medical education, and training.

  14. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

  15. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has become a tool that helps clinicians view patient information and medical images anywhere and anytime. Transferring medical images with large data size from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited by bandwidth. Besides, limited by computing capability, memory, and power endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time, directly interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to make mobile devices with different platforms able to access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel value retrieval) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance is then implemented to verify the protocol. This instance allows the mobile device to access post-processing services of medical images on the render server via a client application or a web page.
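
    A request under such an XML-based service protocol might look roughly like the sketch below; the element names, attributes, and series UID here are hypothetical illustrations, not the paper's actual schema.

    ```python
    # Hedged sketch: assemble one post-processing request for a render server using
    # hypothetical protocol elements (Auth, Series, Operation, Param).
    import xml.etree.ElementTree as ET

    def build_request(session_token, series_uid, operation, **params):
        """Assemble one post-processing request as an XML string."""
        root = ET.Element("PostProcessingRequest")
        ET.SubElement(root, "Auth", token=session_token)
        ET.SubElement(root, "Series", uid=series_uid)
        op = ET.SubElement(root, "Operation", name=operation)
        for key, value in params.items():                 # e.g. MIP slab thickness, MPR plane
            ET.SubElement(op, "Param", name=key, value=str(value))
        return ET.tostring(root, encoding="unicode")

    print(build_request("abc123", "1.2.840.10008.9999", "MaximumIntensityProjection",
                        slab_thickness_mm=10, orientation="axial"))
    ```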

  16. Acquisition and Post-Processing of Immunohistochemical Images.

    PubMed

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practices but also avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
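
    Two of the corrections listed above, flatfield correction and a simple gray-world white balance, can be sketched as follows; the parameter choices and synthetic images are illustrative, not a prescribed workflow.

    ```python
    # Hedged sketch: divide out uneven illumination with a blank (flatfield) exposure
    # and remove a color cast with a gray-world channel scaling.
    import numpy as np

    def flatfield_correct(image, flatfield, darkfield=None):
        """Divide out uneven illumination recorded in a blank (flatfield) exposure."""
        dark = 0.0 if darkfield is None else darkfield
        gain = flatfield - dark
        gain = gain / gain.mean()                         # normalize so mean brightness is preserved
        return (image - dark) / np.clip(gain, 1e-6, None)

    def gray_world_white_balance(rgb):
        """Scale each channel so its mean matches the overall mean (removes color casts)."""
        means = rgb.reshape(-1, 3).mean(axis=0)
        return np.clip(rgb * (means.mean() / means), 0, 255)

    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:256, 0:256]
    flat = 0.6 + 0.4 * np.exp(-((x - 128)**2 + (y - 128)**2) / 8000.0)   # vignetted illumination
    img = 120.0 * flat + rng.normal(0, 2, size=flat.shape)               # brightfield-like image
    corrected = flatfield_correct(img, flatfield=200.0 * flat)
    rgb = np.dstack([corrected, 0.9 * corrected, 1.1 * corrected])       # image with a color cast
    balanced = gray_world_white_balance(rgb)
    ```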

  17. When Spreadsheets Become Software - Quality Control Challenges and Approaches - 13360

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fountain, Stefanie A.; Chen, Emmie G.; Beech, John F.

    2013-07-01

    As part of a preliminary waste acceptance criteria (PWAC) development, several commercial models were employed, including the Hydrologic Evaluation of Landfill Performance model (HELP) [1], the Disposal Unit Source Term - Multiple Species model (DUSTMS) [2], and the Analytical Transient One, Two, and Three-Dimensional model (AT123D) [3]. The results of these models were post-processed in MS Excel spreadsheets to convert the model results to alternate units, compare the groundwater concentrations to the groundwater concentration thresholds, and then to adjust the waste contaminant masses (based on average concentration over the waste volume) as needed in an attempt to achieve groundwater concentrations at the limiting point of assessment that would meet the compliance concentrations while maximizing the potential use of the landfill (i.e., maximizing the volume of projected waste being generated that could be placed in the landfill). During the course of the PWAC calculation development, one of the Microsoft (MS) Excel spreadsheets used to post-process the results of the commercial model packages grew to include more than 575,000 formulas across 18 worksheets. This spreadsheet was used to assess six base scenarios as well as nine uncertainty/sensitivity scenarios. The complexity of the spreadsheet resulted in the need for a rigorous quality control (QC) procedure to verify data entry and confirm the accuracy of formulas. (authors)

  18. Post-processing Seasonal Precipitation Forecasts via Integrating Climate Indices and the Analog Approach

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zhang, Y.; Wood, A.; Lee, H. S.; Wu, L.; Schaake, J. C.

    2016-12-01

    Seasonal precipitation forecasts are a primary driver for seasonal streamflow prediction that is critical for a range of water resources applications, such as reservoir operations and drought management. However, it is well known that seasonal precipitation forecasts from climate models are often biased and also too coarse in spatial resolution for hydrologic applications. Therefore, post-processing procedures such as downscaling and bias correction are often needed. In this presentation, we discuss results from a recent study that applies a two-step methodology to downscale and correct the ensemble mean precipitation forecasts from the Climate Forecast System (CFS). First, CFS forecasts are downscaled and bias corrected using monthly reforecast analogs: we identify past precipitation forecasts that are similar to the current forecast, and then use the finer-scale observational analysis fields from the corresponding dates to represent the post-processed ensemble forecasts. Second, we construct the posterior distribution of forecast precipitation from the post-processed ensemble by integrating climate indices: a correlation analysis is performed to identify dominant climate indices for the study region, which are then used to weight the analysis analogs selected in the first step using a Bayesian approach. The methodology is applied to the California Nevada River Forecast Center (CNRFC) and the Middle Atlantic River Forecast Center (MARFC) regions for 1982-2015, using the North American Land Data Assimilation System (NLDAS-2) precipitation as the analysis. The results from cross validation show that the post-processed CFS precipitation forecasts are considerably more skillful than the raw CFS forecasts with the analog approach alone. Integrating climate indices can further improve the skill if the number of ensemble members considered is large enough; however, the improvement is generally limited to the first couple of months when compared against climatology. Impacts of various factors such as ensemble size, lead time, and choice of climate indices will also be discussed.
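
    The two-step idea above (analog selection, then weighting of the selected analogs by climate-index similarity) can be sketched very simply as below; this is not the study's operational implementation, and the distance metric, Gaussian weighting, and synthetic data are assumptions.

    ```python
    # Hedged, simplified sketch: pick reforecast analogs of the current forecast,
    # then weight their analysis fields by closeness of a climate index.
    import numpy as np

    def analog_postprocess(current_fcst, hist_fcsts, hist_analyses, hist_index,
                           current_index, n_analogs=20, index_scale=1.0):
        """Return weighted analog members drawn from finer-scale analysis fields."""
        dist = np.linalg.norm(hist_fcsts - current_fcst, axis=1)      # step 1: forecast similarity
        idx = np.argsort(dist)[:n_analogs]                            # closest reforecast analogs
        w = np.exp(-0.5 * ((hist_index[idx] - current_index) / index_scale) ** 2)
        w /= w.sum()                                                  # step 2: climate-index weights
        return hist_analyses[idx], w

    # toy data: 400 past months of coarse forecasts, fine analyses, and an ENSO-like index
    rng = np.random.default_rng(2)
    hist_fcsts = rng.normal(size=(400, 16))                 # coarse forecast fields (flattened)
    hist_analyses = rng.gamma(2.0, 40.0, size=(400, 64))    # finer-scale precipitation analyses
    hist_index = rng.normal(size=400)
    members, weights = analog_postprocess(hist_fcsts[0], hist_fcsts, hist_analyses,
                                          hist_index, current_index=0.8)
    posterior_mean = weights @ members                      # weighted ensemble-mean field
    ```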

  19. Possible world based consistency learning model for clustering and classifying uncertain data.

    PubMed

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so the effectiveness relies heavily on the post-processing method and the efficiency suffers. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and then improve the clustering and classification performance. Meanwhile, this model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be directly obtained without any post-processing procedure. Furthermore, for the clustering and classification tasks, we derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
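
    The spectral property behind the rank constraint can be demonstrated in a few lines: the number of zero eigenvalues of a graph Laplacian equals the number of connected components, so a consensus affinity matrix constrained to exactly c components yields the class assignment directly. The sketch below checks this on a toy block-diagonal affinity matrix; it illustrates the property only and is not the paper's optimization procedure.

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components

      def labels_from_affinity(W):
          # Unnormalized Laplacian of a symmetric, nonnegative affinity matrix.
          L = np.diag(W.sum(axis=1)) - W
          n_zero = int(np.sum(np.linalg.eigvalsh(L) < 1e-8))   # components via the spectrum
          n_comp, labels = connected_components(csr_matrix(W > 0), directed=False)
          assert n_zero == n_comp   # zero eigenvalues == connected components
          return labels

      # Block-diagonal affinity with two components -> two clusters, no post-processing.
      W = np.zeros((6, 6))
      W[:3, :3] = 1.0
      W[3:, 3:] = 1.0
      np.fill_diagonal(W, 0.0)
      print(labels_from_affinity(W))   # e.g. [0 0 0 1 1 1]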

  20. Trends in the predictive performance of raw ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Scheuerer, Michael; Pappenberger, Florian; Bogner, Konrad; Haiden, Thomas

    2015-04-01

    Over the last two decades the paradigm in weather forecasting has shifted from being deterministic to probabilistic. Accordingly, numerical weather prediction (NWP) models have been run increasingly as ensemble forecasting systems. The goal of such ensemble forecasts is to approximate the forecast probability distribution by a finite sample of scenarios. Global ensemble forecast systems, like the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, are prone to probabilistic biases, and are therefore not reliable. They particularly tend to be underdispersive for surface weather parameters. Hence, statistical post-processing is required in order to obtain reliable and sharp forecasts. In this study we apply statistical post-processing to ensemble forecasts of near-surface temperature, 24-hour precipitation totals, and near-surface wind speed from the global ECMWF model. Our main objective is to evaluate the evolution of the difference in skill between the raw ensemble and the post-processed forecasts. The ECMWF ensemble is under continuous development, and hence its forecast skill improves over time. Parts of these improvements may be due to a reduction of probabilistic bias. Thus, we first hypothesize that the gain by post-processing decreases over time. Based on ECMWF forecasts from January 2002 to March 2014 and corresponding observations from globally distributed stations we generate post-processed forecasts by ensemble model output statistics (EMOS) for each station and variable. Parameter estimates are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over rolling training periods that consist of the n days preceding the initialization dates. Given the higher average skill in terms of CRPS of the post-processed forecasts for all three variables, we analyze the evolution of the difference in skill between raw ensemble and EMOS forecasts. The fact that the gap in skill remains almost constant over time, especially for near-surface wind speed, suggests that improvements to the atmospheric model have an effect quite different from what calibration by statistical post-processing is doing. That is, they are increasing potential skill. Thus this study indicates that (a) further model development is important even if one is just interested in point forecasts, and (b) statistical post-processing is important because it will keep adding skill in the foreseeable future.
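
    The EMOS calibration referred to above can be sketched compactly. The fragment below fits a Gaussian ensemble model output statistics model by minimizing the closed-form Gaussian CRPS over synthetic training data; this is a standard EMOS formulation used for illustration and not necessarily the exact configuration of the study.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def crps_gaussian(mu, sigma, obs):
          # Closed-form CRPS of a normal predictive distribution.
          z = (obs - mu) / sigma
          return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

      def fit_emos(ens_mean, ens_var, obs):
          # Predictive mean and variance are affine in the ensemble mean and variance.
          def objective(p):
              a, b, c, d = p
              mu = a + b * ens_mean
              sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
              return np.mean(crps_gaussian(mu, sigma, obs))
          return minimize(objective, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

      rng = np.random.default_rng(1)
      truth = rng.normal(10.0, 3.0, 500)
      ens = truth[:, None] + rng.normal(1.0, 1.5, (500, 20))   # biased, underdispersive ensemble
      a, b, c, d = fit_emos(ens.mean(axis=1), ens.var(axis=1), truth)
      print(np.round([a, b, c, d], 2))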

  1. Annual Research Briefs, 1992

    NASA Technical Reports Server (NTRS)

    Spinks, Debra (Compiler)

    1993-01-01

    This report contains the 1992 annual progress reports of the Research Fellows and students of the Center for Turbulence Research. Considerable effort was focused on the large eddy simulation technique for computing turbulent flows. This increased activity has been inspired by the recent predictive successes of the dynamic subgrid scale modeling procedure which was introduced during the 1990 Summer Program. Several Research Fellows and students are presently engaged in both the development of subgrid scale models and their applications to complex flows. The first group of papers in this report contain the findings of these studies. They are followed by reports grouped in the general areas of modeling, turbulence physics, and turbulent reacting flows. The last contribution in this report outlines the progress made on the development of the CTR post-processing facility.

  2. Micromechanics Analysis Code Post-Processing (MACPOST) User Guide. 1.0

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Comiskey, Michele D.; Bednarcyk, Brett A.

    1999-01-01

    As advanced composite materials have gained wider usage, the need for analytical models and computer codes to predict the thermomechanical deformation response of these materials has increased significantly. Recently, a micromechanics technique called the generalized method of cells (GMC) has been developed, which has the capability to fulfill this goal. To provide a framework for GMC, the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) has been developed. As MAC/GMC has been updated, significant improvements have been made to the post-processing capabilities of the code. Through the MACPOST program, which operates directly within the MSC/PATRAN graphical pre- and post-processing package, a direct link between the analysis capabilities of MAC/GMC and the post-processing capabilities of MSC/PATRAN has been established. MACPOST has simplified the production, printing, and exportation of results for unit cells analyzed by MAC/GMC. MACPOST allows different micro-level quantities to be plotted quickly and easily in contour plots. In addition, meaningful data for X-Y plots can be examined. MACPOST thus serves as an important analysis and visualization tool for the macro- and micro-level data generated by MAC/GMC. This report serves as the user's manual for the MACPOST program.

  3. Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib; Siddique, Ridwan; Reed, Seann; Ahnert, Peter; Mendoza, Pablo; Mejia, Alfonso

    2018-03-01

    The relative roles of statistical weather preprocessing and streamflow postprocessing in hydrological ensemble forecasting at short- to medium-range forecast lead times (day 1-7) are investigated. For this purpose, a regional hydrologic ensemble prediction system (RHEPS) is developed and implemented. The RHEPS comprises the following components: (i) hydrometeorological observations (multisensor precipitation estimates, gridded surface temperature, and gauged streamflow); (ii) weather ensemble forecasts (precipitation and near-surface temperature) from the National Centers for Environmental Prediction 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2); (iii) NOAA's Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM); (iv) heteroscedastic censored logistic regression (HCLR) as the statistical preprocessor; (v) two statistical postprocessors, an autoregressive model with a single exogenous variable (ARX(1,1)) and quantile regression (QR); and (vi) a comprehensive verification strategy. To implement the RHEPS, 1- to 7-day weather forecasts from the GEFSRv2 are used to force HL-RDHM and generate raw ensemble streamflow forecasts. Forecasting experiments are conducted in four nested basins in the US Middle Atlantic region, ranging in size from 381 to 12,362 km2. Results show that the HCLR preprocessed ensemble precipitation forecasts have greater skill than the raw forecasts. These improvements are more noticeable in the warm season at the longer lead times (> 3 days). Both postprocessors, ARX(1,1) and QR, show gains in skill relative to the raw ensemble streamflow forecasts, particularly in the cool season, but QR outperforms ARX(1,1). The scenarios that implement preprocessing and postprocessing separately tend to perform similarly, although the postprocessing-alone scenario is often more effective. The scenario involving both preprocessing and postprocessing consistently outperforms the other scenarios. In some cases, however, the differences between this scenario and the scenario with postprocessing alone are not as significant. We conclude that implementing both preprocessing and postprocessing ensures the most skill improvements, but postprocessing alone can often be a competitive alternative.
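
    Of the two postprocessors compared, quantile regression is the simpler to sketch. The fragment below is a single-predictor illustration on synthetic flows (not the study's implementation): it fits conditional quantiles of observed streamflow given the raw forecast by minimizing the pinball loss.

      import numpy as np
      from scipy.optimize import minimize

      def fit_quantile(raw, obs, tau):
          # Linear quantile regression: minimize the pinball (check) loss for quantile tau.
          def pinball(p):
              a, b = p
              r = obs - (a + b * raw)
              return np.mean(np.maximum(tau * r, (tau - 1) * r))
          return minimize(pinball, x0=[0.0, 1.0], method="Nelder-Mead").x

      rng = np.random.default_rng(2)
      raw = rng.gamma(3.0, 50.0, 1000)                     # raw forecast flows, m3/s (synthetic)
      obs = 0.8 * raw + rng.normal(0.0, 20.0 + 0.1 * raw)  # heteroscedastic synthetic "observations"
      coeffs = {tau: fit_quantile(raw, obs, tau) for tau in (0.1, 0.5, 0.9)}
      new_fcst = 250.0
      print({tau: round(a + b * new_fcst, 1) for tau, (a, b) in coeffs.items()})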

  4. Self-calibrated humidity sensor in CMOS without post-processing.

    PubMed

    Nizhnik, Oleg; Higuchi, Kohei; Maenaka, Kazusuke

    2012-01-01

    A 1.1 μW power dissipation, voltage-output humidity sensor with 10% relative humidity accuracy was developed in the LFoundry 0.15 μm CMOS technology without post-processing. The sensor consists of a woven lateral array of electrodes implemented in CMOS top metal, a humidity-sensitive layer of Intervia Photodielectric 8023D-10, a CMOS capacitance to voltage converter, and the self-calibration circuitry.

  5. Color postprocessing for 3-dimensional finite element mesh quality evaluation and evolving graphical workstation

    NASA Technical Reports Server (NTRS)

    Panthaki, Malcolm J.

    1987-01-01

    Three general tasks on general-purpose, interactive color graphics postprocessing for three-dimensional computational mechanics were accomplished. First, the existing program (POSTPRO3D) was ported to a high-resolution device. In the course of this transfer, numerous enhancements were implemented in the program. The performance of the hardware was evaluated from the point of view of engineering postprocessing, and the characteristics of future hardware were discussed. Second, interactive graphical tools were implemented to facilitate qualitative mesh evaluation from a single analysis. The literature was surveyed and a bibliography compiled. Qualitative mesh sensors were examined, and the use of two-dimensional plots of unaveraged responses on the surface of three-dimensional continua was emphasized in an interactive color raster graphics environment. Finally, a postprocessing environment was designed for state-of-the-art workstation technology. Modularity, personalization of the environment, integration of the engineering design processes, and the development and use of high-level graphics tools are some of the features of the intended environment.

  6. Rotational imaging optical coherence tomography for full-body mouse embryonic imaging

    PubMed Central

    Wu, Chen; Sudheendran, Narendran; Singh, Manmohan; Larina, Irina V.; Dickinson, Mary E.; Larin, Kirill V.

    2016-01-01

    Optical coherence tomography (OCT) has been widely used to study mammalian embryonic development with the advantages of high spatial and temporal resolutions and without the need for any contrast enhancement probes. However, the limited imaging depth of traditional OCT might prohibit visualization of the full embryonic body. To overcome this limitation, we have developed a new methodology to enhance the imaging range of OCT in embryonic day (E) 9.5 and 10.5 mouse embryos using rotational imaging. Rotational imaging OCT (RI-OCT) enables full-body imaging of mouse embryos by performing multiangle imaging. A series of postprocessing procedures was performed on each cross-section image, resulting in the final composited image. The results demonstrate that RI-OCT is able to improve the visualization of internal mouse embryo structures as compared to conventional OCT. PMID:26848543

  7. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burge, S.W.

    This report describes the theory and structure of the FORCE2 flow program. The manual describes the governing model equations, the solution procedure, and their implementation in the computer program. FORCE2 is an extension of an existing B&V multidimensional, two-phase flow program. FORCE2 was developed for application to fluid beds by implementing a gas-solids flow modeling technology derived, in part, during a joint government-industry research program, "Erosion of FBC Heat Transfer Tubes," coordinated by Argonne National Laboratory. The development of FORCE2 was sponsored by ASEA-Babcock, an industry participant in this program. This manual is the principal documentation for the program theory and organization. Program usage and post-processing of code predictions with the FORCE2 post-processor are described in a companion report, FORCE2 -- A Multidimensional Flow Program for Fluid Beds, User's Guide. This manual is segmented into sections to facilitate its usage. In section 2.0, the mass and momentum conservation principles, the basis for the code, are presented. In section 3.0, the constitutive relations used in modeling gas-solids hydrodynamics are given. The finite-difference model equations are derived in section 4.0 and the solution procedures described in sections 5.0 and 6.0. Finally, the implementation of the model equations and solution procedure in FORCE2 is described in section 7.0.

  9. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  10. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  11. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 3: Advanced Fan Section Grid Generator Final Report and Computer Program User's Manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1991-01-01

    A procedure is studied for generating three-dimensional grids for advanced turbofan engine fan section geometries. The procedure constructs a discrete mesh about engine sections containing the fan stage, an arbitrary number of axisymmetric radial flow splitters, a booster stage, and a bifurcated core/bypass flow duct with guide vanes. The mesh is an h-type grid system, the points being distributed with a transfinite interpolation scheme with axial and radial spacing being user specified. Elliptic smoothing of the grid in the meridional plane is a post-process option. The grid generation scheme is consistent with aerodynamic analyses utilizing the average-passage equation system developed by Dr. John Adamczyk of NASA Lewis. This flow solution scheme requires a series of blade specific grids each having a common axisymmetric mesh, but varying in the circumferential direction according to the geometry of the specific blade row.

  12. Data Release of UV to Submillimeter Broadband Fluxes for Simulated Galaxies from the EAGLE Project

    NASA Astrophysics Data System (ADS)

    Camps, Peter; Trčka, Ana; Trayford, James; Baes, Maarten; Theuns, Tom; Crain, Robert A.; McAlpine, Stuart; Schaller, Matthieu; Schaye, Joop

    2018-02-01

    We present dust-attenuated and dust emission fluxes for sufficiently resolved galaxies in the EAGLE suite of cosmological hydrodynamical simulations, calculated with the SKIRT radiative transfer code. The post-processing procedure includes specific components for star formation regions, stellar sources, and diffuse dust and takes into account stochastic heating of dust grains to obtain realistic broadband fluxes in the wavelength range from ultraviolet to submillimeter. The mock survey includes nearly half a million simulated galaxies with stellar masses above 10^8.5 M_⊙ across six EAGLE models. About two-thirds of these galaxies, residing in 23 redshift bins up to z = 6, have a sufficiently resolved metallic gas distribution to derive meaningful dust attenuation and emission, with the important caveat that the same dust properties were used at all redshifts. These newly released data complement the already publicly available information about the EAGLE galaxies, which includes intrinsic properties derived by aggregating the properties of the smoothed particles representing matter in the simulation. We further provide an open-source framework of Python procedures for post-processing simulated galaxies with the radiative transfer code SKIRT. The framework allows any third party to calculate synthetic images, spectral energy distributions, and broadband fluxes for EAGLE galaxies, taking into account the effects of dust attenuation and emission.

  13. MRO DKF Post-Processing Tool

    NASA Technical Reports Server (NTRS)

    Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat

    2008-01-01

    This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling resulting in incorrect DKF generation that is then cleaned up in post-processing.

  14. Evaluation of ensemble precipitation forecasts generated through post-processing in a Canadian catchment

    NASA Astrophysics Data System (ADS)

    Jha, Sanjeev K.; Shrestha, Durga L.; Stadnyk, Tricia A.; Coulibaly, Paulin

    2018-03-01

    Flooding in Canada is often caused by heavy rainfall during the snowmelt period. Hydrologic forecast centers rely on precipitation forecasts obtained from numerical weather prediction (NWP) models to force hydrological models for streamflow forecasting. The uncertainties in raw quantitative precipitation forecasts (QPFs) are amplified by physiographic and orographic effects over a diverse landscape, particularly in the western catchments of Canada. A Bayesian post-processing approach called rainfall post-processing (RPP), developed in Australia (Robertson et al., 2013; Shrestha et al., 2015), has been applied to assess its forecast performance in a Canadian catchment. Raw QPFs obtained from two sources, the Global Ensemble Forecast System (GEFS) Reforecast 2 project from the National Centers for Environmental Prediction and the Global Deterministic Prediction System (GDPS) from Environment and Climate Change Canada, are used in this study. The study period from January 2013 to December 2015 covered a major flood event in Calgary, Alberta, Canada. Post-processed results show that the RPP is able to remove the bias and reduce the errors of both GEFS and GDPS forecasts. Ensembles generated from the RPP reliably quantify the forecast uncertainty.

  15. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS® and ANSYS® and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS®/Standard using the VCCT for ABAQUS® add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS® finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS® add-on.
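
    The underlying VCCT relations for a two-dimensional model with four-node elements can be stated in a short sketch (textbook form, not tied to either commercial code's implementation); the crack-tip nodal forces and the relative displacements one element behind the tip are assumed to be available from the finite element post-processing, and the element length at the front is uniform.

      def vcct_2d(Fx_tip, Fy_tip, du_behind, dv_behind, delta_a, width=1.0):
          """Return (G_I, G_II) per unit width from crack-tip nodal forces and the
          relative opening/sliding displacements one element behind the tip."""
          area = 2.0 * delta_a * width
          G_I = Fy_tip * dv_behind / area    # opening mode
          G_II = Fx_tip * du_behind / area   # sliding mode
          return G_I, G_II

      # Hypothetical values extracted from a finite element post-processing run.
      print(vcct_2d(Fx_tip=12.0, Fy_tip=85.0, du_behind=1.5e-4, dv_behind=9.0e-4, delta_a=0.5))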

  16. Process and Post-Process: A Discursive History.

    ERIC Educational Resources Information Center

    Matsuda, Paul Kei

    2003-01-01

    Examines the history of process and post-process in composition studies, focusing on ways in which terms such as "current-traditional rhetoric," "process," and "post-process" have contributed to the discursive construction of reality. Argues that use of the term post-process in the context of second language writing needs to be guided by a…

  17. Cerebral Microbleeds: A Field Guide to their Detection and Interpretation

    PubMed Central

    Greenberg, Steven M.; Vernooij, Meike W.; Cordonnier, Charlotte; Viswanathan, Anand; Salman, Rustam Al-Shahi; Warach, Steven; Launer, Lenore J.; Van Buchem, Mark A.; Breteler, Monique M.B.

    2012-01-01

    Cerebral microbleeds (CMB) are increasingly recognized neuroimaging findings, occurring with cerebrovascular disease, dementia, and normal aging. Recent years have seen substantial progress, particularly in developing newer MRI methodologies for CMB detection and applying them to population-based elderly samples. This review focuses on these recent developments and their impact on two major questions: how CMB are detected, and how they should be interpreted. There is now ample evidence that the prevalence and number of detected CMB vary with MRI characteristics such as pulse sequence, sequence parameters, spatial resolution, magnetic field strength, and post-processing, underlining the importance of MRI technique in interpreting studies. Recent investigations using sensitive techniques find the prevalence of CMB detected in community-dwelling elderly to be surprisingly high. We propose procedural guidelines for identifying CMB and suggest possible future approaches for elucidating the role of these common lesions as markers for, and potential contributors to, small vessel brain disease. PMID:19161908

  18. A new framework for analysing automated acoustic species-detection data: occupancy estimation and optimization of recordings post-processing

    USGS Publications Warehouse

    Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.

    2018-01-01

    The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and found that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunity and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.

  19. Verification of Ensemble Forecasts for the New York City Operations Support Tool

    NASA Astrophysics Data System (ADS)

    Day, G.; Schaake, J. C.; Thiemann, M.; Draijer, S.; Wang, L.

    2012-12-01

    The New York City water supply system operated by the Department of Environmental Protection (DEP) serves nine million people. It covers 2,000 square miles of portions of the Catskill, Delaware, and Croton watersheds, and it includes nineteen reservoirs and three controlled lakes. DEP is developing an Operations Support Tool (OST) to support its water supply operations and planning activities. OST includes historical and real-time data, a model of the water supply system complete with operating rules, and lake water quality models developed to evaluate alternatives for managing turbidity in the New York City Catskill reservoirs. OST will enable DEP to manage turbidity in its unfiltered system while satisfying its primary objective of meeting the City's water supply needs, in addition to considering secondary objectives of maintaining ecological flows, supporting fishery and recreation releases, and mitigating downstream flood peaks. The current version of OST relies on statistical forecasts of flows in the system based on recent observed flows. To improve short-term decision making, plans are being made to transition to National Weather Service (NWS) ensemble forecasts based on hydrologic models that account for short-term weather forecast skill, longer-term climate information, as well as the hydrologic state of the watersheds and recent observed flows. To ensure that the ensemble forecasts are unbiased and that the ensemble spread reflects the actual uncertainty of the forecasts, a statistical model has been developed to post-process the NWS ensemble forecasts to account for hydrologic model error as well as any inherent bias and uncertainty in initial model states, meteorological data and forecasts. The post-processor is designed to produce adjusted ensemble forecasts that are consistent with the DEP historical flow sequences that were used to develop the system operating rules. A set of historical hindcasts that is representative of the real-time ensemble forecasts is needed to verify that the post-processed forecasts are unbiased, statistically reliable, and preserve the skill inherent in the "raw" NWS ensemble forecasts. A verification procedure and set of metrics will be presented that provide an objective assessment of ensemble forecasts. The procedure will be applied to both raw ensemble hindcasts and to post-processed ensemble hindcasts. The verification metrics will be used to validate proper functioning of the post-processor and to provide a benchmark for comparison of different types of forecasts. For example, current NWS ensemble forecasts are based on climatology, using each historical year to generate a forecast trace. The NWS Hydrologic Ensemble Forecast System (HEFS) under development will utilize output from both the National Oceanic Atmospheric Administration (NOAA) Global Ensemble Forecast System (GEFS) and the Climate Forecast System (CFS). Incorporating short-term meteorological forecasts and longer-term climate forecast information should provide sharper, more accurate forecasts. Hindcasts from HEFS will enable New York City to generate verification results to validate the new forecasts and further fine-tune system operating rules. Project verification results will be presented for different watersheds across a range of seasons, lead times, and flow levels to assess the quality of the current ensemble forecasts.
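
    Two metrics commonly used in such verification procedures are the sample CRPS and the rank histogram. The sketch below computes both for a synthetic hindcast set; it is illustrative only and does not use the project data.

      import numpy as np

      def sample_crps(ens, obs):
          """ens: (n, m) hindcast ensembles, obs: (n,) observations."""
          term1 = np.mean(np.abs(ens - obs[:, None]), axis=1)
          term2 = 0.5 * np.mean(np.abs(ens[:, :, None] - ens[:, None, :]), axis=(1, 2))
          return np.mean(term1 - term2)

      def rank_histogram(ens, obs):
          # Rank of each observation within its ensemble; a flat histogram indicates reliability.
          ranks = np.sum(ens < obs[:, None], axis=1)
          return np.bincount(ranks, minlength=ens.shape[1] + 1)

      rng = np.random.default_rng(3)
      obs = rng.gamma(2.0, 30.0, 400)
      ens = obs[:, None] + rng.normal(5.0, 10.0, (400, 11))   # biased 11-member synthetic ensemble
      print(round(sample_crps(ens, obs), 2))
      print(rank_histogram(ens, obs))   # a sloped or U-shaped histogram flags bias or misdispersion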

  20. A new procedure of modal parameter estimation for high-speed digital image correlation

    NASA Astrophysics Data System (ADS)

    Huňady, Róbert; Hagara, Martin

    2017-09-01

    The paper deals with the use of 3D digital image correlation in determining modal parameters of mechanical systems. It is a non-contact optical method, which for the measurement of full-field spatial displacements and strains of bodies uses precise digital cameras with high image resolution. Most often this method is utilized for testing of components or determination of material properties of various specimens. In the case of using high-speed cameras for measurement, the correlation system is capable of capturing various dynamic behaviors, including vibration. This enables the potential use of the mentioned method in experimental modal analysis. For that purpose, the authors proposed a measuring chain for the correlation system Q-450 and developed a software application called DICMAN 3D, which allows the direct use of this system in the area of modal testing. The created application provides the post-processing of measured data and the estimation of modal parameters. It has its own graphical user interface, in which several algorithms for the determination of natural frequencies, mode shapes and damping of particular modes of vibration are implemented. The paper describes the basic principle of the new estimation procedure which is crucial in the light of post-processing. Since the FRF matrix resulting from the measurement is usually relatively large, the estimation of modal parameters directly from the FRF matrix may be time-consuming and may occupy a large part of computer memory. The procedure implemented in DICMAN 3D provides a significant reduction in memory requirements and computational time while achieving a high accuracy of modal parameters. Its computational efficiency is particularly evident when the FRF matrix consists of thousands of measurement DOFs. The functionality of the created software application is presented on a practical example in which the modal parameters of a composite plate excited by an impact hammer were determined. For the verification of the obtained results a verification experiment was conducted during which the vibration responses were measured using conventional acceleration sensors. In both cases MIMO analysis was realized.

  1. A three-image algorithm for hard x-ray grating interferometry.

    PubMed

    Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia

    2013-08-12

    A three-image method to extract absorption, refraction and scattering information for hard x-ray grating interferometry is presented. The method comprises a post-processing approach alternative to the conventional phase stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with phase-stepping. This method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
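
    The retrieval can be pictured as a generic three-point Fourier analysis of the stepping curve I_k = a + b*cos(phi + 2*pi*k/3): the per-pixel mean a, modulation b, and phase phi follow from a three-point discrete Fourier transform, and comparing sample and reference scans separates the absorption, refraction, and scattering signals. The sketch below illustrates this standard retrieval and is not necessarily the authors' exact algorithm.

      import numpy as np

      def retrieve(I0, I1, I2):
          # Mean, modulation, and phase of the stepping curve from three samples per pixel.
          stack = np.stack([I0, I1, I2])
          a = stack.mean(axis=0)
          c = np.tensordot(np.exp(-2j * np.pi * np.arange(3) / 3), stack, axes=1)
          return a, 2.0 * np.abs(c) / 3.0, np.angle(c)

      def three_image_signals(sample_imgs, reference_imgs):
          a_s, b_s, p_s = retrieve(*sample_imgs)
          a_r, b_r, p_r = retrieve(*reference_imgs)
          transmission = a_s / a_r                          # absorption signal
          refraction = np.angle(np.exp(1j * (p_s - p_r)))   # differential phase (refraction)
          scattering = (b_s / a_s) / (b_r / a_r)            # visibility reduction (scattering)
          return transmission, refraction, scattering

      # Synthetic 4x4-pixel check: transmission ~0.7, refraction ~0.3 rad, scattering ~0.6.
      rng = np.random.default_rng(4)
      ref = [1.0 + 0.5 * np.cos(2 * np.pi * k / 3) + rng.normal(0, 1e-3, (4, 4)) for k in range(3)]
      sam = [0.7 * (1.0 + 0.3 * np.cos(0.3 + 2 * np.pi * k / 3)) + rng.normal(0, 1e-3, (4, 4)) for k in range(3)]
      T, R, S = three_image_signals(sam, ref)
      print(T.mean().round(2), R.mean().round(2), S.mean().round(2))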

  2. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  3. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
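
    The reported comparison statistics are straightforward to reproduce on paired readings. The sketch below applies the Pearson correlation, a linear fit, and a paired t-test to hypothetical PD% values (not the study data) using SciPy.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      pd_raw = rng.uniform(5, 60, 81)              # PD% from raw images (hypothetical)
      pd_proc = pd_raw + rng.normal(1.2, 2.0, 81)  # PD% from processed images, small offset (hypothetical)

      r, p_r = stats.pearsonr(pd_raw, pd_proc)
      t, p_t = stats.ttest_rel(pd_raw, pd_proc)
      slope, intercept = np.polyfit(pd_raw, pd_proc, 1)

      print(f"r = {r:.2f} (p = {p_r:.1e}); paired t-test p = {p_t:.3f}")
      print(f"linear fit: processed = {slope:.2f} * raw + {intercept:.2f}")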

  4. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    PubMed

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of considerable importance for image reading and diagnosis. As part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on the image series. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then describes the design and implementation of a web service system for three-dimensional post-processing of medical images based on the WADO protocol. To improve the scalability of the proposed system, business tasks and calculation operations were separated into two modules. The results show that the proposed system can provide a three-dimensional post-processing service for medical images to multiple clients at the same time, meeting the demand for accessing three-dimensional post-processing operations on volume data over the web.

  5. Selective Laser Melting Produced Ti-6Al-4V: Post-Process Heat Treatments to Achieve Superior Tensile Properties

    PubMed Central

    Becker, Thorsten H.

    2018-01-01

    Current post-process heat treatments applied to selective laser melting produced Ti-6Al-4V do not achieve the same microstructure and therefore superior tensile behaviour of thermomechanical processed wrought Ti-6Al-4V. Due to the growing demand for selective laser melting produced parts in industry, research and development towards improved mechanical properties is ongoing. This study is aimed at developing post-process annealing strategies to improve tensile behaviour of selective laser melting produced Ti-6Al-4V parts. Optical and electron microscopy was used to study α grain morphology as a function of annealing temperature, hold time and cooling rate. Quasi-static uniaxial tensile tests were used to measure tensile behaviour of different annealed parts. It was found that elongated α’/α grains can be fragmented into equiaxial grains through applying a high temperature annealing strategy. It is shown that bi-modal microstructures achieve a superior tensile ductility to current heat treated selective laser melting produced Ti-6Al-4V samples. PMID:29342079

  6. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  7. Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.

    PubMed

    Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2014-11-25

    In order to capture the short-term clock variation of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities need to be estimated simultaneously. However, on the one hand better estimates are expected by including more stations, and on the other hand satellites from different GNSS systems must be processed integratively for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations, in which ambiguities are eliminated, are usually employed. Because the epoch-differenced method can only derive temporal clock changes, which then have to be aligned to the absolute clocks in a rather complicated way, this paper proposes an efficient method for high-rate clock estimation using the concept of "carrier-range", realized by means of PPP with integer ambiguity resolution. Processing procedures are developed for both post-processing and real-time processing. The experimental validation shows that the computation time could be reduced to about one sixth of that of existing methods for post-processing, and to less than 1 s for processing a single epoch of a network with about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.

  8. Structural zooming research and development of an interactive computer graphical interface for stress analysis of cracks

    NASA Technical Reports Server (NTRS)

    Gerstle, Walter

    1989-01-01

    Engineering problems sometimes involve the numerical solution of boundary value problems over domains containing geometric features with widely varying scales. Often, a detailed solution is required at one or more of these features. Small details in large structures may have profound effects upon global performance. Conversely, large-scale conditions may affect local performance. Many man-hours and CPU-hours are currently spent in modeling such problems. With the structural zooming technique, it is now possible to design an integrated program which allows the analyst to interactively focus upon a small region of interest, to modify the local geometry, and then to obtain highly accurate responses in that region which reflect both the properties of the overall structure and the local detail. A boundary integral equation analysis program, called BOAST, was recently developed for the stress analysis of cracks. This program can accurately analyze two-dimensional linear elastic fracture mechanics problems with far less computational effort than existing finite element codes. An interactive computer graphical interface to BOAST was written. The graphical interface would have several requirements: it would be menu-driven, with mouse input; all aspects of input would be entered graphically; the results of a BOAST analysis would be displayed pictorially but also the user would be able to probe interactively to get numerical values of displacement and stress at desired locations within the analysis domain; the entire procedure would be integrated into a single, easy-to-use package; and it would be written using calls to the graphics package called HOOPS. The program is nearing completion. All of the preprocessing features are working satisfactorily and were debugged. The postprocessing features are under development, and rudimentary postprocessing should be available by the end of the summer. The program was developed and run on a VAX workstation, and must be ported to the SUN workstation. This activity is currently underway.

  9. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with the same imaging parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different cores. The image quality assessment method is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and post-processing steps on image quality can be determined. The six JND subjective assessment datasets can be validated against each other. Main conclusions include: image post-processing can improve image quality, even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  10. A new full-field digital mammography system with and without the use of an advanced post-processing algorithm: comparison of image quality and diagnostic performance.

    PubMed

    Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young

    2014-01-01

    To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige®), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.

  11. Role of cardiac imaging and three-dimensional printing in percutaneous appendage closure.

    PubMed

    Iriart, Xavier; Ciobotaru, Vlad; Martin, Claire; Cochet, Hubert; Jalal, Zakaria; Thambo, Jean-Benoit; Quessard, Astrid

    2018-06-06

    Atrial fibrillation is the most frequent cardiac arrhythmia, affecting up to 13% of people aged>80 years, and is responsible for 15-20% of all ischaemic strokes. Left atrial appendage occlusion devices have been developed as an alternative approach to reduce the risk of stroke in patients for whom oral anticoagulation is contraindicated. The procedure can be technically demanding, and obtaining a complete left atrial appendage occlusion can be challenging. These observations have emphasized the importance of preprocedural planning, to optimize the accuracy and safety of the procedure. In this setting, a multimodality imaging approach, including three-dimensional imaging, is often used for preoperative assessment and procedural guidance. These imaging modalities, including transoesophageal echocardiography and multislice computed tomography, allow acquisition of a three-dimensional dataset that improves understanding of the cardiac anatomy; dedicated postprocessing software integrated into the clinical workflow can be used to generate a stereolithography file, which can be printed in a rubber-like material, seeking to replicate the myocardial tissue characteristics and mechanical properties of the left atrial appendage wall. The role of multimodality imaging and 3D printing technology offers a new field for implantation simulation, which may have a major impact on physician training and technique optimization. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  12. Microstructure and Mechanical Properties of Microwave Post-processed Ni Coating

    NASA Astrophysics Data System (ADS)

    Zafar, Sunny; Sharma, Apurbba Kumar

    2017-03-01

    Flame-sprayed coatings are widely used in industry owing to their low cost and simple processing. However, the presence of porosity and poor adhesion to the substrate requires suitable post-processing of the as-sprayed deposits. In the present work, post-processing of the flame-sprayed Ni-based coating has been successfully attempted using microwave hybrid heating. Microwave post-processing of the flame-sprayed coatings was carried out at 2.45 GHz in a 1 kW multimode industrial microwave applicator. The microwave-processed and as-sprayed deposits were characterized for their microstructure, porosity, fracture toughness and surface roughness. The properties of the coatings were correlated with their abrasive wear behavior using a sliding abrasion test on a pin-on-disk tribometer. Microwave post-processing led to healed micropores and microcracks, thus causing homogenization of the microstructure in the coating layer. As a result, the microwave post-processed coating layer exhibits improved mechanical and tribological properties compared to the as-sprayed coating layer.

  13. Tools to Develop or Convert MOVES Inputs

    EPA Pesticide Factsheets

    The following tools are designed to help users develop inputs to MOVES and post-process the output. With the release of MOVES2014, EPA strongly encourages state and local agencies to develop local inputs based on MOVES fleet and activity categories.

  14. Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine

    NASA Astrophysics Data System (ADS)

    Clark, Tristan

    A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, taking the resultant hydrogen back to shore to be used as an energy source. The basin efficiency (power / (thrust × velocity)) of the Hydrokinetic Turbine (HTK) plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. The structural analysis of the blade is important, as the blade will undergo high pressure loads from the water. A procedure for analysis of a preliminary Hydrokinetic Turbine blade design is developed. The blade was designed by a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure provides a detailed explanation of the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the mesh resolution, flow region size, and turbulence models. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
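
    The basin efficiency quoted above is simply the ratio of extracted power to the product of thrust and tow velocity, as in the short sketch below (the operating-point numbers are hypothetical).

      def basin_efficiency(power_w, thrust_n, velocity_ms):
          # eta = P / (T * U): power extracted per unit of towing power expended.
          return power_w / (thrust_n * velocity_ms)

      # Hypothetical operating point for a towed hydrokinetic turbine.
      print(round(basin_efficiency(power_w=45e3, thrust_n=60e3, velocity_ms=2.5), 3))   # ~0.30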

  15. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    PubMed

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a creative commons license along with additional documentation and a tutorial from (http://bioinf.nuigalway.ie).

  16. User manual for SPLASH (Single Panel Lamp and Shroud Helper).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Marvin Elwood

    2006-02-01

    The radiant heat test facility develops test sets providing well-characterized thermal environments, often representing fires. Many of the components and procedures have become standardized to such an extent that the development of a specialized design tool to determine optimal configurations for radiant heat experiments was appropriate. SPLASH (Single Panel Lamp and Shroud Helper) is that tool. SPLASH is implemented as a user-friendly, Windows-based program that allows a designer to describe a test setup in terms of parameters such as number of lamps, power, position, and separation distance. This document is a user manual for that software. Any incidental descriptions of theory are only for the purpose of defining the model inputs. The theory for the underlying model is described in SAND2005-2947 (Ref. [1]). SPLASH provides a graphical user interface to define lamp panel and shroud designs parametrically, solves the resulting radiation enclosure problem for up to 2500 surfaces, and provides post-processing to facilitate understanding and documentation of analyzed designs.

  17. Cerebral microbleeds: a guide to detection and interpretation.

    PubMed

    Greenberg, Steven M; Vernooij, Meike W; Cordonnier, Charlotte; Viswanathan, Anand; Al-Shahi Salman, Rustam; Warach, Steven; Launer, Lenore J; Van Buchem, Mark A; Breteler, Monique Mb

    2009-02-01

    Cerebral microbleeds (CMBs) are increasingly recognised neuroimaging findings in individuals with cerebrovascular disease and dementia, and in normal ageing. There has been substantial progress in the understanding of CMBs in recent years, particularly in the development of newer MRI methods for the detection of CMBs and the application of these techniques to population-based samples of elderly people. In this Review, we focus on these recent developments and their effects on two main questions: how CMBs are detected, and how CMBs should be interpreted. The number of CMBs detected depends on MRI characteristics, such as pulse sequence, sequence parameters, spatial resolution, magnetic field strength, and image post-processing, emphasising the importance of taking into account MRI technique in the interpretation of study results. Recent investigations with sensitive MRI techniques have indicated a high prevalence of CMBs in community-dwelling elderly people. We propose a procedural guide for identification of CMBs and suggest possible future approaches for elucidating the role of these common lesions as markers for, and contributors to, small-vessel brain disease.

  18. Limitations on post-processing assisted quantum programming

    NASA Astrophysics Data System (ADS)

    Heinosaari, Teiko; Miyadera, Takayuki; Tukiainen, Mikko

    2017-03-01

    A quantum multimeter is a programmable device that can implement measurements of different observables depending on the programming quantum state inserted into it. The advantage of this arrangement over a single-purpose device is in its versatility: one can realize various measurements simply by changing the programming state. The classical manipulation of measurement output data is known as post-processing. In this work we study the post-processing assisted quantum programming, which is a protocol where quantum programming and classical post-processing are combined. We provide examples showing that these two processes combined can be more efficient than either of them used separately. Furthermore, we derive an inequality relating the programming resources to their corresponding programmed observables, thereby enabling us to study the limitations on post-processing assisted quantum programming.

  19. Evaluation of a Post-Processing Approach for Multiscale Analysis of Biphasic Mechanics of Chondrocytes

    PubMed Central

    Sibole, Scott C.; Maas, Steve; Halloran, Jason P.; Weiss, Jeffrey A.; Erdemir, Ahmet

    2014-01-01

    Understanding the mechanical behavior of chondrocytes as a result of cartilage tissue mechanics has significant implications both for evaluating mechanobiological function and for elaborating on damage mechanisms. A common procedure for prediction of chondrocyte mechanics (and of cell mechanics in general) relies on a computational post-processing approach in which tissue-level deformations drive cell-level models. Potential loss of information in this numerical coupling approach may cause erroneous cellular-scale results, particularly during multiphysics analysis of cartilage. The goal of this study was to evaluate the capacity of 1st and 2nd order data passing to predict chondrocyte mechanics by analyzing cartilage deformations obtained for loading scenarios of varying complexity. A tissue-scale model with a sub-region incorporating representation of chondron size and distribution served as control. The post-processing approach first required solution of a homogeneous tissue-level model, the results of which were used to drive a separate cell-level model (with the same characteristics as the sub-region of the control model). The 1st order data passing appeared to be adequate for simplified loading of the cartilage and for a subset of cell deformation metrics, e.g., change in aspect ratio. The 2nd order data passing scheme was more accurate, particularly when asymmetric permeability of the tissue boundaries was considered. Yet, the method exhibited limitations for predictions of instantaneous metrics related to the fluid phase, e.g., mass exchange rate. Nonetheless, employing higher-order data exchange schemes may be necessary to understand the biphasic mechanics of cells under lifelike tissue loading states for the whole time history of the simulation. PMID:23809004

  20. Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grierson, B. A.; Yuan, X.; Gorelenkova, M.

    TRANSP simulations are being used in the OMFIT workflow manager to enable a machine independent means of experimental analysis, postdictive validation, and predictive time dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics are computed, such as comparison of experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.

  1. Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT

    DOE PAGES

    Grierson, B. A.; Yuan, X.; Gorelenkova, M.; ...

    2018-02-21

    TRANSP simulations are being used in the OMFIT workflow manager to enable a machine independent means of experimental analysis, postdictive validation, and predictive time dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics are computed, such as comparison of experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.

  2. New bone post-processing tools in forensic imaging: a multi-reader feasibility study to evaluate detection time and diagnostic accuracy in rib fracture assessment.

    PubMed

    Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David

    2017-03-01

    The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools for rib fracture assessment in forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole-body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different clinical and forensic experience who were blinded to the autopsy results evaluated all cases in randomized order of modality and case. Each reader's rib fracture assessment was evaluated against autopsy and against a CT consensus read as the radiologic reference. A detailed evaluation of relevant test parameters revealed better agreement with the CT consensus read than with the autopsy. Modality C was significantly the quickest rib fracture detection modality, despite slightly reduced statistical test parameters compared to modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared to standard autopsy alone. The eagle tool is easy to use, is suited for an initial rib fracture screening prior to autopsy, and can therefore be beneficial for forensic pathologists.

  3. Postprocessing of docked protein-ligand complexes using implicit solvation models.

    PubMed

    Lindström, Anton; Edvinsson, Lotta; Johansson, Andreas; Andersson, C David; Andersson, Ida E; Raubacher, Florian; Linusson, Anna

    2011-02-28

    Molecular docking plays an important role in drug discovery as a tool for the structure-based design of small organic ligands for macromolecules. Possible applications of docking are identification of the bioactive conformation of a protein-ligand complex and the ranking of different ligands with respect to their strength of binding to a particular target. We have investigated the effect of implicit water on the postprocessing of binding poses generated by molecular docking using MM-PB/GB-SA (molecular mechanics Poisson-Boltzmann and generalized Born surface area) methodology. The investigation was divided into three parts: geometry optimization, pose selection, and estimation of the relative binding energies of docked protein-ligand complexes. Appropriate geometry optimization afforded more accurate binding poses for 20% of the complexes investigated. The time required for this step was greatly reduced by minimizing the energy of the binding site using GB solvation models rather than minimizing the entire complex using the PB model. By optimizing the geometries of docking poses using the GB(HCT+SA) model then calculating their free energies of binding using the PB implicit solvent model, binding poses similar to those observed in crystal structures were obtained. Rescoring of these poses according to their calculated binding energies resulted in improved correlations with experimental binding data. These correlations could be further improved by applying the postprocessing to several of the most highly ranked poses rather than focusing exclusively on the top-scored pose. The postprocessing protocol was successfully applied to the analysis of a set of Factor Xa inhibitors and a set of glycopeptide ligands for the class II major histocompatibility complex (MHC) A(q) protein. These results indicate that the protocol for the postprocessing of docked protein-ligand complexes developed in this paper may be generally useful for structure-based design in drug discovery.

  4. Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide

    NASA Astrophysics Data System (ADS)

    Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.

    Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

  5. Discrete post-processing of total cloud cover ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian

    2017-04-01

    This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference: Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review, 144, 2565-2577.
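    A minimal sketch of the multinomial-logistic flavour of discrete post-processing described above, using scikit-learn with the ensemble mean and spread as predictors of the observed cloud-cover category; the data are synthetic stand-ins, not ECMWF forecasts or station observations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: a 51-member total cloud cover ensemble (okta 0..8)
# and matching observations.
n_cases, n_members = 2000, 51
ens = rng.integers(0, 9, size=(n_cases, n_members))
obs = np.clip(np.round(ens.mean(axis=1) + rng.normal(0, 1.5, n_cases)), 0, 8).astype(int)

# Predictors: ensemble mean and ensemble standard deviation of each forecast.
X = np.column_stack([ens.mean(axis=1), ens.std(axis=1)])

# With the default lbfgs solver this fits a multinomial (softmax) model
# over the okta categories.
model = LogisticRegression(max_iter=1000)
model.fit(X, obs)

# Post-processed probabilities for each category, first forecast case.
print(model.predict_proba(X[:1]).round(3))
```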

  6. 4D flow MRI post-processing strategies for neuropathologies

    NASA Astrophysics Data System (ADS)

    Schrauben, Eric Mathew

    4D flow MRI allows for the measurement of a dynamic 3D velocity vector field. Blood flow velocities in large vascular territories can be qualitatively visualized with the added benefit of quantitative probing. Within cranial pathologies theorized to have vascular-based contributions or effects, 4D flow MRI provides a unique platform for comprehensive assessment of hemodynamic parameters. Targeted blood flow derived measurements, such as flow rate, pulsatility, retrograde flow, or wall shear stress, may provide insight into the onset or characterization of more complex neuropathologies. Therefore, the thorough assessment of each parameter within the context of a given disease has important medical implications. Not surprisingly, the last decade has seen rapid growth in the use of 4D flow MRI. Data acquisition sequences are available to researchers on all major scanner platforms. However, the use has been limited mostly to small research trials. One major reason that has hindered the more widespread use and application in larger clinical trials is the complexity of the post-processing tasks and the lack of adequate tools for these tasks. Post-processing of 4D flow MRI must be semi-automated, fast, user-independent, robust, and reliably consistent for use in a clinical setting, within large patient studies, or across a multicenter trial. Development of proper post-processing methods coupled with systematic investigation in normal and patient populations pushes 4D flow MRI closer to clinical realization while elucidating potential underlying neuropathological origins. Within this framework, the work in this thesis assesses venous flow reproducibility and internal consistency in a healthy population. A preliminary analysis of venous flow parameters in healthy controls and multiple sclerosis patients is performed in a large study employing 4D flow MRI. These studies are performed in the context of the chronic cerebrospinal venous insufficiency hypothesis. Additionally, a double-gated flow acquisition and reconstruction scheme demonstrates respiratory-induced changes in internal jugular vein flow. Finally, a semi-automated intracranial vessel segmentation and flow parameter measurement software tool for fast and consistent 4D flow post-processing analysis is developed, validated, and demonstrated in vivo.

  7. Reliable probabilities through statistical post-processing of ensemble predictions

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2013-04-01

    We develop post-processing, or calibration, approaches based on linear regression that make ensemble forecasts more reliable. First, we enforce climatological reliability in the sense that the total variability of the prediction is equal to the variability of the observations. Second, we impose ensemble reliability such that the spread of the observation around the ensemble mean coincides with that of the ensemble members. In general the attractors of the model and reality are inhomogeneous. Therefore ensemble spread displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz 1996). The forecasts become more reliable at short lead times, as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods like EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References: [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability, ECMWF, 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression, Nonlin. Processes Geophys., 18, 147.
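    A toy sketch of the two reliability constraints described above: correct the ensemble mean by linear regression, then rescale the member anomalies so the average spread matches the error of the corrected mean. This is an illustration of the general idea only, not the authors' variable-error weighting scheme, and all data are synthetic:

```python
import numpy as np

def calibrate_ensemble(ens_new, ens_train, obs_train):
    """Regress observations on the training ensemble mean to correct the mean,
    then rescale member anomalies so the mean spread matches the RMSE of the
    corrected mean (a crude form of ensemble reliability). Illustrative only."""
    mean_train = ens_train.mean(axis=1)
    a, b = np.polyfit(mean_train, obs_train, deg=1)      # slope, intercept

    corrected_mean = a * ens_new.mean(axis=1) + b
    rmse = np.sqrt(np.mean((a * mean_train + b - obs_train) ** 2))

    anomalies = ens_new - ens_new.mean(axis=1, keepdims=True)
    mean_spread = anomalies.std(axis=1).mean()
    return corrected_mean[:, None] + anomalies * (rmse / mean_spread)

# Synthetic demo data (placeholders).
rng = np.random.default_rng(1)
signal = rng.normal(0, 1, (500, 1))
ens_train = signal + rng.normal(0, 0.3, (500, 20))
obs_train = 1.3 * signal[:, 0] + rng.normal(0, 0.8, 500)
ens_new = rng.normal(0, 1, (10, 1)) + rng.normal(0, 0.3, (10, 20))
print(calibrate_ensemble(ens_new, ens_train, obs_train).shape)
```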

  8. Basic temperature correction of QWIP cameras in thermoelastic/plastic tests of composite materials.

    PubMed

    Boccardi, Simone; Carlomagno, Giovanni Maria; Meola, Carosena

    2016-12-01

    The present work is concerned with the use of a quantum well infrared photodetector (QWIP) infrared camera to measure very small temperature variations, which are related to thermoelastic/plastic effects developing on composites under relatively low loads, either periodic or due to impact. As is evident from previous work, some temperature variations are difficult to measure, being at the edge of the IR camera resolution and/or affected by the instrument noise. Conversely, they may be valuable for obtaining information either about the material characteristics and its behavior under periodic load (thermoelastic), or about the overall extent of delaminations due to impact (thermo-plastic). An image post-processing procedure is herein described that, with the help of a reference signal, allows for suppression of the instrument noise and better discrimination of the thermal signatures induced by the two different loads.

  9. GPS in dynamic monitoring of long-period structures

    USGS Publications Warehouse

    Celebi, M.

    2000-01-01

    Global Positioning System (GPS) technology with high sampling rates (≥ 10 samples per second) allows scientifically justified and economically feasible dynamic measurements of relative displacements of long-period structures, which are otherwise difficult to measure directly by other means, such as the most commonly used accelerometers, which require post-processing including double integration. We describe an experiment whereby the displacement responses of a simulated tall building are measured clearly and accurately in real time. Such measurements can be used to assess average drift ratios and changes in dynamic characteristics, and therefore can be used by engineers and building owners or managers to assess the building performance during extreme motions caused by earthquakes and strong winds. By establishing threshold displacements or drift ratios and identifying changing dynamic characteristics, procedures can be developed to use such information to secure public safety and/or take steps to improve the performance of the building. Published by Elsevier Science Ltd.

  10. Evaluation of computed tomography post-processing images in postoperative assessment of Lisfranc injuries compared with plain radiographs.

    PubMed

    Li, Haobo; Chen, Yanxi; Qiang, Minfei; Zhang, Kun; Jiang, Yuchen; Zhang, Yijie; Jia, Xiaoyang

    2017-06-14

    The objective of this study is to evaluate the value of computed tomography (CT) post-processing images in postoperative assessment of Lisfranc injuries compared with plain radiographs. A total of 79 cases with closed Lisfranc injuries that were treated with conventional open reduction and internal fixation from January 2010 to June 2016 were analyzed. Postoperative assessment was performed by two independent orthopedic surgeons with both plain radiographs and CT post-processing images. Inter- and intra-observer agreement were analyzed by kappa statistics, while the differences between the two postoperative imaging assessments were assessed using the χ² test (McNemar's test). Significance was assumed when p < 0.05. Inter- and intra-observer agreement of CT post-processing images was much higher than that of plain radiographs. Non-anatomic reduction was more easily identified in patients with injuries of Myerson classifications A, B1, B2, and C1 using CT post-processing images with overall groups (p < 0.05), and poor internal fixation was also more easily detected in patients with injuries of Myerson classifications A, B1, B2, and C2 using CT post-processing images with overall groups (p < 0.05). CT post-processing images can be more reliable than plain radiographs in the postoperative assessment of reduction and implant placement for Lisfranc injuries.
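    For readers unfamiliar with the agreement statistics named above, the following is a minimal sketch of Cohen's kappa and McNemar's test using scikit-learn and statsmodels; the reader decisions are invented placeholders, not study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-case decisions (1 = non-anatomic reduction detected, 0 = not).
calls_radiograph = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])
calls_ct         = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
reference        = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])

# Agreement between the two modalities (kappa statistic).
print("kappa:", cohen_kappa_score(calls_radiograph, calls_ct))

# McNemar's test on the paired 2x2 table of correct/incorrect calls.
ok_xray = calls_radiograph == reference
ok_ct = calls_ct == reference
table = np.array([
    [np.sum(ok_xray & ok_ct),  np.sum(ok_xray & ~ok_ct)],
    [np.sum(~ok_xray & ok_ct), np.sum(~ok_xray & ~ok_ct)],
])
print(mcnemar(table, exact=True))
```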

  11. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
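    A minimal scalar Kalman-filter bias corrector of the kind referred to above for deterministic forecasts; this is a generic illustration, not the ensemble-aware filter introduced in the abstract, and the forecast data are synthetic:

```python
import numpy as np

class KalmanBiasCorrector:
    """Sequentially estimates a slowly drifting forecast bias from observed
    errors (random-walk bias model), then subtracts it from new forecasts."""

    def __init__(self, process_var=0.01, obs_var=1.0):
        self.b = 0.0          # current bias estimate
        self.p = 1.0          # variance of the bias estimate
        self.q = process_var  # how fast the bias is allowed to drift
        self.r = obs_var      # variance of a single error sample

    def update(self, forecast, observation):
        error = forecast - observation
        self.p += self.q                      # predict step
        k = self.p / (self.p + self.r)        # Kalman gain
        self.b += k * (error - self.b)        # correct the bias estimate
        self.p *= (1.0 - k)
        return self.b

    def correct(self, forecast):
        return forecast - self.b

# Demo with synthetic forecasts carrying a slowly drifting warm bias.
rng = np.random.default_rng(2)
kf = KalmanBiasCorrector()
for t in range(200):
    truth = 15 + 5 * np.sin(t / 20)
    forecast = truth + 2.0 + 0.01 * t + rng.normal(0, 1)
    kf.update(forecast, truth)
print("estimated bias after 200 steps:", round(kf.b, 2))
```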

  12. Lessons Learned from Optical Payload for Lasercomm Science (OPALS) Mission Operations

    NASA Technical Reports Server (NTRS)

    Sindiy, Oleg V.; Abrahamson, Matthew J.; Biswas, Abhijit; Wright, Malcolm W.; Padams, Jordan H.; Konyha, Alexander L.

    2015-01-01

    This paper provides an overview of Optical Payload for Lasercomm Science (OPALS) activities and lessons learned during mission operations. Activities described cover the periods of commissioning, prime, and extended mission operations, during which primary and secondary mission objectives were achieved for demonstrating space-to-ground optical communications. Lessons learned cover Mission Operations System topics in areas of: architecture verification and validation, staffing, mission support area, workstations, workstation tools, interfaces with support services, supporting ground stations, team training, procedures, flight software upgrades, post-processing tools, and public outreach.

  13. Current Direction and Velocity Measurements Using GPS Receivers Mounted on Floats at Tom Bevill Lock and Dam

    DTIC Science & Technology

    2002-12-01

    radio and batteries. The procedures outlined in this CHETN will concentrate on the Magellan GPS ProMARK X-CP receiver as it was used to collect ... The Magellan GPS ProMARK X-CP is a small, robust, light receiver that can log 9 hr of both pseudorange and carrier phase satellite data for post ... post-processing software, pseudorange GPS data recorded by the ProMARK X-CP can be post-processed differentially to achieve 1-3 m (3.3-9.8 ft) horizontal

  14. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2017-04-01

    Ensemble forecasting has a long history in meteorological modelling, as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles, as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method of the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we are making forecasts for the whole of Europe, based on observations from around 700 catchments. As the target is flood forecasting, we are also more interested in improving the forecast skill for high flows rather than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to estimate the total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we are adding a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes distance, stream-connectivity and size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but avoids large differences between parameters of nearby locations, whether stream connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. References: Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
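    As an illustration of the spatial-penalty idea described above (my own sketch, not the EFAS implementation): each station gets a simple affine correction of the ensemble mean, and an inverse-distance-weighted penalty discourages neighbouring stations from having very different parameters. All data and distances below are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_cost(params, ens_mean, obs, dist, lam=1.0):
    """Sum of squared calibration errors per station plus a spatial penalty
    on parameter differences between stations. params holds (a_i, b_i) for
    each of the n stations."""
    n = obs.shape[1]
    a, b = params[:n], params[n:]
    pred = a * ens_mean + b                  # shape (time, station)
    fit_err = np.sum((pred - obs) ** 2)

    # Inverse-distance weights: nearby stations are pushed towards
    # similar correction parameters.
    w = 1.0 / (dist + np.eye(n))
    np.fill_diagonal(w, 0.0)
    penalty = np.sum(w * ((a[:, None] - a[None, :]) ** 2 +
                          (b[:, None] - b[None, :]) ** 2))
    return fit_err + lam * penalty

# Toy setup: 4 stations, 300 time steps (placeholders).
rng = np.random.default_rng(3)
ens_mean = rng.normal(10, 3, (300, 4))
obs = 1.1 * ens_mean + rng.normal(0, 1, ens_mean.shape) + [0.5, 0.6, -0.2, 0.0]
dist = np.array([[0, 1, 5, 6], [1, 0, 4, 5], [5, 4, 0, 1], [6, 5, 1, 0]], float)

x0 = np.concatenate([np.ones(4), np.zeros(4)])
res = minimize(penalized_cost, x0, args=(ens_mean, obs, dist))
print(res.x.round(2))
```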

  15. A Format for Phylogenetic Placements

    PubMed Central

    Matsen, Frederick A.; Hoffman, Noah G.; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement. PMID:22383988

  16. A format for phylogenetic placements.

    PubMed

    Matsen, Frederick A; Hoffman, Noah G; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.

  17. Realistic Simulations of Coronagraphic Observations with WFIRST

    NASA Astrophysics Data System (ADS)

    Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)

    2018-01-01

    We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.

  18. Novel Overhang Support Designs for Powder-Based Electron Beam Additive Manufacturing (EBAM)

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy A.

    2014-01-01

    NASA Marshall Space Flight Center, in collaboration with the University of Alabama, has developed a contact-free support structure used to fabricate overhang-type geometries via EBAM. The support structure is used for 3-D metal-printed components for the aerospace, automotive, biomedical and other industries. Current techniques use support structures to address deformation challenges inherent in 3-D metal printing. However, these structures (overhangs) are bonded to the component and need to be removed in post-processing using a mechanical tool. This new technology improves the overhang support structure design for components by eliminating associated geometric defects and post-processing requirements.

  19. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  20. Computational System For Rapid CFD Analysis In Engineering

    NASA Technical Reports Server (NTRS)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  1. Real-Time and Post-Processed Georeferencing for Hyperspectral Drone Remote Sensing

    NASA Astrophysics Data System (ADS)

    Oliveira, R. A.; Khoramshahi, E.; Suomalainen, J.; Hakala, T.; Viljanen, N.; Honkavaara, E.

    2018-05-01

    The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and in forests. Recent developments in miniaturized and low-cost inertial measurement systems and GNSS sensors, and real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. The combination of these sensors with light-weight and low-cost multi- or hyperspectral frame sensors in drones provides the opportunity to create near real-time or real-time remote sensing data of a target object. We have developed a system with direct georeferencing onboard a drone to be used in combination with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate the real-time georeferencing in comparison with post-processing solutions. Experimental data sets were captured in agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible in both test sites.

  2. Automated detection of cloud and cloud-shadow in single-date Landsat imagery using neural networks and spatial post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael J.; Hayes, Daniel J

    2014-01-01

    Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm to identify and classify clouds and cloud shadow, SPARCS: Spatial Procedures for Automated Removal of Cloud and Shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud-shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.
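    For reference, the omission and commission error rates quoted above can be computed from per-pixel masks as follows; this is a generic sketch with invented masks, not SPARCS code:

```python
import numpy as np

def omission_commission(pred_mask, true_mask):
    """Omission error: fraction of true-class pixels missed by the classifier.
    Commission error: fraction of predicted-class pixels that are wrong."""
    tp = np.sum(pred_mask & true_mask)
    fn = np.sum(~pred_mask & true_mask)
    fp = np.sum(pred_mask & ~true_mask)
    return fn / (tp + fn), fp / (tp + fp)

# Toy example with random masks (placeholders, not Landsat data).
rng = np.random.default_rng(4)
truth = rng.random((512, 512)) < 0.2             # "true" cloud mask
pred = truth ^ (rng.random((512, 512)) < 0.02)   # prediction with a few flips
print(omission_commission(pred, truth))
```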

  3. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005), and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO-56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at national scale.
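    The baseline ETo mentioned above is computed with the FAO-56 Penman-Monteith equation; a compact daily-time-step implementation of that standard formula is sketched below, with illustrative inputs rather than station data from the study:

```python
import math

def fao56_eto(t_mean_c, rn, g, u2, ea_kpa, pressure_kpa=101.3):
    """Daily reference evapotranspiration (mm/day), FAO-56 Penman-Monteith.
    t_mean_c : mean air temperature [deg C]
    rn, g    : net radiation and soil heat flux [MJ m-2 day-1]
    u2       : wind speed at 2 m [m s-1]
    ea_kpa   : actual vapour pressure [kPa]"""
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))  # saturation vapour pressure
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                  # slope of the vapour pressure curve
    gamma = 0.000665 * pressure_kpa                                # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean_c + 273.0)) * u2 * (es - ea_kpa)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Illustrative inputs only:
print(round(fao56_eto(t_mean_c=25.0, rn=14.5, g=0.1, u2=2.0, ea_kpa=2.1), 2), "mm/day")
```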

  4. Comparison of Absolute Apparent Diffusion Coefficient (ADC) Values in ADC Maps Generated Across Different Postprocessing Software: Reproducibility in Endometrial Carcinoma.

    PubMed

    Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan

    2017-12-01

    Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were acquired with a 1.5-T MRI scanner, and trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated with postprocessing software. These ADC maps were compared on the basis of ROIs using the paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for the ADC map generation. To use ADC values as a quantitative cutoff for histologic characterization of tissues, standardization of the postprocessing algorithm is essential across processing software packages, especially in view of the implementation of vendor-neutral archiving.

  5. Transmodal comparison of auditory, motor, and visual post-processing with and without intentional short-term memory maintenance.

    PubMed

    Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias

    2010-12-01

    To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. N700 indicated that cortical post-processing exceeded short movements as well as short auditory or visual stimuli for over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory-maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralised modality-dependent post-processing N700 component which occurs also without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Nowcasting Cloud Fields for U.S. Air Force Special Operations

    DTIC Science & Technology

    2017-03-01

    application of Bayes’ Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods ... reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past ... Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning

  7. Characterization of Adipose Tissue Product Quality Using Measurements of Oxygen Consumption Rate.

    PubMed

    Suszynski, Thomas M; Sieber, David A; Mueller, Kathryn; Van Beek, Allen L; Cunningham, Bruce L; Kenkel, Jeffrey M

    2018-03-14

    Fat grafting is a common procedure in plastic surgery but is associated with unpredictable graft retention. Adipose tissue (AT) "product" quality is affected by the methods used for harvest, processing, and transfer, which vary widely amongst surgeons. Currently, there is no method available to accurately assess the quality of AT. In this study, we present a novel method for the assessment of AT product quality through direct measurements of oxygen consumption rate (OCR). OCR has exhibited potential in predicting outcomes following pancreatic islet transplant. Our study aim was to repurpose existing technology for use with AT preparations and to confirm that these measurements are feasible. OCR was successfully measured for en bloc and post-processed AT using a stirred microchamber system. OCR was then normalized to DNA content (OCR/DNA), which represents the AT product quality. Mean (±SE) OCR/DNA values for fresh en bloc and post-processed AT were 149.8 (±9.1) and 61.1 (±6.1) nmol/min/mg DNA, respectively. These preliminary data suggest that: (1) OCR and OCR/DNA measurements of AT harvested using a conventional protocol are feasible; and (2) standard AT processing results in a decrease in overall AT product quality. OCR measurements of AT using existing technology can be performed and enable accurate, real-time, quantitative assessment of the quality of the AT product prior to transfer. The availability and further validation of this type of assay could enable optimization of fat grafting protocols by providing a tool for the more detailed study of procedural variables that affect AT product quality.

  8. Alternative Post-Processing on a CMOS Chip to Fabricate a Planar Microelectrode Array

    PubMed Central

    López-Huerta, Francisco; Herrera-May, Agustín L.; Estrada-López, Johan J.; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S.

    2011-01-01

    We present an alternative post-processing procedure on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. The chip is fabricated through a standard 0.6 μm CMOS process, and its pMEA comprises 12 electrodes arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as the interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications. PMID:22346681

  9. Alternative post-processing on a CMOS chip to fabricate a planar microelectrode array.

    PubMed

    López-Huerta, Francisco; Herrera-May, Agustín L; Estrada-López, Johan J; Zuñiga-Islas, Carlos; Cervantes-Sanchez, Blanca; Soto, Enrique; Soto-Cruz, Blanca S

    2011-01-01

    We present an alternative post-processing procedure on a CMOS chip to release a planar microelectrode array (pMEA) integrated with its signal readout circuit, which can be used for monitoring the neuronal activity of vestibular ganglion neurons in newborn Wistar strain rats. The chip is fabricated through a standard 0.6 μm CMOS process, and its pMEA comprises 12 electrodes arranged in a 4 × 3 matrix. The alternative CMOS post-process includes the development of masks to protect the readout circuit and the power supply pads. A wet etching process eliminates the aluminum located on the surface of the p+-type silicon. This silicon is used as a transducer for recording the neuronal activity and as the interface between the readout circuit and the neurons. The readout circuit is composed of an amplifier and a tunable bandpass filter, which is placed on a 0.015 mm2 silicon area. The tunable bandpass filter has a bandwidth of 98 kHz and a common mode rejection ratio (CMRR) of 87 dB. These characteristics of the readout circuit are appropriate for neuronal recording applications.

  10. Reliability of fully automated versus visually controlled pre- and post-processing of resting-state EEG.

    PubMed

    Hatz, F; Hardmeier, M; Bousleiman, H; Rüegg, S; Schindler, C; Fuhr, P

    2015-02-01

    To compare the reliability of a newly developed Matlab® toolbox for the fully automated pre- and post-processing of resting-state EEG (automated analysis, AA) with the reliability of analysis involving visually controlled pre- and post-processing (VA). 34 healthy volunteers (age: median 38.2 (20-49), 82% female) had three consecutive 256-channel resting-state EEGs at one-year intervals. Results of frequency analysis of AA and VA were compared with Pearson correlation coefficients, and reliability over time was assessed with intraclass correlation coefficients (ICC). The mean correlation coefficient between AA and VA was 0.94±0.07; the mean ICC was 0.83±0.05 for AA and 0.84±0.07 for VA. AA and VA yield very similar results for spectral EEG analysis and are equally reliable. AA is less time-consuming, completely standardized, and independent of raters and their training. Automated processing of EEG facilitates workflow in quantitative EEG analysis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Scripting MODFLOW model development using Python and FloPy

    USGS Publications Warehouse

    Bakker, Mark; Post, Vincent E. A.; Langevin, Christian D.; Hughes, Joseph D.; White, Jeremy; Starn, Jeffrey; Fienen, Michael N.

    2016-01-01

    Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy.
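    A minimal FloPy script of the kind described above, shown only as a sketch: class names follow FloPy's classic flopy.modflow API, exact argument names may differ between FloPy versions, and a MODFLOW-2005 executable named mf2005 must be available on the path.

```python
import numpy as np
import flopy

# Build a tiny one-layer steady-state model (illustrative dimensions).
m = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                               delr=100.0, delc=100.0, top=10.0, botm=0.0)

# Constant-head boundaries on the left and right columns drive the flow.
ibound = np.ones((1, 10, 10), dtype=int)
ibound[:, :, 0] = -1
ibound[:, :, -1] = -1
strt = np.full((1, 10, 10), 10.0)
strt[:, :, -1] = 5.0
bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)

lpf = flopy.modflow.ModflowLpf(m, hk=10.0)   # uniform hydraulic conductivity
pcg = flopy.modflow.ModflowPcg(m)            # solver
oc = flopy.modflow.ModflowOc(m)              # output control

m.write_input()
success, _ = m.run_model(silent=True)

# Post-processing: read the simulated heads from the binary output file.
if success:
    heads = flopy.utils.HeadFile("demo.hds").get_data()
    print("head range:", heads.min(), "to", heads.max())
```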

  12. Active non-volatile memory post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish

    A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
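    The record above describes data chunks whose metadata name the post-processing to run near the memory; a schematic Python sketch of that dispatch idea follows (all names are invented for illustration, and no NVRAM-specific behaviour is implied):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DataChunk:
    payload: List[float]
    post_process: str          # metadata naming the post-processing to apply

# Registry of routines the "sub-processor" can run in place of the host core.
ROUTINES: Dict[str, Callable[[List[float]], float]] = {
    "sum": sum,
    "max": max,
    "mean": lambda xs: sum(xs) / len(xs),
}

def sub_processor(chunks: List[DataChunk]) -> List[float]:
    """Dispatch each chunk to the routine named in its metadata."""
    return [ROUTINES[c.post_process](c.payload) for c in chunks]

chunks = [DataChunk([1.0, 2.0, 3.0], "sum"), DataChunk([4.0, 5.0], "mean")]
print(sub_processor(chunks))
```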

  13. An anatomical review of spinal cord blood supply.

    PubMed

    Melissano, G; Bertoglio, L; Rinaldi, E; Leopardi, M; Chiesa, R

    2015-10-01

    Knowledge of the spinal cord (SC) vascular supply is important in patients undergoing procedures that involve the thoracic and thoracoabdominal aorta. However, the SC vasculature has a complex anatomy, and teaching is often based only on anatomical sketches with highly variable accuracy; historically, this has required a "leap of faith" on the part of aortic surgeons. Fortunately, this "leap of faith" is no longer necessary given recent breakthroughs in imaging technologies and postprocessing software. Imaging methods have expanded the non-invasive diagnostic ability to determine a patient's SC vascular pattern, particularly in detecting the presence and location of the artery of Adamkiewicz. CT is the imaging modality of choice for most patients with thoracic and thoracoabdominal aortic disease, proving especially useful in the determination of feasibility and planning of endovascular treatment. Thus the data set required for analysis of SC vascular anatomy is usually already available. We have concentrated our efforts on CT angiography, which offers particularly good imaging capabilities with state-of-the-art multidetector scanners. Multidetector row helical CT provides examinations of an extensive range in the craniocaudal direction with thin collimation in a short time interval, giving excellent temporal and spatial resolution. This paper provides examples of the SC vasculature imaging quality that can be obtained with 64 row scanners and appropriate postprocessing. Knowledge of the principal anatomical features of the SC blood supply of individual patients undergoing open or endovascular thoracoabdominal procedures has several potential benefits. For open surgery, analysis of the SC vasculature could tell us the aortic region that feeds the Adamkiewicz artery and thus needs to be reimplanted. For endovascular procedures, we can determine whether the stent-graft will cover the Adamkiewicz artery, thus avoiding unnecessary coverage. CT data can also be used to stratify risk of SC ischemia and guide the selective use of spinal cord injury prevention strategies.

  14. False colors removal on the YCr-Cb color space

    NASA Astrophysics Data System (ADS)

    Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe

    2009-01-01

    Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose. This is mainly because the input of post-processing algorithms is a fully restored RGB color image. Moreover, post-processing can be applied more than once, in order to meet some quality criteria. In this paper we propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms, in the YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.
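    The abstract does not spell out the algorithm, so the following only illustrates the general idea of working in a luma/chroma space: convert RGB to YCbCr (BT.601 coefficients), median-filter the chroma planes to suppress isolated false-color spikes, and convert back. It is a generic sketch, not the authors' method.

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_false_colors(rgb, size=3):
    """Generic false-color suppression sketch in a luma/chroma space."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # BT.601 luma
    cb = 0.564 * (b - y)                       # chroma planes
    cr = 0.713 * (r - y)

    cb = median_filter(cb, size=size)          # smooth chroma only; luma keeps detail
    cr = median_filter(cr, size=size)

    r2 = y + cr / 0.713                        # inverse transform
    b2 = y + cb / 0.564
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    out = np.stack([r2, g2, b2], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)

# Demo on a random image standing in for demosaiced camera output.
img = (np.random.default_rng(5).random((64, 64, 3)) * 255).astype(np.uint8)
print(suppress_false_colors(img).shape)
```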

  15. Wind Tunnel Experiments to Study Chaparral Crown Fires.

    PubMed

    Cobian-Iñiguez, Jeanette; Aminfar, AmirHessam; Chong, Joey; Burke, Gloria; Zuniga, Albertina; Weise, David R; Princevac, Marko

    2017-11-14

    The present protocol presents a laboratory technique designed to study chaparral crown fire ignition and spread. Experiments were conducted in a low velocity fire wind tunnel where two distinct layers of fuel were constructed to represent surface and crown fuels in chaparral. Chamise, a common chaparral shrub, comprised the live crown layer. The dead fuel surface layer was constructed with excelsior (shredded wood). We developed a methodology to measure mass loss, temperature, and flame height for both fuel layers. Thermocouples placed in each layer estimated temperature. A video camera captured the visible flame. Post-processing of digital imagery yielded flame characteristics including height and flame tilt. A custom crown mass loss instrument developed in-house measured the evolution of the mass of the crown layer during the burn. Mass loss and temperature trends obtained using the technique matched theory and other empirical studies. In this study, we present detailed experimental procedures and information about the instrumentation used. The representative results for the fuel mass loss rate and temperature field within the fuel bed are also included and discussed.

  16. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Python Open source Waveform ExtractoR (POWER): an open source, Python package to monitor and post-process numerical relativity simulations

    NASA Astrophysics Data System (ADS)

    Johnson, Daniel; Huerta, E. A.; Haas, Roland

    2018-01-01

    Numerical simulations of Einstein’s field equations provide unique insights into the physics of compact objects moving at relativistic speeds and driven by strong gravitational interactions. Numerical relativity has played a key role in firmly establishing gravitational wave astrophysics as a new field of research, and it is now paving the way to establish whether gravitational wave radiation emitted from compact binary mergers is accompanied by electromagnetic and astro-particle counterparts. As numerical relativity continues to blend in with routine gravitational wave data analyses to validate the discovery of gravitational wave events, it is essential to develop open source tools to streamline these studies. Motivated by our own experience as users and developers of the open source, community software, the Einstein Toolkit, we present an open source, Python package that is ideally suited to monitor and post-process the data products of numerical relativity simulations, and compute the gravitational wave strain at future null infinity in high performance environments. We showcase the application of this new package to post-process a large numerical relativity catalog and extract higher-order waveform modes from numerical relativity simulations of eccentric binary black hole mergers and neutron star mergers. This new software fills a critical void in the arsenal of tools provided by the Einstein Toolkit consortium to the numerical relativity community.
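    The strain computation mentioned above is commonly done by double time integration of Psi4 in the frequency domain (fixed-frequency integration). The sketch below shows that generic technique with NumPy; it is not the POWER package's API, and the cutoff frequency f0 and the test signal are hypothetical.

```python
import numpy as np

def psi4_to_strain(psi4: np.ndarray, dt: float, f0: float) -> np.ndarray:
    """Fixed-frequency integration: h(f) = -psi4(f) / (2*pi*max(|f|, f0))**2."""
    n = psi4.size
    freqs = np.fft.fftfreq(n, d=dt)
    psi4_f = np.fft.fft(psi4)
    denom = (2.0 * np.pi * np.maximum(np.abs(freqs), f0)) ** 2
    return np.fft.ifft(-psi4_f / denom)

if __name__ == "__main__":
    dt, f_gw = 0.1, 0.05                                # hypothetical sampling step and signal frequency
    t = np.arange(0, 400, dt)
    h_true = np.cos(2 * np.pi * f_gw * t)
    psi4 = np.gradient(np.gradient(h_true, dt), dt)     # psi4 plays the role of the second time derivative of h
    h_rec = psi4_to_strain(psi4, dt, f0=0.02).real
    print(np.abs(h_rec[100:-100] - h_true[100:-100]).max())  # small away from the edges
```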

  18. Regionalization of post-processed ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-05-01

    For many years, meteorological models have been run with perturbed initial conditions or parameters to produce ensemble forecasts that are used as a proxy of the uncertainty of the forecasts. However, the ensembles are usually both biased (the mean is systematically too high or too low, compared with the observed weather) and have dispersion errors (the ensemble variance indicates too low or too high confidence in the forecast, compared with the observed weather). The ensembles are therefore commonly post-processed to correct for these shortcomings. Here we look at one of these techniques, referred to as Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). Originally, the post-processing parameters were identified as a fixed set of parameters for a region. The application of our work is the European Flood Awareness System (http://www.efas.eu), where a distributed model is run with meteorological ensembles as input. We are therefore dealing with a considerably larger data set than previous analyses. We also want to regionalize the parameters themselves for other locations than the calibration gauges. The post-processing parameters are therefore estimated for each calibration station, but with a spatial penalty for deviations from neighbouring stations, depending on the expected semivariance between the calibration catchment and these stations. The estimated post-processing parameters can then be regionalized to uncalibrated locations using top-kriging in the rtop package (Skøien et al., 2006, 2014). We will show results from cross-validation of the methodology and, although our interest is mainly in identifying exceedance probabilities for certain return levels, we will also show how the rtop package can be used for creating a set of post-processed ensembles through simulations.
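    A minimal, single-station EMOS sketch in the spirit of Gneiting et al. (2005) is shown below, assuming a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance; the spatial penalty and top-kriging regionalization described in the abstract are not included, and the synthetic data are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_emos(ens: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """ens: (n_times, n_members) forecasts; obs: (n_times,) observations.
    Returns (a, b, c, d) with mu = a + b*ens_mean and var = c + d*ens_var."""
    m, v = ens.mean(axis=1), ens.var(axis=1)

    def nll(params):
        a, b, c, d = params
        var = c + d * v
        if np.any(var <= 0):
            return 1e12                      # keep the predictive variance positive
        return -norm.logpdf(obs, loc=a + b * m, scale=np.sqrt(var)).sum()

    res = minimize(nll, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.normal(size=500)
    ens = truth[:, None] + 0.5 + rng.normal(scale=1.0, size=(500, 20))  # biased, dispersive ensemble
    obs = truth + rng.normal(scale=0.2, size=500)
    print(fit_emos(ens, obs))  # the intercept a should absorb the +0.5 bias (roughly -0.5)
```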

  19. Detection and characterization of lesions on low-radiation-dose abdominal CT images postprocessed with noise reduction filters.

    PubMed

    Kalra, Mannudeep K; Maher, Michael M; Blake, Michael A; Lucey, Brian C; Karau, Kelly; Toth, Thomas L; Avinash, Gopal; Halpern, Elkan F; Saini, Sanjay

    2004-09-01

    To assess the effect of noise reduction filters on detection and characterization of lesions on low-radiation-dose abdominal computed tomographic (CT) images. Low-dose CT images of abdominal lesions in 19 consecutive patients (11 women, eight men; age range, 32-78 years) were obtained at reduced tube currents (120-144 mAs). These baseline low-dose CT images were postprocessed with six noise reduction filters; the resulting postprocessed images were then randomly assorted with baseline images. Three radiologists performed independent evaluation of randomized images for presence, number, margins, attenuation, conspicuity, calcification, and enhancement of lesions, as well as image noise. Side-by-side comparison of baseline images with postprocessed images was performed by using a five-point scale for assessing lesion conspicuity and margins, image noise, beam hardening, and diagnostic acceptability. Quantitative noise and contrast-to-noise ratio were obtained for all liver lesions. Statistical analysis was performed by using the Wilcoxon signed rank test, Student t test, and kappa test of agreement. Significant reduction of noise was observed in images postprocessed with filter F compared with the noise in baseline nonfiltered images (P =.004). Although the number of lesions seen on baseline images and that seen on postprocessed images were identical, lesions were less conspicuous on postprocessed images than on baseline images. A decrease in quantitative image noise and contrast-to-noise ratio for liver lesions was noted with all noise reduction filters. There was good interobserver agreement (kappa = 0.7). Although the use of currently available noise reduction filters improves image noise and ameliorates beam-hardening artifacts at low-dose CT, such filters are limited by a compromise in lesion conspicuity and appearance in comparison with lesion conspicuity and appearance on baseline low-dose CT images. Copyright RSNA, 2004

  20. Postprocessing of Voxel-Based Topologies for Additive Manufacturing Using the Computational Geometry Algorithms Library (CGAL)

    DTIC Science & Technology

    2015-06-01

    Postprocessing of 3-dimensional (3-D) topologies that are defined as a set of voxels is performed using the Computational Geometry Algorithms Library (CGAL), which provides computational geometry algorithms, several of which are suited to the task. The work flow described in this report involves first defining a set of ...

  1. Statistical post-processing of seasonal multi-model forecasts: Why is it so hard to beat the multi-model mean?

    NASA Astrophysics Data System (ADS)

    Siegert, Stefan

    2017-04-01

    Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to out-perform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
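    The hard-to-beat benchmark mentioned above, recalibrating the multi-model ensemble mean by linear regression, reduces to an ordinary least-squares fit. The sketch below uses synthetic data and is purely illustrative; the bias and signal damping built into the toy forecasts are arbitrary choices.

```python
import numpy as np

def recalibrate_mme_mean(mme_mean: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Least-squares fit obs ~ alpha + beta * multi-model mean; returns (alpha, beta)."""
    X = np.column_stack([np.ones_like(mme_mean), mme_mean])
    coeffs, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_years, n_models, n_members = 30, 4, 10          # short training record, as on seasonal time scales
    signal = rng.normal(size=n_years)
    fc = 0.8 + 0.5 * signal[None, None, :] + rng.normal(scale=1.0, size=(n_models, n_members, n_years))
    obs = signal + rng.normal(scale=0.5, size=n_years)
    alpha, beta = recalibrate_mme_mean(fc.mean(axis=(0, 1)), obs)
    print(alpha, beta)   # removes the imposed bias (0.8) and inflates the damped signal (0.5)
```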

  2. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  3. Enhancing the science of the WFIRST coronagraph instrument with post-processing.

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG

    2018-01-01

    We summarize the results of a three-year effort investigating how to apply to the WFIRST coronagraph instrument (CGI) modern image analysis methods now routinely used with ground-based coronagraphs. In this post we quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify coronagraph post-processing testbed gains. We finally introduce stability metrics that are simple to define and measure, and place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.

  4. Review of Developments in Electronic, Clinical Data Collection, and Documentation Systems over the Last Decade - Are We Ready for Big Data in Routine Health Care?

    PubMed

    Kessel, Kerstin A; Combs, Stephanie E

    2016-01-01

    Recently, information availability has become more elaborate and widespread, and treatment decisions are based on a multitude of factors, including imaging, molecular or pathological markers, surgical results, and patient's preference. In this context, the term "Big Data" has also evolved in health care. The "hype" is heavily discussed in the literature. In interdisciplinary medical specialties, such as radiation oncology, not only must heterogeneous and voluminous amounts of data be evaluated, but these data are also spread in different styles across various information systems. Exactly this problem is referred to in many ongoing discussions about Big Data - the "three V's": volume, velocity, and variety. We reviewed 895 articles extracted from the NCBI databases about current developments in electronic clinical data management systems and their further analysis or postprocessing procedures. Few articles show first ideas and ways to immediately make use of collected data, particularly imaging data. Many developments can be noticed in the fields of clinical trial or analysis documentation, mobile devices for documentation, and genomics research. Using Big Data to advance medical research is definitely on the rise. Health care is perhaps the most comprehensive, important, and economically viable field of application.

  5. Postprocessing classification images

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1979-01-01

    Program cleans up remote-sensing maps. It can be used with existing image-processing software. Remapped images closely resemble familiar resource information maps and can replace or supplement classification images not postprocessed by this program.

  6. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed as well as descriptions of other hardware for digital video and film recording.

  7. Calibration of the physiological equivalent temperature index for three different climatic regions

    NASA Astrophysics Data System (ADS)

    Krüger, E.; Rossi, F.; Drach, P.

    2017-07-01

    In human biometeorology, the integration of several microclimatic variables as a combined index facilitates the understanding of how users perceive thermal environments. Indices, such as the physiological equivalent temperature (PET) index, translate the combined effects of meteorological variables on humans in terms of thermal stress or comfort and serve as important aids to climate-responsive urban and regional planning as well as heat stress and thermal comfort analyses. However, there is a need for adjusting proposed comfort/stress ranges of a given index when using it in different climatic contexts. The purpose of this study is to present a preliminary calibration procedure for the PET index for three different climatic regions: Curitiba, Brazil, a subtropical location; Rio de Janeiro, Brazil, a tropical city; and Glasgow, UK, a high-latitude location. Field studies have been carried out by the authors according to a similar protocol and using similar equipment, yielding actual thermal sensation votes and microclimate data, post-processed as PET data. The calibration procedure uses exclusively thermal sensation data as reported by pedestrians during outdoor comfort campaigns and concurrent microclimatic data recorded during the interviews. PET comfort/stress classes differ among the three locations and, in general, are less restrictive than the original ranges proposed by the index developers.

  8. Paraxial diffractive elements for space-variant linear transforms

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only a few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. Firstly, diffractive elements for the realization of different rotation operations are computed and, secondly, a Hough transform element is presented. The correct optical functions of the elements are proved in computer simulation experiments.

  9. Multiwavelength mock observations of the WHIM in a simulated galaxy cluster

    NASA Astrophysics Data System (ADS)

    Planelles, Susana; Mimica, Petar; Quilis, Vicent; Cuesta-Martínez, Carlos

    2018-06-01

    About half of the expected total baryon budget in the local Universe is `missing'. Hydrodynamical simulations suggest that most of the missing baryons are located in a mildly overdense, warm-hot intergalactic medium (WHIM), which is difficult to detect at most wavelengths. In this paper, we explore multiwavelength synthetic observations of a massive galaxy cluster developed in a full Eulerian-adaptive mesh refinement cosmological simulation. A novel numerical procedure is applied on the outputs of the simulation, which are post-processed with a full-radiative transfer code that can compute the change of the intensity at any frequency along the null geodesic of photons. We compare the emission from the whole intergalactic medium and from the WHIM component (defined as the gas with a temperature in the range 10^5-10^7 K) at three observational bands associated with thermal X-rays, thermal and kinematic Sunyaev-Zel'dovich effect, and radio emission. The synthetic maps produced by this procedure could be directly compared with existing observational maps and could be used as a guide for future observations with forthcoming instruments. The analysis of the different emissions associated with a high-resolution galaxy cluster is in broad agreement with previous simulated and observational estimates of both gas components.

  10. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
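    A toy version of the underlying idea (not the published algorithm) compares each gate against the circular mean of its neighbours, with velocities mapped to phases on the extended Nyquist circle; the threshold, the Nyquist velocity and the synthetic field below are hypothetical choices for illustration.

```python
import numpy as np

def circular_mean_velocity(v: np.ndarray, v_ny: float) -> float:
    """Circular mean of velocities folded into the extended Nyquist interval [-v_ny, v_ny)."""
    phases = np.pi * v / v_ny
    mean_phase = np.angle(np.mean(np.exp(1j * phases)))
    return v_ny * mean_phase / np.pi

def correct_outliers(field: np.ndarray, v_ny: float, thresh: float = 5.0) -> np.ndarray:
    """Replace gates that deviate (in the aliasing-aware sense) from their 3x3 neighbourhood."""
    out = field.copy()
    nrow, ncol = field.shape
    for i in range(1, nrow - 1):
        for j in range(1, ncol - 1):
            neigh = np.delete(field[i-1:i+2, j-1:j+2].ravel(), 4)  # drop the centre gate
            ref = circular_mean_velocity(neigh, v_ny)
            diff = (field[i, j] - ref + v_ny) % (2 * v_ny) - v_ny   # circular distance to the reference
            if abs(diff) > thresh:
                out[i, j] = ref
    return out

if __name__ == "__main__":
    v_ny = 16.0                                   # hypothetical extended Nyquist velocity (m/s)
    field = np.full((8, 8), 10.0)
    field[4, 4] = 10.0 - 2 * v_ny / 3             # a typical dual-PRF unfolding error
    print(correct_outliers(field, v_ny)[4, 4])    # restored to ~10.0
```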

  11. Analysis-Software for Hyperspectral Algal Reflectance Probes v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timlin, Jerilyn A.; Reichardt, Thomas A.; Jenson, Travis J.

    This software provides onsite analysis of the hyperspectral reflectance data acquired on an outdoor algal pond by a multichannel, fiber-coupled spectroradiometer. The analysis algorithm is based on numerical inversion of a reflectance model, in which the above-water reflectance is expressed as a function of the single backscattering albedo, which is dependent on the backscatter and absorption coefficients of the algal culture, which are in turn related to the algal biomass and pigment optical activity, respectively. Prior to the development of this software, while raw multichannel data were displayed in real time, analysis required a post-processing procedure to extract the relevant parameters. This software provides the capability to track the temporal variation of such culture parameters in real time, as raw data are being acquired, or can be run in a post processing mode. The software allows the user to select between different algal species, incorporate the appropriate calibration data, and observe the quality of the resulting model inversions.

  12. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  13. Feasibility and availability of ⁶⁸Ga-labelled peptides.

    PubMed

    Decristoforo, Clemens; Pickett, Roger D; Verbruggen, Alfons

    2012-02-01

    (68)Ga has attracted tremendous interest as a radionuclide for PET based on its suitable half-life of 68 min, high positron emission yield and ready availability from (68)Ge/(68)Ga generators, making it independent of cyclotron production. (68)Ga-labelled DOTA-conjugated somatostatin analogues, including DOTA-TOC, DOTA-TATE and DOTA-NOC, have driven the development of technologies to provide such radiopharmaceuticals for clinical applications mainly in the diagnosis of somatostatin receptor-expressing tumours. We summarize the issues determining the feasibility and availability of (68)Ga-labelled peptides, including generator technology, (68)Ga generator eluate postprocessing methods, radiolabelling, automation and peptide developments, and also quality assurance and regulatory aspects. (68)Ge/(68)Ga generators based on SnO(2), TiO(2) or organic matrices are today routinely supplied to nuclear medicine departments, and a variety of automated systems for postprocessing and radiolabelling have been developed. New developments include improved chelators for (68)Ga that could open new ways to utilize this technology. Challenges and limitations in the on-site preparation and use of (68)Ga-labelled peptides outside the marketing authorization track are also discussed.

  14. Scripting MODFLOW Model Development Using Python and FloPy.

    PubMed

    Bakker, M; Post, V; Langevin, C D; Hughes, J D; White, J T; Starn, J J; Fienen, M N

    2016-09-01

    Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy. © 2016, National Ground Water Association.
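    A minimal FloPy sketch along these lines is shown below, assuming the classic flopy.modflow interface and a MODFLOW-2005 executable named mf2005 on the system path; the grid size, boundary heads and hydraulic properties are arbitrary illustrative values.

```python
import flopy
import numpy as np

# Build a small steady-state model: 1 layer, 10 x 10 grid, fixed heads on two sides.
m = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                         delr=100.0, delc=100.0, top=10.0, botm=0.0)

ibound = np.ones((1, 10, 10), dtype=int)
ibound[:, :, 0] = -1            # fixed-head column (west)
ibound[:, :, -1] = -1           # fixed-head column (east)
strt = np.full((1, 10, 10), 10.0)
strt[:, :, -1] = 5.0            # lower head on the east boundary drives flow

flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)
flopy.modflow.ModflowLpf(m, hk=10.0)
flopy.modflow.ModflowPcg(m)
flopy.modflow.ModflowOc(m)

m.write_input()
success, _ = m.run_model(silent=True)

# Post-process: read the simulated heads back into a NumPy array.
if success:
    heads = flopy.utils.HeadFile("demo.hds").get_data()
    print(heads[0, 5, :])       # head profile along one row, declining from 10.0 to 5.0
```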

  15. 4D-SFM Photogrammetry for Monitoring Sediment Dynamics in a Debris-Flow Catchment: Software Testing and Results Comparison

    NASA Astrophysics Data System (ADS)

    Cucchiaro, S.; Maset, E.; Fusiello, A.; Cazorzi, F.

    2018-05-01

    In recent years, the combination of Structure-from-Motion (SfM) algorithms and UAV-based aerial images has revolutionised 3D topographic surveys for natural environment monitoring, offering low-cost, fast and high quality data acquisition and processing. Continuous monitoring of morphological changes through multi-temporal (4D) SfM surveys allows, for example, analysis of torrent dynamics even in complex topographic environments like debris-flow catchments, provided that appropriate tools and procedures are employed in the data processing steps. In this work we test two different software packages (3DF Zephyr Aerial and Agisoft Photoscan) on a dataset composed of both UAV and terrestrial images acquired on a debris-flow reach (Moscardo torrent - North-eastern Italian Alps). Unlike other papers in the literature, we evaluate the results not only on the raw point clouds generated by the Structure-from-Motion and Multi-View Stereo algorithms, but also on the Digital Terrain Models (DTMs) created after post-processing. Outcomes show differences between the DTMs that can be considered irrelevant for the geomorphological phenomena under analysis. This study confirms that SfM photogrammetry can be a valuable tool for monitoring sediment dynamics, but accurate point cloud post-processing is required to reliably localize geomorphological changes.

  16. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford Shah Model and Local Entropy Thresholding

    PubMed Central

    Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. This method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image which will be both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions and so, in order to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods, obtained by the local application of the model, and the corresponding windows of the original image are added. After that, veins in the enhanced image have been sufficiently emphasized. Thus, after image enhancement, an accurate segmentation can be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, and so a postprocessing step is performed in order to extract a robust finger vein pattern. PMID:26120357

  17. Fuzzy Filtering Method for Color Videos Corrupted by Additive Noise

    PubMed Central

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Nino-de-Rivera, Luis

    2014-01-01

    A novel method for the denoising of color videos corrupted by additive noise is presented in this paper. The proposed technique consists of three principal filtering steps: spatial, spatiotemporal, and spatial postprocessing. In contrast to other state-of-the-art algorithms, during the first spatial step, the eight gradient values in different directions for pixels located in the vicinity of a central pixel, as well as the R, G, and B channel correlation between the analogous pixels in different color bands, are taken into account. These gradient values give information about the level of contamination; the designed fuzzy rules are then used to preserve the image features (textures, edges, sharpness, chromatic properties, etc.). In the second step, two neighboring video frames are processed together. Possible local motions between neighboring frames are estimated using a block matching procedure in eight directions to perform interframe filtering. In the final step, the edges and smoothed regions in a current frame are distinguished for final postprocessing filtering. Numerous simulation results confirm that this novel 3D fuzzy method performs better than other state-of-the-art techniques in terms of objective criteria (PSNR, MAE, NCD, and SSIM) as well as subjective perception via the human vision system on different color videos. PMID:24688428

  18. Raman spectral post-processing for oral tissue discrimination – a step for an automatized diagnostic system

    PubMed Central

    Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.

    2017-01-01

    Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115

  19. Raman spectral post-processing for oral tissue discrimination - a step for an automatized diagnostic system.

    PubMed

    Carvalho, Luis Felipe C S; Nogueira, Marcelo Saito; Neto, Lázaro P M; Bhattacharjee, Tanmoy T; Martin, Airton A

    2017-11-01

    Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings.

  20. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

    An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFT's) for both P- and S- waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P- wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation, but rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.

  1. Adaptive target binarization method based on a dual-camera system

    NASA Astrophysics Data System (ADS)

    Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing

    2018-01-01

    An adaptive target binarization method based on a dual-camera system that contains two dynamic vision sensors was proposed. First, a preprocessing procedure of denoising is introduced to remove the noise events generated by the sensors. Then, the complete edge of the target is retrieved and represented by events based on an event mosaicking method. Third, the region of the target is confirmed by an event-to-event method. Finally, a postprocessing procedure of image open and close operations of morphology methods is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots and successfully compared with other well-known binarization methods. The experimental results, which are based on visual and misclassification error criteria, show that the proposed method performs well and has better robustness on the binarization of degraded images.
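    The final post-processing step, morphological opening and closing of the binary target map, can be sketched as follows; the OpenCV kernel size and the synthetic mask are assumptions, and this is not the authors' full dual-camera pipeline.

```python
import cv2
import numpy as np

def clean_binary_mask(mask: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Open to remove isolated speckle, then close to fill small holes in the target."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

if __name__ == "__main__":
    mask = np.zeros((40, 40), dtype=np.uint8)
    mask[10:30, 10:30] = 255        # target region
    mask[20, 20] = 0                # small hole, e.g. from event-to-event mismatching
    mask[2, 2] = 255                # isolated artifact
    cleaned = clean_binary_mask(mask)
    print(cleaned[2, 2], cleaned[20, 20])   # 0 255
```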

  2. Front-surface fabrication of moderate aspect ratio micro-channels in fused silica by single picosecond Gaussian-Bessel laser pulse

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Sanner, Nicolas; Sentis, Marc; Stoian, Razvan; Zhao, Wei; Cheng, Guanghua; Utéza, Olivier

    2018-02-01

    Single-shot Gaussian-Bessel laser beams of 1 ps pulse duration, 0.9 μm core size and 60 μm depth of focus are used for drilling micro-channels on the front side of fused silica under ambient conditions. Channels ablated at different pulse energies are fully characterized by AFM and post-processing polishing procedures. We identify experimental energy conditions (typically 1.5 µJ) suitable to fabricate non-tapered channels with a mean diameter of 1.2 µm and a length of 40 μm while maintaining an utmost quality of the front opening of the channels. In addition, by further applying an accurate post-polishing procedure, channels with high surface quality and moderate aspect ratio down to a few units are accessible, which would find interest in the surface micro-structuring of materials, with the prospect of further scalability to meta-material specifications.

  3. Proposals for best-quality immunohistochemical staining of paraffin-embedded brain tissue slides in forensics.

    PubMed

    Trautz, Florian; Dreßler, Jan; Stassart, Ruth; Müller, Wolf; Ondruschka, Benjamin

    2018-01-03

    Immunohistochemistry (IHC) has become an integral part of forensic histopathology over the last decades. However, the underlying methods for IHC vary greatly depending on the institution, creating a lack of comparability. The aim of this study was to assess the optimal approach for different technical aspects of IHC, in order to improve and standardize this procedure. Therefore, qualitative results from manual and automatic IHC staining of brain samples were compared, as well as potential differences in the suitability of common IHC glass slides. Further, possibilities of image digitalization and connected issues were investigated. In our study, automatic staining showed more consistent staining results compared to manual staining procedures. Digitalization and digital post-processing considerably facilitated direct analysis and analysis of reproducibility. No differences were found between different commercially available microscopic glass slides regarding their suitability for IHC brain research, but a certain rate of tissue loss should be expected during the staining process.

  4. Simultaneous optimization method for absorption spectroscopy postprocessing.

    PubMed

    Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T

    2015-05-10

    A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.

  5. Post-processing of metal matrix composites by friction stir processing

    NASA Astrophysics Data System (ADS)

    Sharma, Vipin; Singla, Yogesh; Gupta, Yashpal; Raghuwanshi, Jitendra

    2018-05-01

    In metal matrix composites, non-uniform distribution of reinforcement particles adversely affects the mechanical properties. It is therefore of great interest to explore post-processing techniques that can eliminate particle distribution heterogeneity. Friction stir processing is a relatively new technique used for post-processing of metal matrix composites to improve homogeneity in particle distribution. In friction stir processing, the synergistic effect of stirring, extrusion and forging results in refinement of grains, reduction of reinforcement particle size, more uniform particle distribution, reduced microstructural heterogeneity and elimination of defects.

  6. Quantum-key-distribution protocol with pseudorandom bases

    NASA Astrophysics Data System (ADS)

    Trushechkin, A. S.; Tregubov, P. A.; Kiktenko, E. O.; Kurochkin, Y. V.; Fedorov, A. K.

    2018-01-01

    Quantum key distribution (QKD) offers a way for establishing information-theoretical secure communications. An important part of QKD technology is a high-quality random number generator for the quantum-state preparation and for post-processing procedures. In this work, we consider a class of prepare-and-measure QKD protocols, utilizing additional pseudorandomness in the preparation of quantum states. We study one of such protocols and analyze its security against the intercept-resend attack. We demonstrate that, for single-photon sources, the considered protocol gives better secret key rates than the BB84 and the asymmetric BB84 protocols. However, the protocol strongly requires single-photon sources.

  7. Improving LiDAR Data Post-Processing Techniques for Archaeological Site Management and Analysis: A Case Study from Canaveral National Seashore Park

    NASA Astrophysics Data System (ADS)

    Griesbach, Christopher

    Methods used to process raw Light Detection and Ranging (LiDAR) data can sometimes obscure the digital signatures indicative of an archaeological site. This thesis explains the negative effects that certain LiDAR data processing procedures can have on the preservation of an archaeological site. This thesis also presents methods for effectively integrating LiDAR with other forms of mapping data in a Geographic Information Systems (GIS) environment in order to improve LiDAR archaeological signatures by examining several pre-Columbian Native American shell middens located in Canaveral National Seashore Park (CANA).

  8. Post-processing of global model output to forecast point rainfall

    NASA Astrophysics Data System (ADS)

    Hewson, Tim; Pillosu, Fatima

    2016-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts) has recently embarked upon a new project to post-process gridbox rainfall forecasts from its ensemble prediction system, to provide probabilistic forecasts of point rainfall. The new post-processing strategy relies on understanding how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. We use a number of simple global model parameters, such as the convective rainfall fraction, to anticipate the sub-grid variability, and then post-process each ensemble forecast into a pdf (probability density function) for a point-rainfall total. The final forecast will comprise the sum of the different pdfs from all ensemble members. The post-processing is essentially a re-calibration exercise, which needs only rainfall totals from standard global reporting stations (and forecasts) to train it. High density observations are not needed. This presentation will describe results from the initial 'proof of concept' study, which has been remarkably successful. Reference will also be made to other useful outcomes of the work, such as gaining insights into systematic model biases in different synoptic settings. The special case of orographic rainfall will also be discussed. Work ongoing this year will also be described. This involves further investigations of which model parameters can provide predictive skill, and will then move on to development of an operational system for predicting point rainfall across the globe. The main practical benefit of this system will be a greatly improved capacity to predict extreme point rainfall, and thereby provide early warnings, for the whole world, of flash flood potential for lead times that extend beyond day 5. This will be incorporated into the suite of products output by GLOFAS (the GLObal Flood Awareness System) which is hosted at ECMWF. As such this work offers a very cost-effective approach to satisfying user needs right around the world. This field has hitherto relied on using very expensive high-resolution ensembles; by their very nature these can only run over small regions, and only for lead times up to about 2 days.
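    The final forecast described above is a mixture of per-member point-rainfall pdfs. The toy sketch below uses gamma pdfs whose spread grows with an assumed convective fraction; the distribution family, the mapping to the coefficient of variation and all numbers are hypothetical choices, not the operational ECMWF scheme.

```python
import numpy as np
from scipy.stats import gamma

def point_rain_exceedance(ens_totals, conv_frac, threshold):
    """P(point rainfall > threshold) from a mixture of one gamma pdf per ensemble member.
    The sub-grid spread is made to grow with the convective fraction (a toy choice)."""
    probs = []
    for total, cf in zip(ens_totals, conv_frac):
        if total <= 0.0:
            probs.append(0.0)                 # dry member: no chance of exceeding a wet threshold
            continue
        cv = 0.3 + 0.7 * cf                   # coefficient of variation, hypothetical mapping
        shape = 1.0 / cv**2
        scale = total / shape                 # the gamma mean equals the gridbox total
        probs.append(gamma.sf(threshold, a=shape, scale=scale))
    return float(np.mean(probs))              # equal-weight mixture over members

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ens = rng.gamma(shape=2.0, scale=5.0, size=51)    # 51-member gridbox totals (mm)
    cf = rng.uniform(0.2, 0.9, size=51)               # convective rain fraction per member
    print(point_rain_exceedance(ens, cf, threshold=30.0))
```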

  9. Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images

    PubMed Central

    Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.

    2013-01-01

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958
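    Steps 1-3 of the described procedure can be approximated with scikit-image, as in the sketch below (hematoxylin unmixing, Otsu masking, distance-transform markers, watershed, small-object rejection). This is not the authors' implementation, the multi-scale merging step is omitted, and the marker spacing and size threshold are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, data, feature, filters, morphology, segmentation

def segment_nuclei(rgb, min_distance=10, min_size=50):
    """Hematoxylin unmixing -> Otsu mask -> distance-transform markers -> watershed."""
    hema = color.rgb2hed(rgb)[..., 0]                       # hematoxylin channel (nuclei)
    mask = hema > filters.threshold_otsu(hema)              # foreground nuclei mask
    dist = ndi.distance_transform_edt(mask)
    peaks = feature.peak_local_max(dist, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # one marker per nucleus candidate
    labels = segmentation.watershed(-dist, markers, mask=mask)
    return morphology.remove_small_objects(labels, min_size=min_size)  # reject small false regions

if __name__ == "__main__":
    # Any RGB histology-like image works; scikit-image ships a small immunohistochemistry sample.
    rgb = data.immunohistochemistry()
    labels = segment_nuclei(rgb)
    print(labels.max(), "nuclei candidates")
```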

  10. Automatic nuclei segmentation in H&E stained breast cancer histopathology images.

    PubMed

    Veta, Mitko; van Diest, Paul J; Kornegoor, Robert; Huisman, André; Viergever, Max A; Pluim, Josien P W

    2013-01-01

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.

  11. Determining optimal clothing ensembles based on weather forecasts, with particular reference to outdoor winter military activities.

    PubMed

    Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B

    2011-07-01

    Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy to establish the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulations recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. The thermal characteristics of the clothing ensembles worn by the soldiers, namely thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. A general under-estimation of the biometeorological forecast model in predicting the optimal clothing insulation value was observed and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.

  12. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
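    The scatter reduction reported (about 2.5-3.3 times for 10 runs, roughly the expected square-root scaling) comes from averaging repeated runs over the same sampling interval. A generic NumPy illustration with synthetic noisy samples follows; it is not the PDSC/DREAM code, and the noise level and profile are invented.

```python
import numpy as np

def ensemble_average(runs: np.ndarray):
    """runs: (n_runs, n_cells) instantaneous samples of a flow property.
    Returns the ensemble mean and the scatter (standard error of the mean) per cell."""
    mean = runs.mean(axis=0)
    scatter = runs.std(axis=0, ddof=1) / np.sqrt(runs.shape[0])
    return mean, scatter

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_profile = np.linspace(300.0, 400.0, 50)              # e.g. a temperature profile
    single_run = true_profile + rng.normal(scale=10.0, size=50)
    runs = true_profile + rng.normal(scale=10.0, size=(10, 50))
    mean, scatter = ensemble_average(runs)
    # The 10-run ensemble error is roughly sqrt(10) ~ 3x smaller than a single run.
    print(np.abs(single_run - true_profile).std(), np.abs(mean - true_profile).std())
```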

  13. Opportunities of CMOS-MEMS integration through LSI foundry and open facility

    NASA Astrophysics Data System (ADS)

    Mita, Yoshio; Lebrasseur, Eric; Okamoto, Yuki; Marty, Frédéfic; Setoguchi, Ryota; Yamada, Kentaro; Mori, Isao; Morishita, Satoshi; Imai, Yoshiaki; Hosaka, Kota; Hirakawa, Atsushi; Inoue, Shu; Kubota, Masanori; Denoual, Matthieu

    2017-06-01

    Since the 2000s, several countries have established micro- and nanofabrication platforms for the research and education community as national projects. By combining such platforms with VLSI multichip foundry services, various integrated devices, referred to as “CMOS-MEMS”, can be realized without constructing an entire cleanroom. In this paper, we summarize MEMS-last postprocess schemes for CMOS devices on a bulk silicon wafer as well as on a silicon-on-insulator (SOI) wafer using an open-access cleanroom of the Nanotechnology Platform of MEXT Japan. The integration devices presented in this article are free-standing structures and postprocess isolated LSI devices. Postprocess issues are identified with their solutions, such as the reactive ion etching (RIE) lag for dry release and the impact of the deep RIE (DRIE) postprocess on transistor characteristics. Integration with nonsilicon materials is proposed as one of the future directions.

  14. Dynamic mechanical properties of hydroxyapatite/polyethylene oxide nanocomposites: characterizing isotropic and post-processing microstructures

    NASA Astrophysics Data System (ADS)

    Shofner, Meisha; Lee, Ji Hoon

    2012-02-01

    Compatible component interfaces in polymer nanocomposites can be used to facilitate a dispersed morphology and improved physical properties as has been shown extensively in experimental results concerning amorphous matrix nanocomposites. In this research, a block copolymer compatibilized interface is employed in a semi-crystalline matrix to prevent large scale nanoparticle clustering and enable microstructure construction with post-processing drawing. The specific materials used are hydroxyapatite nanoparticles coated with a polyethylene oxide-b-polymethacrylic acid block copolymer and a polyethylene oxide matrix. Two particle shapes are used: spherical and needle-shaped. Characterization of the dynamic mechanical properties indicated that the two nanoparticle systems provided similar levels of reinforcement to the matrix. For the needle-shaped nanoparticles, the post-processing step increased matrix crystallinity and changed the thermomechanical reinforcement trends. These results will be used to further refine the post-processing parameters to achieve a nanocomposite microstructure with triangulated arrays of nanoparticles.

  15. Improved quantification for local regions of interest in preclinical PET imaging

    NASA Astrophysics Data System (ADS)

    Cal-González, J.; Moore, S. C.; Park, M.-A.; Herraiz, J. L.; Vaquero, J. J.; Desco, M.; Udias, J. M.

    2015-09-01

    In Positron Emission Tomography, there are several causes of quantitative inaccuracy, such as partial volume or spillover effects. The impact of these effects is greater when using radionuclides that have a large positron range, e.g. 68Ga and 124I, which have been increasingly used in the clinic. We have implemented and evaluated a local projection algorithm (LPA), originally evaluated for SPECT, to compensate for both partial-volume and spillover effects in PET. This method is based on the use of a high-resolution CT or MR image, co-registered with a PET image, which permits a high-resolution segmentation of a few tissues within a volume of interest (VOI) centered on a region within which tissue-activity values need to be estimated. The additional boundary information is used to obtain improved activity estimates for each tissue within the VOI, by solving a simple inversion problem. We implemented this algorithm for the preclinical Argus PET/CT scanner and assessed its performance using the radionuclides 18F, 68Ga and 124I. We also evaluated and compared the results obtained when it was applied during the iterative reconstruction, as well as after the reconstruction as a postprocessing procedure. In addition, we studied how LPA can help to reduce the ‘spillover contamination’, which causes inaccurate quantification of lesions in the immediate neighborhood of large, ‘hot’ sources. Quantification was significantly improved by using LPA, which provided more accurate ratios of lesion-to-background activity concentration for hot and cold regions. For 18F, the contrast was improved from 3.0 to 4.0 in hot lesions (when the true ratio was 4.0) and from 0.16 to 0.06 in cold lesions (true ratio  =  0.0), when using the LPA postprocessing. Furthermore, activity values estimated within the VOI using LPA during reconstruction were slightly more accurate than those obtained by post-processing, while also visually improving the image contrast and uniformity within the VOI.
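    The "simple inversion problem" used by LPA can be pictured as a small least-squares fit: each tissue inside the VOI is assumed to carry a uniform activity, the CT/MR segmentation supplies the tissue masks, and the blurred contribution of each mask is fitted to the reconstructed PET values. A minimal sketch under those assumptions follows; the Gaussian blur standing in for the scanner response and positron range, and all names, are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lpa_estimate(pet_voi, tissue_masks, psf_sigma):
    """Estimate one activity value per tissue inside a VOI.

    pet_voi      : 3-D array of reconstructed PET values in the VOI.
    tissue_masks : list of boolean 3-D arrays from the high-resolution segmentation.
    psf_sigma    : Gaussian sigma (voxels) standing in for scanner blur plus
                   positron range (an assumption of this sketch).
    """
    # Each column is a blurred unit-activity tissue template.
    A = np.column_stack([
        gaussian_filter(mask.astype(float), psf_sigma).ravel()
        for mask in tissue_masks
    ])
    b = pet_voi.ravel()
    activities, *_ = np.linalg.lstsq(A, b, rcond=None)
    return activities
```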

  16. Improved quantification for local regions of interest in preclinical PET imaging

    PubMed Central

    Cal-González, J.; Moore, S. C.; Park, M.-A.; Herraiz, J. L.; Vaquero, J. J.; Desco, M.; Udias, J. M.

    2015-01-01

    In Positron Emission Tomography, there are several causes of quantitative inaccuracy, such as partial volume or spillover effects. The impact of these effects is greater when using radionuclides that have a large positron range, e.g., 68Ga and 124I, which have been increasingly used in the clinic. We have implemented and evaluated a local projection algorithm (LPA), originally evaluated for SPECT, to compensate for both partial-volume and spillover effects in PET. This method is based on the use of a high-resolution CT or MR image, co-registered with a PET image, which permits a high-resolution segmentation of a few tissues within a volume of interest (VOI) centered on a region within which tissue-activity values need to be estimated. The additional boundary information is used to obtain improved activity estimates for each tissue within the VOI, by solving a simple inversion problem. We implemented this algorithm for the preclinical Argus PET/CT scanner and assessed its performance using the radionuclides 18F, 68Ga and 124I. We also evaluated and compared the results obtained when it was applied during the iterative reconstruction, as well as after the reconstruction as a postprocessing procedure. In addition, we studied how LPA can help to reduce the “spillover contamination”, which causes inaccurate quantification of lesions in the immediate neighborhood of large, “hot” sources. Quantification was significantly improved by using LPA, which provided more accurate ratios of lesion-to-background activity concentration for hot and cold regions. For 18F, the contrast was improved from 3.0 to 4.0 in hot lesions (when the true ratio was 4.0) and from 0.16 to 0.06 in cold lesions (true ratio = 0.0), when using the LPA postprocessing. Furthermore, activity values estimated within the VOI using LPA during reconstruction were slightly more accurate than those obtained by post-processing, while also visually improving the image contrast and uniformity within the VOI. PMID:26334312

  17. A Post-Processing Receiver for the Lunar Laser Communications Demonstration Project

    NASA Technical Reports Server (NTRS)

    Srinivasan, Meera; Birnbaum, Kevin; Cheng, Michael; Quirk, Kevin

    2013-01-01

    The Lunar Laser Communications Demonstration Project undertaken by MIT Lincoln Laboratory and NASA's Goddard Space Flight Center will demonstrate high-rate laser communications from lunar orbit to the Earth. NASA's Jet Propulsion Laboratory is developing a backup ground station supporting a data rate of 39 Mbps that is based on a non-real-time software post-processing receiver architecture. This approach entails processing sample-rate-limited data without feedback in the presence of high uncertainty in downlink clock characteristics under low signal flux conditions. In this paper we present a receiver concept that addresses these challenges with descriptions of the photodetector assembly, sample acquisition and recording platform, and signal processing approach. End-to-end coded simulation and laboratory data analysis results are presented that validate the receiver conceptual design.

  18. A comparison of ensemble post-processing approaches that preserve correlation structures

    NASA Astrophysics Data System (ADS)

    Schefzik, Roman; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Although ensemble forecasts address the major sources of uncertainty, they exhibit biases and dispersion errors and are therefore known to benefit from calibration or statistical post-processing. For instance, the ensemble model output statistics (EMOS) method, also known as the non-homogeneous regression approach (Gneiting et al., 2005), is known to strongly improve forecast skill. EMOS is based on fitting and adjusting a parametric probability density function (PDF). However, EMOS and other common post-processing approaches apply to a single weather quantity at a single location for a single look-ahead time. They are therefore unable to take into account spatial, inter-variable and temporal dependence structures. Recently, many research efforts have been invested both in designing post-processing methods that resolve this drawback and in verification methods that enable the detection of dependence structures. New verification methods are applied to two classes of post-processing methods, both generating physically coherent ensembles. The first class uses ensemble copula coupling (ECC), which starts from EMOS but adjusts the rank structure (Schefzik et al., 2013). The second class is a member-by-member post-processing (MBM) approach that maps each raw ensemble member to a corrected one (Van Schaeybroeck and Vannitsem, 2015). We compare variants of the EMOS-ECC and MBM classes and highlight a specific theoretical connection between them. All post-processing variants are applied in the context of the ensemble system of the European Centre for Medium-Range Weather Forecasts (ECMWF) and compared using multivariate verification tools, including the energy score, the variogram score (Scheuerer and Hamill, 2015) and the band depth rank histogram (Thorarinsdottir et al., 2015). Gneiting, Raftery, Westveld, and Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098-1118. Scheuerer and Hamill, 2015: Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Wea. Rev., 143, 1321-1334. Schefzik, Thorarinsdottir, and Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Statistical Science, 28, 616-640. Thorarinsdottir, Scheuerer, and Heinz, 2015: Assessing the calibration of high-dimensional ensemble forecasts using rank histograms, arXiv:1310.0236. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141, 807-818.
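    A univariate EMOS fit of the kind the abstract starts from can be sketched in a few lines: the predictive PDF is Gaussian with mean affine in the ensemble mean and variance affine in the ensemble variance, and the coefficients are found by minimum CRPS estimation (Gneiting et al., 2005). The sketch stops there; the ECC reordering and MBM mapping that restore multivariate structure are not shown, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a normal forecast N(mu, sigma^2) against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens, obs):
    """Fit N(a + b*mean, c + d*var) by minimizing the mean CRPS over training cases.

    ens: (n_cases, n_members) training ensemble; obs: (n_cases,) verifying observations.
    """
    m = ens.mean(axis=1)
    v = ens.var(axis=1, ddof=1)

    def objective(params):
        a, b, c, d = params
        sigma = np.sqrt(np.maximum(c + d * v, 1e-6))   # keep the variance positive
        return crps_normal(a + b * m, sigma, obs).mean()

    return minimize(objective, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x
```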

  19. Scalable Microfabrication Procedures for Adhesive-Integrated Flexible and Stretchable Electronic Sensors.

    PubMed

    Kang, Dae Y; Kim, Yun-Soung; Ornelas, Gladys; Sinha, Mridu; Naidu, Keerthiga; Coleman, Todd P

    2015-09-16

    New classes of ultrathin flexible and stretchable devices have changed the way modern electronics are designed to interact with their target systems. Though more and more novel technologies surface and steer the way we think about future electronics, there exists an unmet need with regard to optimizing the fabrication procedures for these devices so that large-scale industrial translation is realistic. This article presents an unconventional approach for facile microfabrication and processing of adhesive-peeled (AP) flexible sensors. By assembling AP sensors on a weakly adhering substrate in an inverted fashion, we demonstrate a procedure with 50% reduced end-to-end processing time that achieves greater levels of fabrication yield. The methodology is used to demonstrate the fabrication of electrical and mechanical flexible and stretchable AP sensors that are peeled off their carrier substrates by consumer adhesives. In using this approach, we outline the manner by which adhesion is maintained and buckling is reduced for gold film processing on polydimethylsiloxane substrates. In addition, we demonstrate the compatibility of our methodology with large-scale post-processing using a roll-to-roll approach.

  20. Cryo-balloon catheter localization in fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Kurzendorfer, Tanja; Brost, Alexander; Jakob, Carolin; Mewes, Philip W.; Bourier, Felix; Koch, Martin; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2013-03-01

    Minimally invasive catheter ablation has become the preferred treatment option for atrial fibrillation. Although the standard ablation procedure involves ablation points set by radio-frequency catheters, cryo-balloon catheters have been reported to be more advantageous in certain cases. As electro-anatomical mapping systems do not support cryo-balloon ablation procedures, X-ray guidance is needed. However, current methods to provide support for cryo-balloon catheters in fluoroscopically guided ablation procedures rely heavily on manual user interaction. To improve this, we propose a first method for automatic cryo-balloon catheter localization in fluoroscopic images based on a blob detection algorithm. Our method is evaluated on 24 clinical images from 17 patients. The method successfully detected the cryo-balloon in 22 out of 24 images, yielding a success rate of 91.6%. The successful localizations achieved an accuracy of 1.00 mm ± 0.44 mm. Even though our method currently fails in 8.4% of the available images, it still offers a significant improvement over manual methods. Furthermore, detecting a landmark point along the cryo-balloon catheter can be a very important step for additional post-processing operations.
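    The localization step rests on blob detection, and although the paper's detector is not spelled out in the abstract, an off-the-shelf Laplacian-of-Gaussian detector gives the flavour of it. The contrast handling and every parameter value below are placeholders, not the authors' settings.

```python
import numpy as np
from skimage import exposure
from skimage.feature import blob_log

def detect_balloon_candidates(fluoro_frame, min_r_px, max_r_px):
    """Return candidate (row, col, radius_px) blobs in a fluoroscopic frame."""
    # The balloon appears as a dark disc; invert and stretch the contrast first.
    img = exposure.rescale_intensity(-fluoro_frame.astype(float), out_range=(0.0, 1.0))
    blobs = blob_log(img,
                     min_sigma=min_r_px / np.sqrt(2),
                     max_sigma=max_r_px / np.sqrt(2),
                     num_sigma=10,
                     threshold=0.1)            # placeholder threshold
    blobs[:, 2] *= np.sqrt(2)                  # convert sigma to an approximate radius
    return blobs
```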

  1. Scalable Microfabrication Procedures for Adhesive-Integrated Flexible and Stretchable Electronic Sensors

    PubMed Central

    Kang, Dae Y.; Kim, Yun-Soung; Ornelas, Gladys; Sinha, Mridu; Naidu, Keerthiga; Coleman, Todd P.

    2015-01-01

    New classes of ultrathin flexible and stretchable devices have changed the way modern electronics are designed to interact with their target systems. Though more and more novel technologies surface and steer the way we think about future electronics, there exists an unmet need with regard to optimizing the fabrication procedures for these devices so that large-scale industrial translation is realistic. This article presents an unconventional approach for facile microfabrication and processing of adhesive-peeled (AP) flexible sensors. By assembling AP sensors on a weakly adhering substrate in an inverted fashion, we demonstrate a procedure with 50% reduced end-to-end processing time that achieves greater levels of fabrication yield. The methodology is used to demonstrate the fabrication of electrical and mechanical flexible and stretchable AP sensors that are peeled off their carrier substrates by consumer adhesives. In using this approach, we outline the manner by which adhesion is maintained and buckling is reduced for gold film processing on polydimethylsiloxane substrates. In addition, we demonstrate the compatibility of our methodology with large-scale post-processing using a roll-to-roll approach. PMID:26389915

  2. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective in small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. The algorithm requires no in-plane (axial plane) processing, so the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the standard deviation (SD) by up to 40% without affecting the spatial resolution in the x-y plane and along the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT imaging.
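    The abstract does not give the exact nonlinear z-interpolation, so the sketch below substitutes a simple nonlinear (median) filter applied along z only; because the kernel spans a single voxel in-plane, the axial-plane resolution is left untouched, which is the property the algorithm relies on.

```python
from scipy.ndimage import median_filter

def z_only_denoise(volume, kernel_z=3):
    """Reduce noise in a CT volume ordered (z, y, x) by filtering along z only.

    A median filter is used as a stand-in for the paper's nonlinear interpolation;
    the (kernel_z, 1, 1) footprint means no in-plane smoothing occurs.
    """
    return median_filter(volume, size=(kernel_z, 1, 1))
```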

  3. Improving medium-range ensemble streamflow forecasts through statistical post-processing

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainties - e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness and resolution.
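    Of the techniques listed, quantile mapping is the easiest to illustrate: each forecast value is passed through the empirical CDF of past forecasts and mapped onto the empirical CDF of past observations. The sketch below is a generic implementation under that description, not the SHARP post-processor.

```python
import numpy as np

def quantile_map(fcst, train_fcst, train_obs):
    """Map forecast values onto the observed climatology via empirical CDFs."""
    train_fcst = np.sort(np.asarray(train_fcst, dtype=float))
    train_obs = np.sort(np.asarray(train_obs, dtype=float))
    # Non-exceedance probability of each forecast within the forecast climatology...
    p = np.clip(np.searchsorted(train_fcst, fcst, side="right") / train_fcst.size, 0.0, 1.0)
    # ...and the corresponding quantile of the observed climatology.
    return np.quantile(train_obs, p)
```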

  4. MITK global tractography

    NASA Astrophysics Data System (ADS)

    Neher, Peter F.; Stieltjes, Bram; Reisert, Marco; Reicht, Ignaz; Meinzer, Hans-Peter; Fritzsche, Klaus H.

    2012-02-01

    Fiber tracking algorithms yield valuable information for neurosurgery as well as automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open source integration of the global tractography algorithm proposed by Reisert et al.1 into the open source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK broadens the accessibility of tractography algorithms for the scientific community and is an important step towards bringing neuronal tractography closer to a clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for a successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.

  5. Close coupling of pre- and post-processing vision stations using inexact algorithms

    NASA Astrophysics Data System (ADS)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported on using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod is developed together with the Piecewise Error Compensation Algorithm incorporating inexact algorithms, i.e., fuzzy logic, neural networks and the neural fuzzy technique. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system can produce excellent results, much better than a human operator.

  6. Postprocessing techniques for 3D non-linear structures

    NASA Technical Reports Server (NTRS)

    Gallagher, Richard S.

    1987-01-01

    This paper reviews how graphics postprocessing techniques are currently used to examine the results of 3-D nonlinear analyses, presents some new techniques that take advantage of recent technology, and discusses how these results relate to both the finite element model and its geometric parent.

  7. Mechanical behavior of post-processed Inconel 718 manufactured through the electron beam melting process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirka, Michael M.; Medina, Frank; Dehoff, Ryan R.

    Here, the electron beam melting (EBM) process was used to fabricate Inconel 718. The microstructure and tensile properties were characterized in both the as-fabricated and post-processed states, transverse (T-orientation) and longitudinal (L-orientation) to the build direction. Post-processing involved both hot isostatic pressing (HIP) and solution treatment and aging (STA) to homogenize the microstructure. In the as-fabricated state, EBM Inconel 718 exhibits a spatially dependent microstructure that is a function of build height. Spanning the last few layers is a cored dendritic structure comprised of the products (carbides and Laves phase) predicted under equilibrium solidification conditions. With increasing distance from the build's top surface, the cored dendritic structure becomes increasingly homogeneous, with complete dissolution of the secondary dendrite arms. Further, temporal phase kinetics are observed to lead to the dissolution of the strengthening γ″ phase and precipitation of networks of fine δ needles that span the grains. Microstructurally, post-processing resulted in dissolution of the δ networks and homogeneous precipitation of γ″ throughout the height of the build. In the as-fabricated state, the monotonic tensile behavior exhibits a height sensitivity within the T-orientation at both 20 and 650 °C. Along the L-orientation, the tensile behavior exhibits strength values comparable to the reference wrought material in the fully heat-treated state. After post-processing, the yield strength, ultimate strength, and elongation at failure of the EBM Inconel 718 were observed to have beneficially increased compared to the as-fabricated material. Further, as a result of post-processing, the spatial variance of the ultimate strength and elongation at failure within the transverse direction decreased by factors of 4 and 3, respectively.

  8. Mechanical behavior of post-processed Inconel 718 manufactured through the electron beam melting process

    DOE PAGES

    Kirka, Michael M.; Medina, Frank; Dehoff, Ryan R.; ...

    2016-10-21

    Here, the electron beam melting (EBM) process was used to fabricate Inconel 718. The microstructure and tensile properties were characterized in both the as-fabricated and post-processed states, transverse (T-orientation) and longitudinal (L-orientation) to the build direction. Post-processing involved both hot isostatic pressing (HIP) and solution treatment and aging (STA) to homogenize the microstructure. In the as-fabricated state, EBM Inconel 718 exhibits a spatially dependent microstructure that is a function of build height. Spanning the last few layers is a cored dendritic structure comprised of the products (carbides and Laves phase) predicted under equilibrium solidification conditions. With increasing distance from the build's top surface, the cored dendritic structure becomes increasingly homogeneous, with complete dissolution of the secondary dendrite arms. Further, temporal phase kinetics are observed to lead to the dissolution of the strengthening γ″ phase and precipitation of networks of fine δ needles that span the grains. Microstructurally, post-processing resulted in dissolution of the δ networks and homogeneous precipitation of γ″ throughout the height of the build. In the as-fabricated state, the monotonic tensile behavior exhibits a height sensitivity within the T-orientation at both 20 and 650 °C. Along the L-orientation, the tensile behavior exhibits strength values comparable to the reference wrought material in the fully heat-treated state. After post-processing, the yield strength, ultimate strength, and elongation at failure of the EBM Inconel 718 were observed to have beneficially increased compared to the as-fabricated material. Further, as a result of post-processing, the spatial variance of the ultimate strength and elongation at failure within the transverse direction decreased by factors of 4 and 3, respectively.

  9. Design and Use of Microphone Directional Arrays for Aeroacoustic Measurements

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Brooks, Thomas F.; Hunter, William W., Jr.; Meadows, Kristine R.

    1998-01-01

    An overview of the development of two microphone directional arrays for aeroacoustic testing is presented. These arrays were specifically developed to measure airframe noise in the NASA Langley Quiet Flow Facility. A large aperture directional array using 35 flush-mounted microphones was constructed to obtain high resolution noise localization maps around airframe models. This array possesses a maximum diagonal aperture size of 34 inches. A unique logarithmic spiral layout design was chosen for the targeted frequency range of 2-30 kHz. Complementing the large array is a small aperture directional array, constructed to obtain spectra and directivity information from regions on the model. This array, possessing 33 microphones with a maximum diagonal aperture size of 7.76 inches, is easily moved about the model in elevation and azimuth. Custom microphone shading algorithms have been developed to provide a frequency- and position-invariant sensing area from 10-40 kHz, with an overall targeted frequency range for the array of 5-60 kHz. Both arrays are employed in acoustic measurements of a 6-percent-of-full-scale airframe model consisting of a main-element NACA 632-215 wing section with a 30-percent-chord half-span flap. Representative data obtained from these measurements are presented, along with details of the array calibration and data post-processing procedures.
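    The noise localization maps such arrays produce typically come from conventional frequency-domain beamforming: the microphone cross-spectral matrix is steered to a grid of candidate source points and the output power is mapped. The sketch below is textbook beamforming under that description, with the custom shading weights omitted; it is not the Langley processing chain.

```python
import numpy as np

def conventional_beamform(csm, mic_xyz, grid_xyz, freq, c=343.0):
    """Conventional frequency-domain beamforming power map.

    csm      : (M, M) cross-spectral matrix at one frequency.
    mic_xyz  : (M, 3) microphone positions in metres.
    grid_xyz : (N, 3) scan-grid points in metres.
    Returns the (N,) beamformer output power over the grid.
    """
    k = 2.0 * np.pi * freq / c
    # Distance from every grid point to every microphone: shape (N, M).
    r = np.linalg.norm(grid_xyz[:, None, :] - mic_xyz[None, :, :], axis=-1)
    steer = np.exp(-1j * k * r) / r                       # monopole steering vectors
    steer /= np.linalg.norm(steer, axis=1, keepdims=True)
    # Output power at each grid point: e^H C e.
    return np.real(np.einsum("nm,mk,nk->n", steer.conj(), csm, steer))
```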

  10. Analyses in Support of Risk-Informed Natural Gas Vehicle Maintenance Facility Codes and Standards: Phase II.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, Myra L.; LaFleur, Chris Bensdotter; Muna, Alice Baca

    Safety standards development for maintenance facilities of liquid and compressed natural gas fueled vehicles is required to ensure proper facility design and operating procedures. Standard development organizations are utilizing risk-informed concepts to develop natural gas vehicle (NGV) codes and standards so that maintenance facilities meet acceptable risk levels. The present report summarizes Phase II work for existing NGV repair facility code requirements and highlights inconsistencies that need quantitative analysis into their effectiveness. A Hazardous and Operability study was performed to identify key scenarios of interest using risk ranking. Detailed simulations and modeling were performed to estimate the location and behavior of natural gas releases based on these scenarios. Specific code conflicts were identified, and ineffective code requirements were highlighted and resolutions proposed. These include ventilation rate basis on area or volume, as well as a ceiling offset which seems ineffective at protecting against flammable gas concentrations. ACKNOWLEDGEMENTS The authors gratefully acknowledge Bill Houf (SNL -- Retired) for his assistance with the set-up and post-processing of the numerical simulations. The authors also acknowledge Doug Horne (retired) for his helpful discussions. We would also like to acknowledge the support from the Clean Cities program of DOE's Vehicle Technology Office.

  11. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
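    The regularization variant can be pictured as a penalized, non-negative least-squares fit of spectral weights to the step-wedge measurements. The forward model below (transmission as a weighted sum of exponentials) and all names are simplifying assumptions of this sketch, not the paper's quadratic-programming formulation or its cross-validated parameter choice.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_spectrum(mu, thicknesses, transmission, lam):
    """Estimate non-negative spectral weights from step-wedge transmission data.

    Assumed forward model: T(t) = sum_E w(E) * exp(-mu(E) * t).
    mu           : (n_energies,) wedge attenuation coefficients.
    thicknesses  : (n_steps,) wedge thicknesses.
    transmission : (n_steps,) measured transmitted fractions.
    lam          : Tikhonov weight guarding against ill-conditioning.
    """
    mu = np.asarray(mu, dtype=float)
    thicknesses = np.asarray(thicknesses, dtype=float)
    transmission = np.asarray(transmission, dtype=float)
    A = np.exp(-np.outer(thicknesses, mu))                # (n_steps, n_energies)
    # Augment with sqrt(lam)*I rows so ||w||^2 is penalized, then solve with w >= 0.
    A_reg = np.vstack([A, np.sqrt(lam) * np.eye(mu.size)])
    b_reg = np.concatenate([transmission, np.zeros(mu.size)])
    w, _ = nnls(A_reg, b_reg)
    return w / w.sum()                                    # normalize to unit area
```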

  12. Laser post-processing of halide perovskites for enhanced photoluminescence and absorbance

    NASA Astrophysics Data System (ADS)

    Tiguntseva, E. Y.; Saraeva, I. N.; Kudryashov, S. I.; Ushakova, E. V.; Komissarenko, F. E.; Ishteev, A. R.; Tsypkin, A. N.; Haroldson, R.; Milichko, V. A.; Zuev, D. A.; Makarov, S. V.; Zakhidov, A. A.

    2017-11-01

    Hybrid halide perovskites have emerged as one of the most promising types of materials for thin-film photovoltaic and light-emitting devices. Further boosting their performance is critically important for commercialization. Here we use a femtosecond laser for post-processing of organometallic perovskite (MAPbI3) films. The high-throughput laser approaches include both the integration of laser-ablated silicon nanoparticles and laser-induced annealing. Using these techniques, we achieve strong enhancement of photoluminescence as well as of useful light absorption. As a result, we experimentally observed a 10-fold enhancement of absorbance in a perovskite layer with the silicon nanoparticles. Direct laser annealing increases photoluminescence by over 130% and absorbance by over 300% in the near-IR range. We believe that the developed approaches pave the way to novel scalable and highly effective designs of perovskite-based devices.

  13. A post-processing system for automated rectification and registration of spaceborne SAR imagery

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Kwok, Ronald; Pang, Shirley S.

    1987-01-01

    An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.

  14. Quantifying model uncertainty in seasonal Arctic sea-ice forecasts

    NASA Astrophysics Data System (ADS)

    Blanchard-Wrigglesworth, Edward; Barthélemy, Antoine; Chevallier, Matthieu; Cullather, Richard; Fučkar, Neven; Massonnet, François; Posey, Pamela; Wang, Wanqiu; Zhang, Jinlun; Ardilouze, Constantin; Bitz, Cecilia; Vernieres, Guillaume; Wallcraft, Alan; Wang, Muyin

    2017-04-01

    Dynamical model forecasts in the Sea Ice Outlook (SIO) of September Arctic sea-ice extent over the last decade have shown lower skill than that found in both idealized model experiments and hindcasts of previous decades. Additionally, it is unclear how different model physics, initial conditions or post-processing techniques contribute to SIO forecast uncertainty. In this work, we have produced a seasonal forecast of 2015 Arctic summer sea ice using SIO dynamical models initialized with identical sea-ice thickness in the central Arctic. Our goals are to calculate the relative contribution of model uncertainty and irreducible error growth to forecast uncertainty and assess the importance of post-processing, and to contrast pan-Arctic forecast uncertainty with regional forecast uncertainty. We find that prior to forecast post-processing, model uncertainty is the main contributor to forecast uncertainty, whereas after forecast post-processing forecast uncertainty is reduced overall, model uncertainty is reduced by an order of magnitude, and irreducible error growth becomes the main contributor to forecast uncertainty. While all models generally agree in their post-processed forecasts of September sea-ice volume and extent, this is not the case for sea-ice concentration. Additionally, forecast uncertainty of sea-ice thickness grows at a much higher rate along Arctic coastlines relative to the central Arctic ocean. Potential ways of offering spatial forecast information based on the timescale over which the forecast signal beats the noise are also explored.

  15. Random Sequence for Optimal Low-Power Laser Generated Ultrasound

    NASA Astrophysics Data System (ADS)

    Vangi, D.; Virga, A.; Gulino, M. S.

    2017-08-01

    Low-power laser-generated ultrasound has lately been gaining importance in the research world, thanks to the possibility of investigating the structural integrity of a mechanical component through a non-contact, Non-Destructive Testing (NDT) procedure. The ultrasound signals are, however, very low in amplitude, making it necessary to use pre-processing and post-processing operations to detect them. The cross-correlation technique is used in this work, meaning that a random signal must be used as the laser input. For this purpose, a highly random and simple-to-create code called the T sequence, capable of enhancing ultrasound detectability and not previously available in the state of the art, is introduced. Several important parameters that characterize the T sequence can influence the process: the number of pulses Npulses, the pulse duration δ, and the distance between pulses dpulses. A finite element (FE) model of a 3 mm steel disk was initially developed to study analytically the longitudinal ultrasound generation mechanism and the obtainable outputs. Later, experimental tests showed that the T sequence is highly flexible for ultrasound detection purposes, making it optimal to use high Npulses and δ but low dpulses. In the end, apart from describing the phenomena that arise in the low-power laser generation process, the results of this study are also important for setting up an effective NDT procedure using this technology.
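    Detection hinges on cross-correlating the recorded signal with the known random excitation: the distributed low-power input is compressed into a peak at the ultrasound arrival time. The sketch below shows only that generic step; the rules for building the T sequence itself are not reproduced.

```python
import numpy as np
from scipy.signal import correlate

def detect_arrival(received, excitation, fs):
    """Return the lag (in seconds) of the cross-correlation peak between the
    recorded signal and the known random excitation sequence."""
    xc = correlate(received, excitation, mode="full")
    lags = np.arange(-len(excitation) + 1, len(received))   # sample lags for mode="full"
    return lags[np.argmax(np.abs(xc))] / fs
```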

  16. Acoustic Data Processing and Transient Signal Analysis for the Hybrid Wing Body 14- by 22-Foot Subsonic Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Brooks, Thomas F.; Humphreys, William M.; Spalt, Taylor B.; Stead, Daniel J.

    2014-01-01

    An advanced vehicle concept, the HWB N2A-EXTE aircraft design, was tested in NASA Langley's 14- by 22-Foot Subsonic Wind Tunnel to study its acoustic characteristics for various propulsion system installation and airframe configurations. A significant upgrade to existing data processing systems was implemented, with a focus on portability and a reduction in turnaround time. These requirements were met by updating codes originally written for a cluster environment and transferring them to a local workstation while enabling GPU computing. Post-test, additional processing of the time series was required to remove transient hydrodynamic gusts from some of the microphone time series. A novel automated procedure was developed to analyze and reject contaminated blocks of data, under the assumption that the desired acoustic signal of interest was a band-limited stationary random process, and of lower variance than the hydrodynamic contamination. The procedure is shown to successfully identify and remove contaminated blocks of data and retain the desired acoustic signal. Additional corrections to the data, mainly background subtraction, shear layer refraction calculations, atmospheric attenuation and microphone directivity corrections, were all necessary for initial analysis and noise assessments. These were implemented for the post-processing of spectral data, and are shown to behave as expected.
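    The rejection logic described (the acoustic signal of interest is band-limited, stationary, and of lower variance than the hydrodynamic contamination) can be sketched as a robust threshold on block variances. The median-plus-MAD rule and its multiplier below are assumptions of this sketch, not the paper's actual criterion.

```python
import numpy as np

def reject_contaminated_blocks(x, block_len, n_mad=5.0):
    """Split a time series into blocks and drop those with outlying variance.

    Blocks whose variance exceeds median + n_mad * MAD of all block variances are
    flagged as contaminated (e.g. by hydrodynamic gusts) and removed.
    Returns the retained blocks and the boolean keep-mask.
    """
    x = np.asarray(x, dtype=float)
    n_blocks = x.size // block_len
    blocks = x[: n_blocks * block_len].reshape(n_blocks, block_len)
    var = blocks.var(axis=1)
    med = np.median(var)
    mad = np.median(np.abs(var - med))
    keep = var <= med + n_mad * mad
    return blocks[keep], keep
```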

  17. Addendum: ``Hard X-Rays and Gamma Rays from Type Ia Supernovae'' (ApJ, 492, 228 [1998])

    NASA Astrophysics Data System (ADS)

    Höflich, Peter; Wheeler, J. C.

    2004-04-01

    We report a subtle error in the normalization of the absolute flux published in our original article (hereafter HWK98), and some minor updates. The normalization problem is related to the post-processing. As a consequence, the reported line fluxes are too large at early times. Note that Figure 1 of P. Höflich (ApJ, 492, 228 [1998]) has been transferred from HWK98. Results of previous papers are not affected (E. Müller, P. Höflich, A. M. Khokhlov, & E. Müller, ApJ, 492, 228 [1998]; P. Höflich, A. M. Khokhlov, & E. Müller, ApJ, 492, 228 [1998]). For calculating the γ-ray spectra, the γ-ray transport is solved via a Monte Carlo code that produces an output file containing the Eddington flux, the energy input by radioactive decay and escape probability, ζ, of γ-ray photons. In a postprocessing step, the spectrum is renormalized and convolved with the instrumental response function of the γ-ray telescope. A two-step procedure is used to obtain the emergent spectra to separate the CPU-intensive Monte Carlo transport calculation from the ``fast'' second step, allowing us to study the influence of the instrument on the observables (e.g., E. Müller, P. Höflich, A. M. Khokhlov, & E. Müller, ApJ, 492, 228 [1998

  18. Bibliographic Post-Processing with the TIS Intelligent Gateway: Analytical and Communication Capabilities.

    ERIC Educational Resources Information Center

    Burton, Hilary D.

    TIS (Technology Information System) is an intelligent gateway system capable of performing quantitative evaluation and analysis of bibliographic citations using a set of Process functions. Originally developed by Lawrence Livermore National Laboratory (LLNL) to analyze information retrieved from three major federal databases, DOE/RECON,…

  19. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
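    One standard calibrator of this kind is Platt scaling: a one-dimensional logistic regression fitted on held-out classifier scores and then used to turn new scores into probabilities. The sketch below is that generic baseline, not the calibration methods developed in the cited work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_scale(scores_cal, labels_cal, scores_new):
    """Map raw classifier scores to calibrated probabilities via logistic regression.

    scores_cal, labels_cal : held-out calibration scores and their 0/1 labels.
    scores_new             : scores to convert into calibrated probabilities.
    """
    calibrator = LogisticRegression()
    calibrator.fit(np.asarray(scores_cal).reshape(-1, 1), labels_cal)
    return calibrator.predict_proba(np.asarray(scores_new).reshape(-1, 1))[:, 1]
```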

  20. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    NASA Astrophysics Data System (ADS)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, using streamflow sub-samples stratified by quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples stratified by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological dynamics and processes, i.e., sample heterogeneity. Different processes, such as rising limbs or recessions, with different uncertainties, can correspond to the same streamflow range. The dynamical approach improves the reliability, skill and sharpness of the forecasts and globally reduces confidence interval widths. When compared in detail, the dynamical approach allows a noticeable reduction of confidence intervals during recessions, where uncertainty is relatively low, and a slight increase of confidence intervals during rising limbs or snowmelt, where uncertainty is greater. The dynamical approach, validated by forecasters' experience that considered the empirical approach not discriminative enough, improved forecasters' confidence and the communication of uncertainties. Montanari, A. and Brath, A. (2004). A stochastic approach for assessing the uncertainty of rainfall-runoff simulations. Water Resources Research, 40, W01106, doi:10.1029/2003WR002540. Schaefli, B., Balin Talamba, D. and Musy, A. (2007). Quantifying hydrological modeling errors through a mixture of normal distributions. Journal of Hydrology, 332, 303-315.

  1. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  2. Adaptive clustering procedure for continuous gravitational wave searches

    NASA Astrophysics Data System (ADS)

    Singh, Avneet; Papa, Maria Alessandra; Eggenstein, Heinz-Bernd; Walsh, Sinéad

    2017-10-01

    In hierarchical searches for continuous gravitational waves, clustering of candidates is an important post-processing step because it reduces the number of noise candidates that are followed up at successive stages [J. Aasi et al., Phys. Rev. Lett. 88, 102002 (2013), 10.1103/PhysRevD.88.102002; B. Behnke, M. A. Papa, and R. Prix, Phys. Rev. D 91, 064007 (2015), 10.1103/PhysRevD.91.064007; M. A. Papa et al., Phys. Rev. D 94, 122006 (2016), 10.1103/PhysRevD.94.122006]. Previous clustering procedures bundled together nearby candidates ascribing them to the same root cause (be it a signal or a disturbance), based on a predefined cluster volume. In this paper, we present a procedure that adapts the cluster volume to the data itself and checks for consistency of such volume with what is expected from a signal. This significantly improves the noise rejection capabilities at fixed detection threshold, and at fixed computing resources for the follow-up stages, this results in an overall more sensitive search. This new procedure was employed in the first Einstein@Home search on data from the first science run of the advanced LIGO detectors (O1) [LIGO Scientific Collaboration and Virgo Collaboration, arXiv:1707.02669 [Phys. Rev. D (to be published)

  3. MetaboLyzer: A Novel Statistical Workflow for Analyzing Post-Processed LC/MS Metabolomics Data

    PubMed Central

    Mak, Tytus D.; Laiakis, Evagelia C.; Goudarzi, Maryam; Fornace, Albert J.

    2014-01-01

    Metabolomics, the global study of small molecules in a particular system, has in the last few years risen to become a primary –omics platform for the study of metabolic processes. With the ever-increasing pool of quantitative data yielded from metabolomic research, specialized methods and tools with which to analyze and extract meaningful conclusions from these data are becoming more and more crucial. Furthermore, the depth of knowledge and expertise required to undertake a metabolomics oriented study is a daunting obstacle to investigators new to the field. As such, we have created a new statistical analysis workflow, MetaboLyzer, which aims to both simplify analysis for investigators new to metabolomics, as well as provide experienced investigators the flexibility to conduct sophisticated analysis. MetaboLyzer’s workflow is specifically tailored to the unique characteristics and idiosyncrasies of postprocessed liquid chromatography/mass spectrometry (LC/MS) based metabolomic datasets. It utilizes a wide gamut of statistical tests, procedures, and methodologies that belong to classical biostatistics, as well as several novel statistical techniques that we have developed specifically for metabolomics data. Furthermore, MetaboLyzer conducts rapid putative ion identification and putative biologically relevant analysis via incorporation of four major small molecule databases: KEGG, HMDB, Lipid Maps, and BioCyc. MetaboLyzer incorporates these aspects into a comprehensive workflow that outputs easy to understand statistically significant and potentially biologically relevant information in the form of heatmaps, volcano plots, 3D visualization plots, correlation maps, and metabolic pathway hit histograms. For demonstration purposes, a urine metabolomics data set from a previously reported radiobiology study in which samples were collected from mice exposed to gamma radiation was analyzed. MetaboLyzer was able to identify 243 statistically significant ions out of a total of 1942. Numerous putative metabolites and pathways were found to be biologically significant from the putative ion identification workflow. PMID:24266674

  4. Readily prepared biodegradable nanoparticles to formulate poorly water soluble drugs improving their pharmacological properties: The example of trabectedin.

    PubMed

    Capasso Palmiero, Umberto; Morosi, Lavinia; Bello, Ezia; Ponzo, Marianna; Frapolli, Roberta; Matteo, Cristina; Ferrari, Mariella; Zucchetti, Massimo; Minoli, Lucia; De Maglie, Marcella; Romanelli, Pierpaolo; Morbidelli, Massimo; D'Incalci, Maurizio; Moscatelli, Davide

    2018-04-28

    The improvement of the pharmacological profile of lipophilic drug formulations is one of the main successes achieved using nanoparticles (NPs) in medicine. However, the complex synthesis procedure and numerous post-processing steps hamper the cost-effective use of these formulations. In this work, an approach which requires only a syringe to produce self-assembling biodegradable and biocompatible poly(caprolactone)-based NPs is developed. The effective synthesis of monodisperse NPs has been made possible by the optimization of the block-copolymer synthesized via a combination of ring opening polymerization and reversible addition-fragmentation chain transfer polymerization. These NPs can be used to formulate lipophilic drugs that are barely soluble in water, such as trabectedin, a potent anticancer therapeutic. Its biodistribution and antitumor activity have been compared with the commercially available formulation Yondelis®. The results indicate that this trabectedin NP formulation performs with the same antitumor activity as Yondelis®, but does not have the drawback of severe local vascular toxicity in the injection site. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Blood Vessel Extraction in Color Retinal Fundus Images with Enhancement Filtering and Unsupervised Classification

    PubMed Central

    2017-01-01

    Retinal blood vessels have a significant role in the diagnosis and treatment of various retinal diseases such as diabetic retinopathy, glaucoma, arteriosclerosis, and hypertension. For this reason, retinal vasculature extraction is important in order to help specialists in the diagnosis and treatment of systemic diseases. In this paper, a novel approach is developed to extract the retinal blood vessel network. Our method comprises four stages: (1) a preprocessing stage to prepare the dataset for segmentation; (2) an enhancement procedure including Gabor, Frangi, and Gauss filters applied separately before a top-hat transform; (3) a hard and soft clustering stage, using K-means and Fuzzy C-means (FCM), to obtain the binary vessel map; and (4) a postprocessing step that removes falsely segmented isolated regions. The method is tested on color retinal images obtained from the STARE and DRIVE databases, which are available online. As a result, the Gabor filter followed by K-means clustering achieves 95.94% and 95.71% accuracy for the STARE and DRIVE databases, respectively, which is acceptable for diagnosis systems. PMID:29065611
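    A drastically reduced version of the four-stage pipeline can be sketched with standard library calls: one enhancement filter (Frangi), one hard-clustering step (K-means), and the removal of small isolated regions. The top-hat transform, the Gabor and Gauss variants, and the FCM branch are omitted, and all parameter values are placeholders rather than the paper's settings.

```python
import numpy as np
from skimage.filters import frangi
from skimage.morphology import remove_small_objects
from sklearn.cluster import KMeans

def extract_vessels(rgb_fundus, min_region_px=64):
    """Minimal vessel-map sketch: Frangi enhancement + K-means + cleanup."""
    green = rgb_fundus[..., 1].astype(float)   # vessels contrast best in the green channel
    enhanced = frangi(green)                   # responds to dark, elongated structures
    # Hard clustering of the enhancement responses into vessel / background.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        enhanced.reshape(-1, 1)).reshape(enhanced.shape)
    vessel_label = np.argmax([enhanced[labels == k].mean() for k in (0, 1)])
    vessel_map = labels == vessel_label
    # Post-processing: drop falsely segmented isolated regions.
    return remove_small_objects(vessel_map, min_size=min_region_px)
```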

  6. An automated method to find reaction mechanisms and solve the kinetics in organometallic catalysis.

    PubMed

    Varela, J A; Vázquez, S A; Martínez-Núñez, E

    2017-05-01

    A novel computational method is proposed in this work for use in discovering reaction mechanisms and solving the kinetics of transition metal-catalyzed reactions. The method does not rely on either chemical intuition or assumed a priori mechanisms, and it works in a fully automated fashion. Its core is a procedure, recently developed by one of the authors, that combines accelerated direct dynamics with an efficient geometry-based post-processing algorithm to find transition states (Martinez-Nunez, E., J. Comput. Chem. 2015 , 36 , 222-234). In the present work, several auxiliary tools have been added to deal with the specific features of transition metal catalytic reactions. As a test case, we chose the cobalt-catalyzed hydroformylation of ethylene because of its well-established mechanism, and the fact that it has already been used in previous automated computational studies. Besides the generally accepted mechanism of Heck and Breslow, several side reactions, such as hydrogenation of the alkene, emerged from our calculations. Additionally, the calculated rate law for the hydroformylation reaction agrees reasonably well with those obtained in previous experimental and theoretical studies.

  7. Comparative study of CW, nanosecond- and femtosecond-pulsed laser microcutting of AZ31 magnesium alloy stents.

    PubMed

    Gökhan Demir, Ali; Previtali, Barbara

    2014-06-01

    Magnesium alloys constitute an interesting solution for cardiovascular stents due to their biocompatibility and biodegradability in the human body. Laser microcutting is the industrially accepted method for stent manufacturing. However, the laser-material interaction should be well investigated to control the quality characteristics of the microcutting process, which concern the surface roughness, chemical composition, and microstructure of the final device. Despite recent developments in industrial laser systems, a universal laser source that can be manipulated flexibly in terms of process parameters is far from reality. Therefore, comparative studies are required to demonstrate processing capabilities. In particular, the laser pulse duration is a key factor determining the processing regime. This work approaches the laser microcutting of AZ31 Mg alloy from the perspective of a comparative study to evaluate the machining capabilities in continuous wave (CW), ns- and fs-pulsed regimes. Three industrial-grade machining systems were compared to reach a benchmark in machining quality, productivity, and ease of postprocessing. The results confirmed that machining quality increases when moving toward the ultrashort-pulse domain, but the need for postprocessing remains. The real advantage of ultrashort-pulsed machining was the ease of postprocessing and the preservation of the geometrical integrity of the stent mesh after chemical etching. As a result, the overall production cycle time was shortest for the fs-pulsed laser system, despite the fact that the CW laser system provided the highest cutting speed.

  8. Assimilating the Future for Better Forecasts and Earlier Warnings

    NASA Astrophysics Data System (ADS)

    Du, H.; Wheatcroft, E.; Smith, L. A.

    2016-12-01

    Multi-model ensembles have become popular tools to account for some of the uncertainty due to model inadequacy in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently or with different primary target variables, each is likely to have different dynamical strengths and weaknesses. With statistical post-processing, such information is carried only by the simulations of a single model ensemble: no advantage is taken of it to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of operationally integrating the dynamical information about the future from each individual model. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches were originally designed to improve state estimation from the past up to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.

  9. Channel mapping river miles 29–62 of the Colorado River in Grand Canyon National Park, Arizona, May 2009

    USGS Publications Warehouse

    Kaplinski, Matt; Hazel, Joseph E.; Grams, Paul E.; Kohl, Keith; Buscombe, Daniel D.; Tusso, Robert B.

    2017-03-23

    Bathymetric, topographic, and grain-size data were collected in May 2009 along a 33-mi reach of the Colorado River in Grand Canyon National Park, Arizona. The study reach is located from river miles 29 to 62 at the confluence of the Colorado and Little Colorado Rivers. Channel bathymetry was mapped using multibeam and singlebeam echosounders, subaerial topography was mapped using ground-based total-stations, and bed-sediment grain-size data were collected using an underwater digital microscope system. These data were combined to produce digital elevation models, spatially variable estimates of digital elevation model uncertainty, georeferenced grain-size data, and bed-sediment distribution maps. This project is a component of a larger effort to monitor the status and trends of sand storage along the Colorado River in Grand Canyon National Park. This report documents the survey methods and post-processing procedures, digital elevation model production and uncertainty assessment, and procedures for bed-sediment classification, and presents the datasets resulting from this study.

  10. A phase space model of Fourier ptychographic microscopy

    PubMed Central

    Horstmeyer, Roarke; Yang, Changhuei

    2014-01-01

    A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography’s and FPM’s captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment. PMID:24514995

  11. Mass-conservative reconstruction of Galerkin velocity fields for transport simulations

    NASA Astrophysics Data System (ADS)

    Scudeler, C.; Putti, M.; Paniconi, C.

    2016-08-01

    Accurate calculation of mass-conservative velocity fields from numerical solutions of Richards' equation is central to reliable surface-subsurface flow and transport modeling, for example in long-term tracer simulations to determine catchment residence time distributions. In this study we assess the performance of a local Larson-Niklasson (LN) post-processing procedure for reconstructing mass-conservative velocities from a linear (P1) Galerkin finite element solution of Richards' equation. This approach, originally proposed for a-posteriori error estimation, modifies the standard finite element velocities by imposing local conservation on element patches. The resulting reconstructed flow field is characterized by continuous fluxes on element edges that can be efficiently used to drive a second order finite volume advective transport model. Through a series of tests of increasing complexity that compare results from the LN scheme to those using velocity fields derived directly from the P1 Galerkin solution, we show that a locally mass-conservative velocity field is necessary to obtain accurate transport results. We also show that the accuracy of the LN reconstruction procedure is comparable to that of the inherently conservative mixed finite element approach, taken as a reference solution, but that the LN scheme has much lower computational costs. The numerical tests examine steady and unsteady, saturated and variably saturated, and homogeneous and heterogeneous cases along with initial and boundary conditions that include dry soil infiltration, alternating solute and water injection, and seepage face outflow. Typical problems that arise with velocities derived from P1 Galerkin solutions include outgoing solute flux from no-flow boundaries, solute entrapment in zones of low hydraulic conductivity, and occurrences of anomalous sources and sinks. In addition to inducing significant mass balance errors, such manifestations often lead to oscillations in concentration values that can moreover cause the numerical solution to explode. These problems do not occur when using LN post-processed velocities.

  12. Living in the Post-Process Writing Center

    ERIC Educational Resources Information Center

    Shafer, Gregory

    2012-01-01

    In this article, the author talks about the college writing center, which is a place of political confrontation, where cultural issues involving dialect and values are probed, contested, and negotiated. He suggests a post-process approach to composition--one that ushers writers into a world of exploration and social engagement--one that transcends…

  13. A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging

    PubMed Central

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi

    2014-01-01

    Purpose: The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and methods: The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifacts rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts with information of gradient directions and b values for the parameter estimation was investigated by using mean square error (MSE). The variance of noise was used as the criterion for MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results: The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05). It indicated that LPCC was more sensitive in detecting motion artifacts. MSEs of all derived parameters from the reserved data after the artifacts rejection were smaller than the variance of the noise. It suggested that influence of rejected artifacts was less than influence of noise on the precision of derived parameters. The proposed workflow improved the image quality and reduced the measurement biases significantly on motion-corrupted datasets (p<0.05). Conclusion: The proposed post-processing workflow was reliable to improve the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862
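
    As a rough, illustrative sketch of the artifact-rejection step described above (not the authors' implementation), the following Python snippet scores each diffusion-weighted volume by its mean local Pearson correlation coefficient (LPCC) against an artifact-free reference and rejects volumes that fall below a threshold. The patch size, threshold value, and use of a single reference volume are assumptions.

        import numpy as np

        def local_pearson(a, b, win=5, eps=1e-8):
            """Mean local Pearson correlation between two 2-D slices,
            computed over non-overlapping win x win patches."""
            h, w = a.shape
            coeffs = []
            for i in range(0, h - win + 1, win):
                for j in range(0, w - win + 1, win):
                    pa = a[i:i + win, j:j + win].ravel()
                    pb = b[i:i + win, j:j + win].ravel()
                    pa = pa - pa.mean()
                    pb = pb - pb.mean()
                    denom = np.sqrt((pa ** 2).sum() * (pb ** 2).sum())
                    if denom > eps:
                        coeffs.append(float((pa * pb).sum() / denom))
            return float(np.mean(coeffs)) if coeffs else 0.0

        def reject_artifact_volumes(dwi, reference, threshold=0.6):
            """dwi: (nx, ny, nz, nvol); reference: (nx, ny, nz) artifact-free volume.
            Returns indices of volumes whose mean slice-wise LPCC falls below threshold."""
            rejected = []
            for v in range(dwi.shape[3]):
                scores = [local_pearson(dwi[:, :, z, v], reference[:, :, z])
                          for z in range(dwi.shape[2])]
                if np.mean(scores) < threshold:
                    rejected.append(v)
            return rejected

        # toy usage with synthetic data: volume 2 is replaced by unrelated data ("motion")
        rng = np.random.default_rng(0)
        ref = rng.random((32, 32, 10))
        vols = np.stack([ref + 0.05 * rng.random((32, 32, 10)) for _ in range(4)], axis=3)
        vols[:, :, :, 2] = rng.random((32, 32, 10))
        print(reject_artifact_volumes(vols, ref))   # -> [2]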

  14. A robust post-processing workflow for datasets with motion artifacts in diffusion kurtosis imaging.

    PubMed

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X; Wan, Mingxi

    2014-01-01

    The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifacts rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts with information of gradient directions and b values for the parameter estimation was investigated by using mean square error (MSE). The variance of noise was used as the criterion for MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05). It indicated that LPCC was more sensitive in detecting motion artifacts. MSEs of all derived parameters from the reserved data after the artifacts rejection were smaller than the variance of the noise. It suggested that influence of rejected artifacts was less than influence of noise on the precision of derived parameters. The proposed workflow improved the image quality and reduced the measurement biases significantly on motion-corrupted datasets (p<0.05). The proposed post-processing workflow was reliable to improve the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements.

  15. A post-processing method to simulate the generalized RF sheath boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myra, James R.; Kohno, Haruhiko

    For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.

  16. A post-processing method to simulate the generalized RF sheath boundary condition

    DOE PAGES

    Myra, James R.; Kohno, Haruhiko

    2017-10-23

    For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.

  17. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  18. IR-thermography for Quality Prediction in Selective Laser Deburring

    NASA Astrophysics Data System (ADS)

    Möller, Mauritz; Conrad, Christian; Haimerl, Walter; Emmelmann, Claus

    Selective Laser Deburring (SLD) is an innovative edge-refinement process being developed at the Laser Zentrum Nord (LZN) in Hamburg. It offers a wear-free processing of defined radii and bevels at the edges as well as the possibility to deburr several materials with the same laser source. Sheet metal parts of various applications need to be post-processed to remove sharp edges and burrs remaining from the initial production process. Thus, SLD will provide an extended degree of automation for the next generation of manufacturing facilities. This paper investigates the dependence between the deburring result and the temperature field in- and post-process. In order to achieve this, the surface temperature near to the deburred edge is monitored with IR-thermography. Different strategies are discussed for the approach using the IR-information as a quality assurance. Additional experiments are performed to rate the accuracy of the quality prediction method in different deburring applications.

  19. Current role of multidetector computed tomography in imaging of wrist injuries.

    PubMed

    Syed, Mohd Arif; Raj, Vimal; Jeyapalan, Kanagaratnam

    2013-01-01

    Imaging of the wrist is challenging to both radiologists and orthopedic surgeons. This is primarily because of the complex anatomy and functionality of the wrist and the high frequency of injuries sustained to the hand. Ongoing developments in multidetector computed tomography (MDCT) technology, with its "state of the art" postprocessing capabilities, have revolutionized this field. Apart from routine imaging of wrist trauma, it is now possible to assess intrinsic ligaments with MDCT arthrography, thereby avoiding invasive diagnostic arthroscopies. Postoperative wrist imaging can be a diagnostic challenge, and MDCT can be helpful in the assessment of these cases because volume acquisition and excellent postprocessing abilities help to evaluate these wrists in any desired plane and with thinner slices. This article pictorially reviews the current clinical role of MDCT imaging of the wrist in our practice. It also describes the arthrography technique and scanning parameters used at our center.

  20. Multi-model seasonal forecast of Arctic sea-ice: forecast uncertainty at pan-Arctic and regional scales

    NASA Astrophysics Data System (ADS)

    Blanchard-Wrigglesworth, E.; Barthélemy, A.; Chevallier, M.; Cullather, R.; Fučkar, N.; Massonnet, F.; Posey, P.; Wang, W.; Zhang, J.; Ardilouze, C.; Bitz, C. M.; Vernieres, G.; Wallcraft, A.; Wang, M.

    2017-08-01

    Dynamical model forecasts in the Sea Ice Outlook (SIO) of September Arctic sea-ice extent over the last decade have shown lower skill than that found in both idealized model experiments and hindcasts of previous decades. Additionally, it is unclear how different model physics, initial conditions or forecast post-processing (bias correction) techniques contribute to SIO forecast uncertainty. In this work, we have produced a seasonal forecast of 2015 Arctic summer sea ice using SIO dynamical models initialized with identical sea-ice thickness in the central Arctic. Our goals are to calculate the relative contribution of model uncertainty and irreducible error growth to forecast uncertainty and assess the importance of post-processing, and to contrast pan-Arctic forecast uncertainty with regional forecast uncertainty. We find that prior to forecast post-processing, model uncertainty is the main contributor to forecast uncertainty, whereas after forecast post-processing forecast uncertainty is reduced overall, model uncertainty is reduced by an order of magnitude, and irreducible error growth becomes the main contributor to forecast uncertainty. While all models generally agree in their post-processed forecasts of September sea-ice volume and extent, this is not the case for sea-ice concentration. Additionally, forecast uncertainty of sea-ice thickness grows at a much higher rate along Arctic coastlines relative to the central Arctic ocean. Potential ways of offering spatial forecast information based on the timescale over which the forecast signal beats the noise are also explored.
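
    The abstract refers to forecast post-processing as bias correction without giving details; the sketch below illustrates one minimal possibility, a per-model mean-bias correction estimated from hindcasts. The arrays and numbers are purely hypothetical.

        import numpy as np

        def mean_bias_correction(hindcasts, observations, forecast):
            """Remove each model's mean hindcast bias from its new forecast.

            hindcasts   : (n_models, n_years) hindcast September extent per model
            observations: (n_years,) observed September extent
            forecast    : (n_models,) raw forecast for the target year
            Returns bias-corrected forecasts, one per model.
            """
            bias = hindcasts.mean(axis=1) - observations.mean()   # per-model mean bias
            return forecast - bias

        # hypothetical numbers (million km^2), for illustration only
        obs = np.array([4.3, 4.6, 3.6, 5.0, 4.8])
        hind = np.array([[5.1, 5.4, 4.5, 5.9, 5.6],    # model A runs ~0.8 high
                         [3.9, 4.1, 3.2, 4.6, 4.4]])   # model B runs ~0.4 low
        raw = np.array([5.5, 4.0])
        print(mean_bias_correction(hind, obs, raw))    # spread across models shrinks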

  1. Microfocal X-ray computed tomography post-processing operations for optimizing reconstruction volumes of stented arteries during 3D computational fluid dynamics modeling.

    PubMed

    Ladisa, John F; Olson, Lars E; Ropella, Kristina M; Molthen, Robert C; Haworth, Steven T; Kersten, Judy R; Warltier, David C; Pagel, Paul S

    2005-08-01

    Restenosis caused by neointimal hyperplasia (NH) remains an important clinical problem after stent implantation. Restenosis varies with stent geometry, and idealized computational fluid dynamics (CFD) models have indicated that geometric properties of the implanted stent may differentially influence NH. However, 3D studies capturing the in vivo flow domain within stented vessels have not been conducted at a resolution sufficient to detect subtle alterations in vascular geometry caused by the stent and the subsequent temporal development of NH. We present the details and limitations of a series of post-processing operations used in conjunction with microfocal X-ray CT imaging and reconstruction to generate geometrically accurate flow domains within the localized region of a stent several weeks after implantation. Microfocal X-ray CT reconstruction volumes were subjected to an automated program to perform arterial thresholding, spatial orientation, and surface smoothing of stented and unstented rabbit iliac arteries several weeks after antegrade implantation. A transfer function was obtained for the current post-processing methodology containing reconstructed 16 mm stents implanted into rabbit iliac arteries for up to 21 days after implantation and resolved at circumferential and axial resolutions of 32 and 50 microm, respectively. The results indicate that the techniques presented are sufficient to resolve distributions of WSS with 80% accuracy in segments containing 16 surface perturbations over a 16 mm stented region. These methods will be used to test the hypothesis that reductions in normalized wall shear stress (WSS) and increases in the spatial disparity of WSS immediately after stent implantation may spatially correlate with the temporal development of NH within the stented region.
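
    The arterial thresholding and surface-smoothing operations are described only at a high level; the following is a simplified, hypothetical stand-in (Otsu threshold, largest connected component, Gaussian smoothing of the binary mask) rather than the authors' automated program, which also performs spatial orientation of the vessel.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def segment_and_smooth(ct_volume, sigma=1.0):
            """Simplified stand-in for arterial thresholding and surface smoothing:
            Otsu threshold, keep the largest connected component, then smooth the
            binary mask by Gaussian blurring and re-thresholding at 0.5."""
            mask = ct_volume > threshold_otsu(ct_volume)
            labels, n = ndimage.label(mask)
            if n > 1:
                sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
                mask = labels == (np.argmax(sizes) + 1)
            smooth = ndimage.gaussian_filter(mask.astype(float), sigma=sigma) > 0.5
            return smooth

        rng = np.random.default_rng(1)
        vol = rng.normal(0.2, 0.05, size=(40, 40, 40))
        vol[10:30, 10:30, :] += 0.6                  # bright, roughly tubular "lumen"
        print(int(segment_and_smooth(vol).sum()))    # voxels retained in the smoothed mask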

  2. Development of Eulerian Code Modeling for ICF Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Paul A.

    2014-02-27

    One of the most pressing open problems standing in the way of ICF ignition is understanding mix and how it interacts with burn. Experiments were being designed and fielded as part of the Defect-Induced Mix Experiment (DIME) project to obtain data about the extent of material mix and how this mix influenced burn. Experiments on the Omega laser and National Ignition Facility (NIF) provided detailed data for comparison to the Eulerian code RAGE. The Omega experiments were able to resolve the mix and provide “proof of principle” support for subsequent NIF experiments, which were fielded from July 2012 through June 2013. The Omega shots were fired at least once per year between 2009 and 2012. RAGE was not originally designed to model inertial confinement fusion (ICF) implosions. It still lacks lasers, so the code has been validated using an energy source. To test RAGE, the simulation output is compared to data by means of the postprocessing tools that were developed. Here, the various postprocessing tools are described with illustrative examples.

  3. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems, such as multi-disciplinary analysis or using the above procedure for design, remain prohibitive.

  4. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also approaches for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which are different in space and time, but still can give a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows rather than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while ensuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative impact on the calibration error, but makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
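
    As a minimal illustration of one of the post-processing methods cited above (EMOS, Gneiting et al., 2005), the sketch below fits a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance by minimizing the mean CRPS. The spatial penalty discussed in the abstract is not included, and the training data are synthetic.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def crps_gaussian(mu, sigma, y):
            """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against observation y."""
            z = (y - mu) / sigma
            return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

        def fit_emos(ens_mean, ens_var, obs):
            """Fit N(a + b*ens_mean, c + d*ens_var) by minimising the mean CRPS.
            ens_mean, ens_var, obs are 1-D training arrays of equal length."""
            def cost(p):
                a, b, c, d = p
                mu = a + b * ens_mean
                sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))   # keep variance positive
                return np.mean(crps_gaussian(mu, sigma, obs))
            return minimize(cost, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

        # synthetic training data: biased, under-dispersive "ensemble"
        rng = np.random.default_rng(2)
        truth = rng.gamma(2.0, 5.0, size=500)
        common_err = rng.normal(0, 2.0, size=(500, 1))               # error shared by all members
        ens = truth[:, None] + 3.0 + common_err + rng.normal(0, 0.5, size=(500, 10))
        a, b, c, d = fit_emos(ens.mean(axis=1), ens.var(axis=1), truth)
        print(np.round([a, b, c, d], 2))   # fit removes the bias and widens the predictive spread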

  5. Development and Evaluation of a Spectral Analysis Method to Eliminate Organic Interference with Cavity Ring-Down Measurements of Water Isotope Ratios.

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Kim-Hak, D.; Popp, B. N.; Wallsgrove, N.; Kagawa-Viviani, A.; Johnson, J.

    2017-12-01

    Cavity ring-down spectroscopy (CRDS) is a technology based on the spectral absorption of gas molecules of interest at specific spectral regions. The CRDS technique enables the analysis of hydrogen and oxygen stable isotope ratios of water by directly measuring individual isotopologue absorption peaks such as H16OH, H18OH, and D16OH. Early work demonstrated that the accuracy of isotope analysis by CRDS and other laser-based absorption techniques could be compromised by spectral interference from organic compounds, in particular methanol and ethanol, which can be prevalent in ecologically-derived waters. There have been several methods developed by various research groups including Picarro to address the organic interference challenge. Here, we describe an organic fitter and a post-processing algorithm designed to improve the accuracy of the isotopic analysis of the "organic contaminated" water specifically for Picarro models L2130-i and L2140-i. To create the organic fitter, the absorption features of methanol around 7200 cm-1 were characterized and incorporated into spectral analysis. Since there was residual interference remaining after applying the organic fitter, a statistical model was also developed for post-processing correction. To evaluate the performance of the organic fitter and the postprocessing correction, we conducted controlled experiments on the L2130-i for two water samples with different isotope ratios blended with varying amounts of methanol (0-0.5%) and ethanol (0-5%). When the original fitter was not used for spectral analysis, the addition of 0.5% methanol changed the apparent isotopic composition of the water samples by +62‰ for δ18O values and +97‰ for δ2H values, and the addition of 5% ethanol changed the apparent isotopic composition by -0.5‰ for δ18O values and -3‰ for δ2H values. When the organic fitter was used for spectral analysis, the maximum methanol-induced errors were reduced to +4‰ for δ18O values and +5‰ for δ2H values, and the maximum ethanol-induced errors were unchanged. When the organic fitter was combined with the post-processing correction, up to 99.8% of the total methanol-induced errors and 96% of the total ethanol-induced errors could be corrected. The applicability of the algorithm to natural samples such as plant and soil waters will be investigated.

  6. The development of algorithms for the deployment of new version of GEM-detector-based acquisition system

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasiński, Piotr; Linczuk, Paweł; Poźniak, Krzysztof T.; Chernyshova, Maryna; Kasprowicz, Grzegorz; Wojeński, Andrzej; Zabolotny, Wojciech; Zienkiewicz, Paweł

    2016-09-01

    This article gives an overview of the post-processing algorithms implemented in the process of developing and testing the GEM-detector-based acquisition system. Information is given on MEX functions for extended statistics collection, a unified hex topology, and an optimized S-DAQ algorithm for splitting overlapping signals. Additional discussion of bottlenecks and the major factors affecting optimization is presented.

  7. Automated knot detection with visual post-processing of Douglas-fir veneer images

    Treesearch

    C.L. Todoroki; Eini C. Lowell; Dennis Dykstra

    2010-01-01

    Knots on digital images of 51 full veneer sheets, obtained from nine peeler blocks crosscut from two 35-foot (10.7 m) long logs and one 18-foot (5.5 m) log from a single Douglas-fir tree, were detected using a two-phase algorithm. The algorithm was developed using one image, the Development Sheet, refined on five other images, the Training Sheets, and then applied to...

  8. Nuts and Bolts of CEST MR imaging

    PubMed Central

    Liu, Guanshu; Song, Xiaolei; Chan, Kannie W.Y.

    2013-01-01

    Chemical Exchange Saturation Transfer (CEST) has emerged as a novel MRI contrast mechanism that is well suited for molecular imaging studies. This new mechanism can be used to detect small amounts of contrast agent through saturation of rapidly exchanging protons on these agents, allowing a wide range of applications. CEST technology has a number of indispensable features, such as the possibility of simultaneous detection of multiple “colors” of agents and detecting changes in their environment (e.g. pH, metabolites, etc) through MR contrast. Currently a large number of new imaging schemes and techniques have been developed to improve the temporal resolution and specificity and to correct the influence of B0 and B1 inhomogeneities. In this review, the techniques developed over the last decade have been summarized with the different imaging strategies and post-processing methods discussed from a practical point of view including describing their relative merits for detecting CEST agents. The goal of the present work is to provide the reader with a fundamental understanding of the techniques developed, and to provide guidance to help refine future applications of this technology. This review is organized into three main sections: Basics of CEST Contrast, Implementation, Post-Processing, and also includes a brief Introduction section and Summary. The Basics of CEST Contrast section contains a description of the relevant background theory for saturation transfer and frequency labeled transfer, and a brief discussion of methods to determine exchange rates. The Implementation section contains a description of the practical considerations in conducting CEST MRI studies, including choice of magnetic field, pulse sequence, saturation pulse, imaging scheme, and strategies to separate MT and CEST. The Post-Processing section contains a description of the typical image processing employed for B0/B1 correction, Z-spectral interpolation, frequency selective detection, and improving CEST contrast maps. PMID:23303716
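
    As a toy illustration of the B0-correction and contrast-map steps mentioned above, the sketch below interpolates a single voxel's Z-spectrum, re-centres it on the estimated water frequency, and computes MTRasym at an assumed label offset of 3.5 ppm. The offsets, the synthetic spectrum, and the choice of the spectrum minimum as the B0 estimate are illustrative only.

        import numpy as np

        def b0_corrected_mtr_asym(offsets_ppm, z_spectrum, target_ppm=3.5):
            """Estimate the B0 shift from the Z-spectrum minimum, re-centre the
            spectrum by interpolation, and return MTRasym at the target offset."""
            fine = np.linspace(offsets_ppm.min(), offsets_ppm.max(), 2001)
            z_fine = np.interp(fine, offsets_ppm, z_spectrum)
            b0_shift = fine[np.argmin(z_fine)]            # water centre ~ spectrum minimum
            # shift the frequency axis so that water sits at 0 ppm
            z_corr = np.interp(fine, fine - b0_shift, z_fine)
            z_pos = np.interp(+target_ppm, fine, z_corr)  # label side (e.g. amide at ~3.5 ppm)
            z_neg = np.interp(-target_ppm, fine, z_corr)  # reference side
            return z_neg - z_pos                          # MTRasym = Z(-dw) - Z(+dw)

        # synthetic Z-spectrum: water dip shifted by +0.3 ppm plus a small CEST dip at 3.5 ppm
        offsets = np.linspace(-6, 6, 61)
        water = 1 - 0.9 * np.exp(-((offsets - 0.3) / 1.0) ** 2)
        cest = -0.05 * np.exp(-((offsets - (3.5 + 0.3)) / 0.5) ** 2)
        print(round(b0_corrected_mtr_asym(offsets, water + cest), 3))   # ~0.05 after correction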

  9. TH-CD-209-01: A Greedy Reassignment Algorithm for the PBS Minimum Monitor Unit Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Y; Kooy, H; Craft, D

    2016-06-15

    Purpose: To investigate a Greedy Reassignment algorithm in order to mitigate the effects of low weight spots in proton pencil beam scanning (PBS) treatment plans. Methods: To convert a plan from the treatment planning system (TPS) into a deliverable plan, post-processing methods can be used to adjust the spot maps to meet the minimum MU constraint. Existing methods include: deleting low weight spots (Cut method), or rounding spots with weight above/below half the limit up/down to the limit/zero (Round method). An alternative method called Greedy Reassignment was developed in this work in which the lowest weight spot in the field was removed and its weight reassigned equally among its nearest neighbors. The process was repeated with the next lowest weight spot until all spots in the field were above the MU constraint. The algorithm performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The evaluation criterion was the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was further developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. Results: For fields with a gamma pass rate of 90±1%, the metric has a standard deviation equal to 18% of the centroid value. This showed that the metric and γ-index pass rate are correlated for the Greedy Reassignment algorithm. Using a 3rd-order polynomial fit to the data, the Greedy Reassignment method had 1.8 times better metric at 90% pass rate compared to other post-processing methods. Conclusion: We showed that the Greedy Reassignment method yields deliverable plans that are closest to the optimized-without-MU-constraint plan from the TPS. The metric developed in this work could help design the minimum MU threshold with the goal of keeping the γ-index pass rate above an acceptable value.
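
    The Greedy Reassignment algorithm is described precisely enough to sketch: remove the lowest-weight spot, share its weight equally among its nearest neighbours, and repeat until every remaining spot meets the minimum MU. The snippet below follows that description; the neighbour definition (k nearest spots by Euclidean position) and the toy spot map are assumptions.

        import numpy as np

        def greedy_reassignment(positions, weights, min_mu, k_neighbors=4):
            """Repeatedly remove the lowest-weight spot below min_mu and share its
            weight equally among its k nearest surviving neighbours.

            positions: (n, 2) spot coordinates; weights: (n,) spot MU weights.
            Returns (positions, weights) of the surviving spots, all >= min_mu.
            """
            pos = np.array(positions, dtype=float)   # copies, so inputs are untouched
            w = np.array(weights, dtype=float)
            while w.size and w.min() < min_mu:
                i = int(np.argmin(w))                       # lowest-weight offending spot
                d = np.linalg.norm(pos - pos[i], axis=1)
                d[i] = np.inf                               # exclude the spot itself
                k = min(k_neighbors, w.size - 1)
                if k == 0:
                    return pos[:0], w[:0]                   # nothing left to receive the weight
                nbrs = np.argpartition(d, k - 1)[:k]        # indices of the k nearest spots
                w[nbrs] += w[i] / k                         # conserve total MU
                pos = np.delete(pos, i, axis=0)
                w = np.delete(w, i)
            return pos, w

        rng = np.random.default_rng(3)
        pos = rng.uniform(0, 50, size=(40, 2))              # mm
        w = rng.exponential(2.0, size=40)                   # MU, many small spots
        pos2, w2 = greedy_reassignment(pos, w, min_mu=1.0)
        # fewer spots remain, total MU is conserved, and every surviving spot is >= min_mu
        print(len(w2), round(w.sum(), 3) == round(w2.sum(), 3), bool(w2.min() >= 1.0))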

  10. New Navigation Post-Processing Tools for Oceanographic Submersibles

    NASA Astrophysics Data System (ADS)

    Kinsey, J. C.; Whitcomb, L. L.; Yoerger, D. R.; Howland, J. C.; Ferrini, V. L.; Hegrenas, O.

    2006-12-01

    We report the development of Navproc, a new set of software tools for post-processing oceanographic submersible navigation data that exploits previously reported improvements in navigation sensing and estimation (e.g. Eos Trans. AGU, 84(46), Fall Meet. Suppl., Abstract OS32A- 0225, 2003). The development of these tools is motivated by the need to have post-processing software that allows users to compensate for errors in vehicle navigation, recompute the vehicle position, and then save the results for use with quantitative science data (e.g. bathymetric sonar data) obtained during the mission. Navproc does not provide real-time navigation or display of data nor is it capable of high-resolution, three dimensional (3D) data display. Navproc supports the ASCII data formats employed by the vehicles of the National Deep Submergence Facility (NDSF) operated by the Woods Hole Oceanographic Institution (WHOI). Post-processing of navigation data with Navproc is comprised of three tasks. First, data is converted from the logged ASCII file to a binary Matlab file. When loaded into Matlab, each sensor has a data structure containing the time stamped data sampled at the native update rate of the sensor. An additional structure contains the real-time vehicle navigation data. Second, the data can be displayed using a Graphical User Interface (GUI), allowing users to visually inspect the quality of the data and graphically extract portions of the data. Third, users can compensate for errors in the real-time vehicle navigation. Corrections include: (i) manual filtering and median filtering of long baseline (LBL) ranges; (ii) estimation of the Doppler/gyro alignment using previously reported methodologies; and (iii) sound velocity, tide, and LBL transponder corrections. Using these corrections, the Doppler and LBL positions can be recomputed to provide improved estimates of the vehicle position compared to those computed in real-time. The data can be saved in either binary or ASCII formats, allowing it to be merged with quantitative scientific data, such as bathymetric data. Navproc is written in the Matlab programming language, and is supported under the Windows, Macintosh, and Unix operating systems. To date, Navproc has been employed for post processing data from the DSV Alvin Human Occupied Vehicle (HOV), the Jason II/Medea Remotely Operated Vehicle (ROV), and the ABE, Seabed, and Sentry Autonomous Underwater Vehicles (AUVs).
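
    Among the corrections listed above is median filtering of LBL ranges; a minimal sketch of that idea is shown below, flagging ranges that deviate from a running median by more than a fixed distance. The window length, rejection threshold, and synthetic range track are illustrative, not Navproc's actual settings.

        import numpy as np
        from scipy.signal import medfilt

        def flag_lbl_outliers(ranges_m, kernel=9, max_dev_m=15.0):
            """Flag LBL ranges that deviate from a running median by more than
            max_dev_m metres. Returns a boolean mask of samples to reject."""
            smooth = medfilt(ranges_m, kernel_size=kernel)
            return np.abs(ranges_m - smooth) > max_dev_m

        # synthetic range track with a few multipath-like spikes
        t = np.arange(200)
        ranges = 1500.0 + 0.5 * t + np.random.default_rng(4).normal(0, 2.0, t.size)
        ranges[[30, 31, 120]] += 400.0                      # bad acoustic returns
        print(np.flatnonzero(flag_lbl_outliers(ranges)))    # -> [ 30  31 120]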

  11. Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation

    NASA Astrophysics Data System (ADS)

    Ragni, Matteo

    There are Computer Algebra Systems (CAS) on the market with complete solutions for the manipulation of analytical models. However, exporting a model that implements specific algorithms on specific platforms, for target languages or for a particular numerical library, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library is aimed at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code-generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.

  12. An experimental seasonal hydrological forecasting system over the Yellow River basin – Part 2: The added value from climate forecast models

    DOE PAGES

    Yuan, Xing

    2016-06-22

    This is the second paper of a two-part series on introducing an experimental seasonal hydrological forecasting system over the Yellow River basin in northern China. While the natural hydrological predictability in terms of initial hydrological conditions (ICs) is investigated in a companion paper, the added value from eight North American Multimodel Ensemble (NMME) climate forecast models with a grand ensemble of 99 members is assessed in this paper, with an implicit consideration of human-induced uncertainty in the hydrological models through a post-processing procedure. The forecast skill in terms of anomaly correlation (AC) for 2 m air temperature and precipitation does not necessarily decrease over leads but is dependent on the target month due to a strong seasonality for the climate over the Yellow River basin. As there is more diversity in the model performance for the temperature forecasts than the precipitation forecasts, the grand NMME ensemble mean forecast has consistently higher skill than the best single model up to 6 months for the temperature but up to 2 months for the precipitation. The NMME climate predictions are downscaled to drive the variable infiltration capacity (VIC) land surface hydrological model and a global routing model regionalized over the Yellow River basin to produce forecasts of soil moisture, runoff and streamflow. And the NMME/VIC forecasts are compared with the Ensemble Streamflow Prediction method (ESP/VIC) through 6-month hindcast experiments for each calendar month during 1982–2010. As verified by the VIC offline simulations, the NMME/VIC is comparable to the ESP/VIC for the soil moisture forecasts, and the former has higher skill than the latter only for the forecasts at long leads and for those initialized in the rainy season. The forecast skill for runoff is lower for both forecast approaches, but the added value from NMME/VIC is more obvious, with an increase of the average AC by 0.08–0.2. To compare with the observed streamflow, both the hindcasts from NMME/VIC and ESP/VIC are post-processed through a linear regression model fitted by using VIC offline-simulated streamflow. The post-processed NMME/VIC reduces the root mean squared error (RMSE) from the post-processed ESP/VIC by 5–15 %. And the reduction occurs mostly during the transition from wet to dry seasons. As a result, with the consideration of the uncertainty in the hydrological models, the added value from climate forecast models is decreased especially at short leads, suggesting the necessity of improving the large-scale hydrological models in human-intervened river basins.
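
    The streamflow hindcasts above are post-processed with a linear regression fitted against VIC offline-simulated streamflow; the sketch below shows the general idea with synthetic data for a single location, calendar month and lead time. It is not the paper's exact formulation, and the numbers are hypothetical.

        import numpy as np

        def fit_linear_postprocessor(raw_hindcast_mean, target):
            """Least-squares fit of target = a + b * (raw ensemble-mean hindcast)."""
            b, a = np.polyfit(raw_hindcast_mean, target, deg=1)   # polyfit returns [slope, intercept]
            return a, b

        def apply_postprocessor(a, b, raw_forecast_ensemble):
            """Apply the fitted correction to each ensemble member."""
            return a + b * np.asarray(raw_forecast_ensemble)

        # synthetic hindcast period (e.g. 29 years for one calendar month and lead time)
        rng = np.random.default_rng(5)
        truth = rng.gamma(3.0, 200.0, size=29)                  # offline-simulated flow, m^3/s
        raw = 0.6 * truth + 150.0 + rng.normal(0, 40.0, 29)     # biased raw hindcast ensemble mean
        a, b = fit_linear_postprocessor(raw, truth)
        new_raw_members = 0.6 * 800.0 + 150.0 + rng.normal(0, 40.0, 99)   # 99-member raw forecast
        corrected = apply_postprocessor(a, b, new_raw_members)
        print(round(corrected.mean(), 1))    # close to the 800 m^3/s "truth" once the bias is removed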

  13. An experimental seasonal hydrological forecasting system over the Yellow River basin – Part 2: The added value from climate forecast models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Xing

    This is the second paper of a two-part series on introducing an experimental seasonal hydrological forecasting system over the Yellow River basin in northern China. While the natural hydrological predictability in terms of initial hydrological conditions (ICs) is investigated in a companion paper, the added value from eight North American Multimodel Ensemble (NMME) climate forecast models with a grand ensemble of 99 members is assessed in this paper, with an implicit consideration of human-induced uncertainty in the hydrological models through a post-processing procedure. The forecast skill in terms of anomaly correlation (AC) for 2 m air temperature and precipitation does not necessarily decrease over leads but is dependent on the target month due to a strong seasonality for the climate over the Yellow River basin. As there is more diversity in the model performance for the temperature forecasts than the precipitation forecasts, the grand NMME ensemble mean forecast has consistently higher skill than the best single model up to 6 months for the temperature but up to 2 months for the precipitation. The NMME climate predictions are downscaled to drive the variable infiltration capacity (VIC) land surface hydrological model and a global routing model regionalized over the Yellow River basin to produce forecasts of soil moisture, runoff and streamflow. And the NMME/VIC forecasts are compared with the Ensemble Streamflow Prediction method (ESP/VIC) through 6-month hindcast experiments for each calendar month during 1982–2010. As verified by the VIC offline simulations, the NMME/VIC is comparable to the ESP/VIC for the soil moisture forecasts, and the former has higher skill than the latter only for the forecasts at long leads and for those initialized in the rainy season. The forecast skill for runoff is lower for both forecast approaches, but the added value from NMME/VIC is more obvious, with an increase of the average AC by 0.08–0.2. To compare with the observed streamflow, both the hindcasts from NMME/VIC and ESP/VIC are post-processed through a linear regression model fitted by using VIC offline-simulated streamflow. The post-processed NMME/VIC reduces the root mean squared error (RMSE) from the post-processed ESP/VIC by 5–15 %. And the reduction occurs mostly during the transition from wet to dry seasons. As a result, with the consideration of the uncertainty in the hydrological models, the added value from climate forecast models is decreased especially at short leads, suggesting the necessity of improving the large-scale hydrological models in human-intervened river basins.

  14. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
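
    The Schaake Shuffle step, which connects the daily ensemble members across lead times, can be sketched compactly: calibrated values at each lead time are reordered to follow the rank structure of a set of historical trajectories. The sketch below shows only that rank-reordering step, with hypothetical arrays; the Bayesian joint probability calibration itself is not reproduced here.

        import numpy as np

        def schaake_shuffle(calibrated, historical):
            """Reorder calibrated ensemble values so that, at every lead time, their
            ranks match the ranks of a set of historical trajectories.

            calibrated, historical: arrays of shape (n_members, n_lead_times).
            Returns the shuffled calibrated ensemble with realistic temporal ordering.
            """
            calibrated = np.asarray(calibrated, float)
            historical = np.asarray(historical, float)
            out = np.empty_like(calibrated)
            for t in range(calibrated.shape[1]):
                sorted_cal = np.sort(calibrated[:, t])
                ranks = np.argsort(np.argsort(historical[:, t]))   # 0 = smallest at this lead time
                out[:, t] = sorted_cal[ranks]
            return out

        rng = np.random.default_rng(6)
        n_members, n_days = 5, 7
        hist = np.cumsum(rng.normal(0, 1, (n_members, n_days)), axis=1)   # temporally correlated
        cal = rng.gamma(2.0, 3.0, (n_members, n_days))                    # calibrated but unordered
        shuffled = schaake_shuffle(cal, hist)
        # marginal distributions at each day are unchanged; only the member ordering differs
        print(np.allclose(np.sort(cal, axis=0), np.sort(shuffled, axis=0)))   # True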

  15. Custom modular electromagnetic induction system for shallow electrical conductivity measurements

    NASA Astrophysics Data System (ADS)

    Mester, Achim; Zimmermann, Egon; Tan, Xihe; von Hebel, Christian; van der Kruk, Jan; van Waasen, Stefan

    2017-04-01

    Electromagnetic induction (EMI) is a contactless measurement method that offers fast and easy investigations of the shallow electrical conductivity, e.g. at the field scale. Available frequency domain EMI systems offer multiple fixed transmitter-receiver (Tx-Rx) pairs with Tx-Rx separations between 0.3 and 4.0 m and investigation depths of up to six meters. Here, we present our custom EMI system that consists of modular sensor units that can either be transmitters or receivers, and a backpack containing the data acquisition system. The prototype system is optimized for frequencies between 5 and 30 kHz and Tx-Rx separations between 0.4 and 2.0 m. Each Tx and Rx signal is digitized separately and stored on a notebook computer. The soil conductivity information is determined after the measurements with advanced digital processing of the data using optimized correction and calibration procedures. The system stores the raw data throughout the entire procedure, which offers many advantages: (1) comprehensive accuracy and error analysis as well as the reproducibility of corrections and calibration procedures; (2) easy customizability of the number of Tx-/Rx-units and their arrangement and frequencies; (3) signals from simultaneously working transmitters can be separated within the received data using orthogonal signals, resulting in additional Tx-Rx pairs and maximized soil information; and (4) later improvements in the post-processing algorithms can be applied to old data sets. As an example, we present an innovative setup with two transmitters and five receivers using orthogonal signals yielding ten Tx-Rx pairs. Note that orthogonal signals enable redundant Tx-Rx pairs that are useful for verification of the transmitter signals and for data stacking. In contrast to commercial systems, only adjustments in the post-processing were necessary to realize such measurement configurations with flexibly combined Tx and Rx modules. The presented system reaches an accuracy of up to 1 mS/m and was also evaluated by surface measurements with the sensor modules mounted to a sled and moved along a bare soil field transect. Measured data were calibrated for quantitative apparent electrical conductivity using reference data at certain calibration locations. Afterwards, data were inverted for electrical conductivity over depth using a multi-layer inversion showing conductivity distributions similar to the reference data.
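
    Feature (3), separating simultaneously transmitting units via orthogonal signals, can be illustrated with a toy lock-in-style projection in which each complex amplitude is recovered as a_k = <s_k, r>/<s_k, s_k>. The frequencies, record length, and amplitudes below are illustrative, not the instrument's actual settings.

        import numpy as np

        def demodulate(received, ref_signals):
            """Recover the complex amplitude of each orthogonal reference in the
            received record by projection:  a_k = <s_k, r> / <s_k, s_k>."""
            return np.array([np.vdot(s, received) / np.vdot(s, s) for s in ref_signals])

        fs = 200_000.0                        # sample rate, Hz
        t = np.arange(int(0.01 * fs)) / fs    # 10 ms record
        f1, f2 = 10_000.0, 17_000.0           # two Tx frequencies (integer cycles in the record)
        s1 = np.exp(2j * np.pi * f1 * t)      # complex references for lock-in style projection
        s2 = np.exp(2j * np.pi * f2 * t)

        # synthetic received signal: two transmitters plus noise
        rng = np.random.default_rng(7)
        rx = (0.8 * np.exp(1j * 0.3) * s1).real + (0.2 * np.exp(-1j * 1.1) * s2).real
        rx = rx + rng.normal(0, 0.05, t.size)

        a1, a2 = demodulate(rx, [s1, s2])
        print(round(2 * abs(a1), 3), round(2 * abs(a2), 3))   # ~0.8 and ~0.2 (factor 2: real signal)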

  16. Unequal-Arm Interferometry and Ranging in Space

    NASA Technical Reports Server (NTRS)

    Tinto, Massimo

    2005-01-01

    Space-borne interferometric gravitational wave detectors, sensitive in the low-frequency (millihertz) band, will fly in the next decade. In these detectors the spacecraft-to-spacecraft light-travel-times will necessarily be unequal, time-varying, and (due to aberration) have different time delays on up- and down-links. By using knowledge of the inter-spacecraft light-travel-times and their time evolution it is possible to cancel in post-processing the otherwise dominant laser phase noise and obtain a variety of interferometric data combinations sensitive to gravitational radiation. This technique, which has been named Time-Delay Interferometry (TDI), can be implemented with constellations of three or more formation-flying spacecraft that coherently track each other. As an example application we consider the Laser Interferometer Space Antenna (LISA) mission and show that TDI combinations can be synthesized by properly time-shifting and linearly combining the phase measurements performed on board the three spacecraft. Since TDI exactly suppresses the laser noises when the delays coincide with the light-travel-times, we then show that TDI can also be used for estimating the time-delays needed for its implementation. This is done by performing a post-processing non-linear minimization procedure, which provides an effective, powerful, and simple way for making measurements of the inter-spacecraft light-travel-times. This processing technique, named Time-Delay Interferometric Ranging (TDIR), is highly accurate in estimating the time-delays and allows TDI to be successfully implemented without the need for a dedicated ranging subsystem.
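
    A one-dimensional toy version of the TDIR idea is sketched below: laser noise cancels in an unequal-arm-Michelson-like combination only when the trial delays match the true ones, so the delays can be estimated by minimizing the residual power. The brute-force integer-sample search and the synthetic phase streams are simplifications of my own; the actual analysis uses fractional-delay interpolation and a non-linear minimizer.

        import numpy as np

        def x_combination(s1, s2, d1, d2):
            """Unequal-arm Michelson-style combination that cancels laser noise when
            d1, d2 equal the true round-trip delays (in samples)."""
            def shift(x, d):                   # delay by d samples (circular roll for simplicity)
                return np.roll(x, d)
            return (s1 - shift(s1, d2)) - (s2 - shift(s2, d1))

        # synthesize two phase-meter streams dominated by common laser noise p(t)
        rng = np.random.default_rng(8)
        n = 20000
        p = np.cumsum(rng.normal(0, 1.0, n))          # slowly varying laser phase noise
        true_d1, true_d2 = 173, 241                   # round-trip delays in samples
        s1 = np.roll(p, true_d1) - p + rng.normal(0, 0.01, n)   # arm-1 beat note
        s2 = np.roll(p, true_d2) - p + rng.normal(0, 0.01, n)   # arm-2 beat note

        # TDIR: brute-force search for the delays that minimise the residual power
        best = min(((np.var(x_combination(s1, s2, d1, d2)), d1, d2)
                    for d1 in range(150, 200) for d2 in range(220, 260)), key=lambda r: r[0])
        print(best[1], best[2])                       # -> 173 241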

  17. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

    This dissertation explores the efficacy of statistical post-processing methods downstream of dynamical model components using a hierarchical multivariate Bayesian approach to ensemble model output statistics. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.

  18. Optimization of IVF pregnancy outcomes with donor spermatozoa.

    PubMed

    Wang, Jeff G; Douglas, Nataki C; Prosser, Robert; Kort, Daniel; Choi, Janet M; Sauer, Mark V

    2009-03-01

    To identify risk factors for suboptimal IVF outcomes using insemination with donor spermatozoa and to define a lower threshold that may signal a conversion to fertilization by ICSI rather than insemination. Retrospective, age-matched, case-control study of women undergoing non-donor oocyte IVF cycles using either freshly ejaculated (N=138) or cryopreserved donor spermatozoa (N=69). Associations between method of fertilization, semen sample parameters, and pregnancy rates were analyzed. In vitro fertilization of oocytes with donor spermatozoa by insemination results in equivalent fertilization and pregnancy rates compared to those of freshly ejaculated spermatozoa from men with normal semen analyses when the post-processing motility is greater than or equal to 88%. IVF by insemination with donor spermatozoa when the post-processing motility is less than 88% is associated with a 5-fold reduction in pregnancy rates when compared to those of donor spermatozoa above this motility threshold. When the post-processing donor spermatozoa motility is low, fertilization by ICSI is associated with significantly higher pregnancy rates compared to those of insemination. While ICSI does not need to be categorically instituted when using donor spermatozoa in IVF, patients should be counseled that conversion from insemination to ICSI may be recommended based on low post-processing motility.

  19. Developing a semi/automated protocol to post-process large volume, High-resolution airborne thermal infrared (TIR) imagery for urban waste heat mapping

    NASA Astrophysics Data System (ADS)

    Rahman, Mir Mustafizur

    In collaboration with The City of Calgary 2011 Sustainability Direction and as part of the HEAT (Heat Energy Assessment Technologies) project, the focus of this research is to develop a semi/automated 'protocol' to post-process large volumes of high-resolution (H-res) airborne thermal infrared (TIR) imagery to enable accurate urban waste heat mapping. HEAT is a free GeoWeb service, designed to help Calgary residents improve their home energy efficiency by visualizing the amount and location of waste heat leaving their homes and communities, as easily as clicking on their house in Google Maps. HEAT metrics are derived from 43 flight lines of TABI-1800 (Thermal Airborne Broadband Imager) data acquired on May 13-14, 2012 at night (11:00 pm-5:00 am) over The City of Calgary, Alberta (~825 km²) at a 50 cm spatial resolution and 0.05°C thermal resolution. At present, the only way to generate a large area, high-spatial resolution TIR scene is to acquire separate airborne flight lines and mosaic them together. However, the ambient sensed temperature within and between flight lines naturally changes during acquisition (due to varying atmospheric and local micro-climate conditions), resulting in mosaicked images with different temperatures for the same scene components (e.g. roads, buildings), and mosaic join-lines that arbitrarily bisect many thousands of homes. In combination these effects result in reduced utility and classification accuracy, including poorly defined HEAT metrics, inaccurate hotspot detection, and raw imagery that is difficult to interpret. In an effort to minimize these effects, three new semi/automated post-processing algorithms (the protocol) are described, which are then used to generate a 43 flight line mosaic of TABI-1800 data from which accurate Calgary waste heat maps and HEAT metrics can be generated. These algorithms (presented as four peer-reviewed papers) are: (a) Thermal Urban Road Normalization (TURN), used to mitigate the microclimatic variability within a thermal flight line based on varying road temperatures; (b) Automated Polynomial Relative Radiometric Normalization (RRN), which mitigates the between flight line radiometric variability; and (c) Object Based Mosaicking (OBM), which minimizes the geometric distortion along the mosaic edge between each flight line. A modified Emissivity Modulation technique is also described to correct H-res TIR images for emissivity. This combined radiometric and geometric post-processing protocol (i) increases the visual agreement between TABI-1800 flight lines, (ii) improves radiometric agreement within/between flight lines, (iii) produces a visually seamless mosaic, (iv) improves hot-spot detection and landcover classification accuracy, and (v) provides accurate data for thermal-based HEAT energy models. Keywords: Thermal Infrared, Post-Processing, High Spatial Resolution, Airborne, Thermal Urban Road Normalization (TURN), Relative Radiometric Normalization (RRN), Object Based Mosaicking (OBM), TABI-1800, HEAT, and Automation.
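
    As a toy illustration of the relative radiometric normalization (RRN) idea described above, the sketch below fits a polynomial mapping from a subject flight line to a reference line over their overlap and applies it to the whole subject line. The polynomial order, the use of raw overlap pixels (rather than the pseudo-invariant road targets used by TURN), and the synthetic temperatures are assumptions, not the thesis's actual implementation.

        import numpy as np

        def relative_radiometric_normalization(ref_overlap, subj_overlap, subj_full, order=2):
            """Fit subj -> ref with a polynomial over the overlap region and apply it
            to the whole subject flight line (temperatures in deg C)."""
            coeffs = np.polyfit(subj_overlap.ravel(), ref_overlap.ravel(), deg=order)
            return np.polyval(coeffs, subj_full)

        # synthetic example: the subject line reads systematically warm and slightly non-linear
        rng = np.random.default_rng(9)
        truth = rng.uniform(5, 25, size=(200, 200))                  # "true" surface temperatures
        ref_line = truth + rng.normal(0, 0.05, truth.shape)
        subj_line = 1.8 + 0.95 * truth + 0.002 * truth**2 + rng.normal(0, 0.05, truth.shape)
        overlap = np.s_[:, :40]                                      # shared swath between lines
        normalized = relative_radiometric_normalization(ref_line[overlap], subj_line[overlap], subj_line)
        print(round(float(np.mean(np.abs(normalized - ref_line))), 3))   # residual ~ sensor noise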

  20. Eleventh NASTRAN User's Colloquium

    NASA Technical Reports Server (NTRS)

    1983-01-01

    NASTRAN (NASA STRUCTURAL ANALYSIS) is a large, comprehensive, nonproprietary, general purpose finite element computer code for structural analysis which was developed under NASA sponsorship. The Eleventh Colloquium provides some comprehensive general papers on the application of finite element methods in engineering, comparisons with other approaches, unique applications, pre- and post-processing or auxiliary programs, and new methods of analysis with NASTRAN.

  1. Long-Baseline Comparisons of the Brazilian National Time Scale to UTC (NIST) Using Near Real-Time and Postprocessed Solutions

    DTIC Science & Technology

    2007-11-01

    The Brazilian national time scale is compared to UTC(NIST) over a long baseline of ~8600 km. The comparisons were made with measurement systems developed for the Sistema Interamericano de Metrologia (SIM), a regional metrology organization, and both near real-time and post-processed solutions are compared and summarized.

  2. Spot restoration for GPR image post-processing

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
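
    The final step described above, identifying peaks in the energy of the post-processed image frames, can be sketched with a simple local-maximum detector; the window size, threshold rule, and synthetic energy frame below are illustrative and not taken from the patent.

        import numpy as np
        from scipy.ndimage import maximum_filter

        def detect_energy_peaks(energy, window=7, threshold=None):
            """Return (row, col) indices of local maxima in an energy image that are
            also above a global threshold (default: mean + 4 standard deviations)."""
            if threshold is None:
                threshold = energy.mean() + 4.0 * energy.std()
            local_max = energy == maximum_filter(energy, size=window)
            return np.argwhere(local_max & (energy > threshold))

        # synthetic energy frame with two buried-object-like responses
        rng = np.random.default_rng(10)
        frame = rng.random((120, 120))
        yy, xx = np.mgrid[0:120, 0:120]
        for (cy, cx) in [(40, 30), (85, 90)]:
            frame += 8.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20.0)
        print(detect_energy_peaks(frame))          # approximately [[40 30], [85 90]]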

  3. Density implications of shift compensation postprocessing in holographic storage systems

    NASA Astrophysics Data System (ADS)

    Menetrier, Laure; Burr, Geoffrey W.

    2003-02-01

    We investigate the effect of data page misregistration, and its subsequent correction in postprocessing, on the storage density of holographic data storage systems. A numerical simulation is used to obtain the bit-error rate as a function of hologram aperture, page misregistration, pixel fill factors, and Gaussian additive intensity noise. Postprocessing of simulated data pages is performed by a nonlinear pixel shift compensation algorithm [Opt. Lett. 26, 542 (2001)]. The performance of this algorithm is analyzed in the presence of noise by determining the achievable areal density. The impact of inaccurate measurements of page misregistration is also investigated. Results show that the shift-compensation algorithm can provide almost complete immunity to page misregistration, although at some penalty to the baseline areal density offered by a system with zero tolerance to misalignment.

  4. Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples

    NASA Astrophysics Data System (ADS)

    Scott, Pat

    2012-11-01

    Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.

  5. An Early-Warning System for Volcanic Ash Dispersal: The MAFALDA Procedure

    NASA Astrophysics Data System (ADS)

    Barsotti, S.; Nannipieri, L.; Neri, A.

    2006-12-01

    Forecasting the dispersal of volcanic ash is a fundamental goal for mitigating its potential impact on urbanized areas and transport routes surrounding explosive volcanoes. To this aim we developed an early-warning procedure named MAFALDA (Modeling And Forecasting Ash Loading and Dispersal in the Atmosphere). The tool quantitatively forecasts the atmospheric concentration of ash as well as the ground deposition as a function of time over a 3D spatial domain. The main features of MAFALDA are: (1) the use of the hybrid Lagrangian-Eulerian code VOL-CALPUFF, able to describe both the rising column phase and the atmospheric dispersal as a function of weather conditions, (2) the use of high-resolution weather forecasting data, (3) the short execution time, which allows a set of scenarios to be analysed, and (4) the web-based CGI software application (written in the Perl programming language) that shows the results in a standard graphical web interface and makes it suitable as an early-warning system during volcanic crises. MAFALDA is composed of a computational part that simulates the ash cloud dynamics and a graphical interface for visualizing the modelling results. The computational part includes the codes for elaborating the meteorological data, the dispersal code and the post-processing programs. These produce hourly 2D maps of aerial ash concentration at several vertical levels, the extension of the "threat" area in the air and 2D maps of ash deposit on the ground, in addition to graphs of hourly variations of column height. The processed results are available on the web through the graphical interface, and users can choose, via a drop-down menu, which data to visualize. A first partial application of the procedure has been carried out for Mt. Etna (Italy). In this case, the procedure simulates four volcanological scenarios characterized by different plume intensities and uses 48-hr weather forecasting data with a resolution of 7 km provided by the Italian Air Force.

  6. Dental scanning in CAD/CAM technologies: laser beams

    NASA Astrophysics Data System (ADS)

    Sinescu, Cosmin; Negrutiu, Meda; Faur, Nicolae; Negru, Radu; Romînu, Mihai; Cozarov, Dalibor

    2008-02-01

    Scanning, also called digitizing, is the process of gathering the requisite data from an object. Many different technologies are used to collect three-dimensional data, ranging from mechanical and very slow to radiation-based and highly automated. Each technology has its advantages and disadvantages, and their applications and specifications overlap. The aims of this study are to establish a viable method of digitally representing dental cast artifacts, to propose a suitable scanner and post-processing software, and to obtain 3D models for dental applications. The method consists of scanning procedures performed with different scanners on the materials involved. Scanners are the medium of data capture: 3D scanners measure and record the relative distance between the object's surface and a known point in space, and this geometric data is represented as a point cloud. Both contact and non-contact scanners were evaluated. The results show that contact scanning procedures use a touch probe to record the relative position of points on the object's surface; this approach is commonly used in reverse engineering applications. Its merit is efficiency for objects with low geometric surface detail; its disadvantage is that it is time-consuming, making it impractical for artifact digitization. Non-contact scanning comprises laser scanning (laser triangulation technology) and photogrammetry. It can be concluded that different types of dental structure need different types of scanning procedure in order to obtain a competitive, complex 3D virtual model that can be used in CAD/CAM technologies.

  7. Impact of post-processing methods on apparent diffusion coefficient values.

    PubMed

    Zeilinger, Martin Georg; Lell, Michael; Baltzer, Pascal Andreas Thomas; Dörfler, Arnd; Uder, Michael; Dietzel, Matthias

    2017-03-01

    The apparent diffusion coefficient (ADC) is increasingly used as a quantitative biomarker in oncological imaging. ADC calculation is based on raw diffusion-weighted imaging (DWI) data, and multiple post-processing methods (PPMs) have been proposed for this purpose. We investigated whether the PPM has an impact on final ADC values. Sixty-five lesions scanned with a standardized whole-body DWI protocol at 3 T served as input data (EPI-DWI, b-values: 50, 400 and 800 s/mm²). Using exactly the same ROI coordinates, four different PPMs (ADC_1-ADC_4) were executed to calculate the corresponding ADC value, given in 10⁻³ mm²/s, of each lesion. Statistical analysis was performed to intra-individually compare ADC values stratified by PPM (Wilcoxon signed-rank tests: α = 1 %; descriptive statistics; relative difference/∆; coefficient of variation/CV). Stratified by PPM, mean ADCs ranged from 1.136 to 1.206 × 10⁻³ mm²/s (∆ = 7.0 %). Variances between PPMs were pronounced in the upper range of ADC values (maximum: 2.540-2.763 × 10⁻³ mm²/s, ∆ = 8 %). Pairwise comparisons identified significant differences between all PPMs (P ≤ 0.003; mean CV = 7.2 %) and reached 0.137 × 10⁻³ mm²/s within the 25th-75th percentile. Altering the PPM had a significant impact on the ADC value. This should be considered if ADC values from different post-processing methods are compared in patient studies. • Post-processing methods significantly influenced ADC values. • The mean coefficient of ADC variation due to PPM was 7.2 %. • To achieve reproducible ADC values, standardization of post-processing is recommended.
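    For orientation, the sketch below shows the standard mono-exponential ADC calculation from the protocol's three b-values; it is a minimal illustration of the common definition (S(b) = S0·exp(-b·ADC)), not a reproduction of any of the four PPMs compared in the study, and the signal values are hypothetical.

        import numpy as np

        b_values = np.array([50.0, 400.0, 800.0])      # s/mm^2, as in the study protocol

        def adc_loglinear(signals, b=b_values):
            """Mono-exponential ADC from a least-squares fit of ln(S) against b:
            S(b) = S0 * exp(-b * ADC)  =>  ln S = ln S0 - b * ADC."""
            slope, _intercept = np.polyfit(b, np.log(signals), 1)
            return -slope                              # ADC in mm^2/s

        roi_signals = np.array([980.0, 640.0, 410.0])  # hypothetical mean ROI signals
        print(adc_loglinear(roi_signals))              # approx. 1.2e-3 mm^2/s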

  8. Spatially assisted down-track median filter for GPR image post-processing

    DOEpatents

    Paglieroni, David W; Beer, N Reginald

    2014-10-07

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
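    Although the spatial-assistance logic of this particular filter is not spelled out in the abstract, a plain down-track median filter over a GPR B-scan is easy to sketch; the kernel length and the background-subtraction step below are illustrative assumptions.

        import numpy as np
        from scipy.signal import medfilt

        def down_track_median(bscan, kernel=9):
            """Median-filter each depth row along the down-track axis and subtract
            the result, suppressing slowly varying background while keeping
            localized returns."""
            background = np.vstack([medfilt(row, kernel_size=kernel) for row in bscan])
            return bscan - background

        bscan = np.random.randn(128, 512)      # depth x along-track samples (stand-in)
        filtered = down_track_median(bscan)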

  9. Regional distribution of forest height and biomass from multisensor data fusion

    Treesearch

    Yifan Yu; Sassan Saatch; Linda S. Heath; Elizabeth LaPoint; Ranga Myneni; Yuri Knyazikhin

    2010-01-01

    Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM...

  10. Towards flexible asymmetric MSM structures using Si microwires through contact printing

    NASA Astrophysics Data System (ADS)

    Khan, S.; Lorenzelli, L.; Dahiya, R.

    2017-08-01

    This paper presents the development of flexible metal-semiconductor-metal devices using silicon (Si) microwires. Monocrystalline Si microwires, fabricated through standard photolithography and etching, are used. These microwires are assembled on secondary flexible substrates through dry transfer printing using a polydimethylsiloxane stamp. The conductive patterns on the Si microwires are printed using a colloidal silver nanoparticle-based solution and an organic conductor, i.e. poly(3,4-ethylenedioxythiophene) doped with poly(styrene sulfonate). A custom-developed spray coating technique is used for the conductive patterns on the Si microwires. A comparative study of the current-voltage (I-V) responses is carried out in flat and bent orientations, and the response of the wires to light illumination is also explored. Current variations as high as 17.1 μA are recorded going from flat to bent conditions, while the highest I on/I off ratio, i.e. 43.8, is achieved under light illumination. The abrupt changes in the current response under light-on/off conditions validate these devices as fast flexible photodetector switches. These devices are also evaluated with respect to the transfer procedure, i.e. flip-over and stamp-assisted transfer printing, for manipulating Si microwires and their subsequent post-processing. These developments were made to identify the most feasible approach for transfer printing of Si microwires and to harness their capabilities, such as photodetection and several other applications, in the form of metal-semiconductor-metal structures.

  11. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of the hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.

  12. Force measurement-based discontinuity detection during friction stir welding

    DOE PAGES

    Shrivastava, Amber; Zinn, Michael; Duffie, Neil A.; ...

    2017-02-23

    Here, the objective of this work is to develop a method for detecting the creation of discontinuities (i.e., voids, volume defects) during friction stir welding. Friction stir welding is inherently cost effective; however, the need for significant weld inspection can make the process cost prohibitive. A new approach to weld inspection is required in which an in situ characterization of weld quality can be obtained, reducing the need for postprocess inspection. To this end, friction stir welds with subsurface voids and without voids were created. The subsurface voids were generated by reducing the friction stir tool rotation frequency and increasing the tool traverse speed in order to create “colder” welds. Process forces were measured during welding, and the void sizes were measured postprocess by computerized tomography (i.e., 3D X-ray imaging). Two parameters, based on frequency domain content and time-domain average of the force signals, were found to be correlated with void size. Criteria for subsurface void detection and size prediction were developed and shown to be in good agreement with experimental observations. Furthermore, with the proper choice of data acquisition system and frequency analyzer the occurrence of subsurface voids can be detected in real time.
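    As a rough illustration of the two force-signal parameters mentioned (a frequency-domain content measure and a time-domain average), the sketch below computes both from a sampled axial-force trace; the sampling rate, band of interest and signal are assumptions, since the abstract does not specify them.

        import numpy as np

        def force_features(force, fs, band=(10.0, 50.0)):
            """Time-domain mean of the force trace and the spectral energy inside
            an assumed frequency band of interest."""
            mean_force = force.mean()
            spectrum = np.abs(np.fft.rfft(force - mean_force)) ** 2
            freqs = np.fft.rfftfreq(force.size, d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            return mean_force, spectrum[in_band].sum()

        fs = 1000.0                                            # Hz, assumed sampling rate
        force = 5000.0 + 50.0 * np.random.randn(10 * int(fs))  # stand-in force trace, N
        print(force_features(force, fs))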

  13. Force measurement-based discontinuity detection during friction stir welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Amber; Zinn, Michael; Duffie, Neil A.

    Here, the objective of this work is to develop a method for detecting the creation of discontinuities (i.e., voids, volume defects) during friction stir welding. Friction stir welding is inherently cost effective; however, the need for significant weld inspection can make the process cost prohibitive. A new approach to weld inspection is required in which an in situ characterization of weld quality can be obtained, reducing the need for postprocess inspection. To this end, friction stir welds with subsurface voids and without voids were created. The subsurface voids were generated by reducing the friction stir tool rotation frequency and increasing the tool traverse speed in order to create “colder” welds. Process forces were measured during welding, and the void sizes were measured postprocess by computerized tomography (i.e., 3D X-ray imaging). Two parameters, based on frequency domain content and time-domain average of the force signals, were found to be correlated with void size. Criteria for subsurface void detection and size prediction were developed and shown to be in good agreement with experimental observations. Furthermore, with the proper choice of data acquisition system and frequency analyzer the occurrence of subsurface voids can be detected in real time.

  14. Data processing has major impact on the outcome of quantitative label-free LC-MS analysis.

    PubMed

    Chawade, Aakash; Sandin, Marianne; Teleman, Johan; Malmström, Johan; Levander, Fredrik

    2015-02-06

    High-throughput multiplexed protein quantification using mass spectrometry is steadily increasing in popularity, with the two major techniques being data-dependent acquisition (DDA) and targeted acquisition using selected reaction monitoring (SRM). However, both techniques involve extensive data processing, which can be performed by a multitude of different software solutions. Analysis of quantitative LC-MS/MS data is mainly performed in three major steps: processing of raw data, normalization, and statistical analysis. To evaluate the impact of data processing steps, we developed two new benchmark data sets, one each for DDA and SRM, with samples consisting of a long-range dilution series of synthetic peptides spiked in a total cell protein digest. The generated data were processed by eight different software workflows and three postprocessing steps. The results show that the choice of the raw data processing software and the postprocessing steps play an important role in the final outcome. Also, the linear dynamic range of the DDA data could be extended by an order of magnitude through feature alignment and a charge state merging algorithm proposed here. Furthermore, the benchmark data sets are made publicly available for further benchmarking and software developments.

  15. Simulations for Improved Imaging of Faint Objects at Maui Space Surveillance Site

    NASA Astrophysics Data System (ADS)

    Holmes, R.; Roggemann, M.; Werth, M.; Lucas, J.; Thompson, D.

    A detailed wave-optics simulation is used in conjunction with advanced post-processing algorithms to explore the trade space between image post-processing and adaptive optics for improved imaging of low signal-to-noise ratio (SNR) targets. Target-based guidestars are required for imaging of most active Earth-orbiting satellites because of restrictions on using laser-backscatter-based guidestars in the direction of such objects. With such target-based guidestars and Maui conditions, it is found that significant reductions in adaptive optics actuator and subaperture density can result in improved imaging of fainter objects. Simulation indicates that elimination of adaptive optics produces sub-optimal results for all of the faint-object cases considered. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

  16. Global gene expression analysis by combinatorial optimization.

    PubMed

    Ameur, Adam; Aurell, Erik; Carlsson, Mats; Westholm, Jakub Orzechowski

    2004-01-01

    Generally, there is a trade-off between methods of gene expression analysis that are precise but labor-intensive, e.g. RT-PCR, and methods that scale up to global coverage but are not quite as quantitative, e.g. microarrays. In the present paper, we show how a known method of gene expression profiling (K. Kato, Nucleic Acids Res. 23, 3685-3690 (1995)), which relies on a fairly small number of steps, can be turned into a global gene expression measurement by advanced data post-processing, with potentially little loss of accuracy. Post-processing here entails solving an ancillary combinatorial optimization problem. Validation is performed on in silico experiments generated from the FANTOM data base of full-length mouse cDNA. We present two variants of the method. One uses state-of-the-art commercial software for solving problems of this kind, the other a code developed by us specifically for this purpose, released in the public domain under GPL license.

  17. A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

    PubMed Central

    Gao, Bo-Cai; Liu, Ming

    2013-01-01

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
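    A minimal sketch of the gain-curve idea follows, assuming hypothetical arrays of retrieved spectra and a generic cubic smoothing spline; the exact moving-filter construction used by the authors is not reproduced here.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def common_gain_curve(wavelengths, spectra, smooth=1e-3):
            """Spline-smooth each retrieved reflectance spectrum, form the
            smooth/raw ratio, and average the ratios into one gain curve."""
            ratios = []
            for spec in spectra:
                fit = UnivariateSpline(wavelengths, spec, s=smooth)(wavelengths)
                ratios.append(fit / spec)
            return np.mean(ratios, axis=0)

        wl = np.linspace(400.0, 2500.0, 211)                       # nm
        cube = np.abs(np.random.normal(0.3, 0.02, (50, wl.size)))  # stand-in spectra
        gain = common_gain_curve(wl, cube)
        smoothed = cube * gain                                     # applied scene-wide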

  18. FACE-IT. A Science Gateway for Food Security Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montella, Raffaele; Kelly, David; Xiong, Wei

    Progress in sustainability science is hindered by challenges in creating and managing complex data acquisition, processing, simulation, post-processing, and intercomparison pipelines. To address these challenges, we developed the Framework to Advance Climate, Economic, and Impact Investigations with Information Technology (FACE-IT) for crop and climate impact assessments. This integrated data processing and simulation framework enables data ingest from geospatial archives; data regridding, aggregation, and other processing prior to simulation; large-scale climate impact simulations with agricultural and other models, leveraging high-performance and cloud computing; and post-processing to produce aggregated yields and ensemble variables needed for statistics, for model intercomparison, and to connect biophysical models to global and regional economic models. FACE-IT leverages the capabilities of the Globus Galaxies platform to enable the capture of workflows and outputs in well-defined, reusable, and comparable forms. We describe FACE-IT and applications within the Agricultural Model Intercomparison and Improvement Project and the Center for Robust Decision-making on Climate and Energy Policy.

  19. POST-PROCESSING ANALYSIS FOR THC SEEPAGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. SUN

    This report describes the selection of water compositions for the total system performance assessment (TSPA) model of results from the thermal-hydrological-chemical (THC) seepage model documented in "Drift-Scale THC Seepage Model" (BSC 2004 [DIRS 169856]). The selection has been conducted in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2004 [DIRS 171334]). This technical work plan (TWP) was prepared in accordance with AP-2.27Q, "Planning for Science Activities". Section 1.2.3 of the TWP describes planning information pertaining to the technical scope, content, and management of this report. The post-processing analysis for THC seepage (THC-PPA) documented in this report provides a methodology for evaluating the near-field compositions of water and gas around a typical waste emplacement drift as these relate to the chemistry of seepage, if any, into the drift. The THC-PPA inherits the conceptual basis of the THC seepage model, but is an independently developed process. The relationship between the post-processing analysis and other closely related models, together with their main functions in providing seepage chemistry information for the Total System Performance Assessment for the License Application (TSPA-LA), is illustrated in Figure 1-1. The THC-PPA provides a data selection concept and direct input to the physical and chemical environment (P&CE) report that supports the TSPA model. The purpose of the THC-PPA is further discussed in Section 1.2. The data selection methodology of the post-processing analysis (Section 6.2.1) was initially applied to results of the THC seepage model as presented in "Drift-Scale THC Seepage Model" (BSC 2004 [DIRS 169856]). Other outputs from the THC seepage model (DTN: LB0302DSCPTHCS.002 [DIRS 161976]) used in the P&CE (BSC 2004 [DIRS 169860], Section 6.6) were also subjected to the same initial selection. The present report serves as a full documentation of this selection and also provides additional analyses in support of the choice of waters selected for further evaluation in "Engineered Barrier System: Physical and Chemical Environment" (BSC 2004 [DIRS 169860], Section 6.6). The work scope for the studies presented in this report is described in the TWP (BSC 2004 [DIRS 171334]) and other documents cited above and can be used to estimate water and gas compositions near waste emplacement drifts. Results presented in this report were submitted to the Technical Data Management System (TDMS) under specific data tracking numbers (DTNs) as listed in Appendix A. The major change from previous selection of results from the THC seepage model is that the THC-PPA now considers data selection in space around the modeled waste emplacement drift, tracking the evolution of pore-water and gas-phase composition at the edge of the dryout zone around the drift. This post-processing analysis provides a scientific background for the selection of potential seepage water compositions.

  20. Droplet-Based Segregation and Extraction of Concentrated Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buie, C R; Buckley, P; Hamilton, J

    2007-02-23

    Microfluidic analysis often requires sample concentration and separation techniques to isolate and detect analytes of interest. Complex or scarce samples may also require an orthogonal separation and detection method or off-chip analysis to confirm results. To perform these additional steps, the concentrated sample plug must be extracted from the primary microfluidic channel with minimal sample loss and dilution. We investigated two extraction techniques: injection of immiscible fluid droplets into the sample stream ("capping") and injection of the sample into an immiscible fluid stream ("extraction"). From our results we conclude that capping is the more effective partitioning technique. Furthermore, this functionality enables additional off-chip post-processing procedures such as DNA/RNA microarray analysis, real-time polymerase chain reaction (RT-PCR), and culture growth to validate chip performance.

  1. Production of near-full density uranium nitride microspheres with a hot isostatic press

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMurray, Jacob W.; Kiggans, Jr., Jim O.; Helmreich, Grant W.

    Depleted uranium nitride (UN) kernels with diameters ranging from 420 to 858 microns and theoretical densities (TD) between 87 and 91 percent were postprocessed using a hot isostatic press (HIP) in an argon gas medium. This treatment was shown to increase the TD to above 97%. Uranium nitride is highly reactive with oxygen. Therefore, a novel crucible design was implemented to remove impurities in the argon gas via in situ gettering to avoid oxidation of the UN kernels. The density before and after each HIP procedure was calculated from the average weight, volume, and ellipticity determined with established particle characterization techniques. Furthermore, micrographs confirmed the nearly full densification of the particles using the gettering approach and HIP processing parameters investigated in this work.

  2. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
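    The paper's parallel function and web tool are not reproduced here; the sketch below only illustrates the general idea of screening many contingency output records in parallel for post-contingency limit violations, with hypothetical record contents and worker count.

        from multiprocessing import Pool

        def screen_contingency(record):
            """Flag one contingency's output for branch-flow limit violations."""
            name, flows, limits = record
            overloads = [i for i, (f, lim) in enumerate(zip(flows, limits)) if abs(f) > lim]
            return name, overloads

        def parallel_postprocess(records, workers=4):
            with Pool(workers) as pool:
                return [r for r in pool.map(screen_contingency, records) if r[1]]

        if __name__ == "__main__":
            # Hypothetical contingency records: (id, branch flows in MW, branch limits in MW)
            records = [("ctg-%d" % i, [90.0, 120.0 + i % 10], [100.0, 125.0]) for i in range(1000)]
            print(len(parallel_postprocess(records)))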

  3. Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring

    PubMed Central

    Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu

    2013-01-01

    Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
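    The corrected ISID criterion itself is not specified in the abstract; the sketch below only computes a plain inter-slice intensity discontinuity score per slice of a diffusion-weighted volume, which conveys the underlying idea (the volume shape and scoring rule are assumptions).

        import numpy as np

        def slice_discontinuity(volume):
            """Mean absolute difference between each slice and the average of its
            two neighbours; large values suggest motion-corrupted slices."""
            scores = np.zeros(volume.shape[2])
            for k in range(1, volume.shape[2] - 1):
                neighbour_mean = 0.5 * (volume[:, :, k - 1] + volume[:, :, k + 1])
                scores[k] = np.abs(volume[:, :, k] - neighbour_mean).mean()
            return scores

        dwi = np.random.rand(96, 96, 40)                     # stand-in DWI volume
        suspect = np.argsort(slice_discontinuity(dwi))[-3:]  # most discontinuous slices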

  4. Improving Robustness of Hydrologic Ensemble Predictions Through Probabilistic Pre- and Post-Processing in Sequential Data Assimilation

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.

    2018-03-01

    Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.

  5. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.

    PubMed

    Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron

    2017-01-01

    Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  6. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank

    2017-04-01

    Having operated for more than a decade, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and also of their uncertainties. Through years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to provide TWS at high spatial and temporal scales, there is no global ground truth data available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct the bias and leakage. Time-variant rescaling factors, i.e. monthly scaling factors and separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied for planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Zhang L, Dobslaw H, Thomas M (2016) Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophysical Journal International 206(1):368-378, DOI 10.1093/gji/ggw153.
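    A common way to derive such a rescaling factor is a least-squares gain between unfiltered and filtered model TWS at a grid cell or basin; the sketch below illustrates that generic recipe with synthetic series and is not the specific estimator used for the GFZ products.

        import numpy as np

        def rescaling_factor(model_tws, model_tws_filtered):
            """Least-squares gain k minimizing ||model_tws - k * model_tws_filtered||;
            k is then applied to the filtered GRACE estimate at the same location."""
            return np.sum(model_tws * model_tws_filtered) / np.sum(model_tws_filtered ** 2)

        months = np.arange(120)
        truth = 10.0 * np.sin(2.0 * np.pi * months / 12.0)                # model TWS anomaly, cm
        filtered = 0.6 * truth + np.random.normal(0.0, 0.5, months.size)  # damped by smoothing
        k = rescaling_factor(truth, filtered)                             # roughly 1 / 0.6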

  7. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented, which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE horizontal and 5.5 cm RMSE vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.

  8. Intraoperative laser speckle contrast imaging for monitoring cerebral blood flow: results from a 10-patient pilot study

    NASA Astrophysics Data System (ADS)

    Richards, Lisa M.; Weber, Erica L.; Parthasarathy, Ashwin B.; Kappeler, Kaelyn L.; Fox, Douglas J.; Dunn, Andrew K.

    2012-02-01

    Monitoring cerebral blood flow (CBF) during neurosurgery can provide important physiological information for a variety of surgical procedures. Although multiple intraoperative vascular monitoring technologies are currently available, a quantitative method that allows for continuous monitoring is still needed. Laser speckle contrast imaging (LSCI) is an optical imaging method with high spatial and temporal resolution that has been widely used to image CBF in animal models in vivo. In this pilot clinical study, we adapted a Zeiss OPMI Pentero neurosurgical microscope to obtain LSCI images by attaching a camera and a laser diode. This LSCI adapted instrument has been used to acquire full field flow images from 10 patients during tumor resection procedures. The patient's ECG was recorded during acquisition and image registration was performed in post-processing to account for pulsatile motion artifacts. Digital photographs confirmed alignment of vasculature and flow images in four cases, and a relative change in blood flow was observed in two patients after bipolar cautery. The LSCI adapted instrument has the capability to produce real-time, full field CBF image maps with excellent spatial resolution and minimal intervention to the surgical procedure. Results from this study demonstrate the feasibility of using LSCI to monitor blood flow during neurosurgery.
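    LSCI flow maps are conventionally computed from the spatial speckle contrast K = σ/μ over a small sliding window, with 1/K² often used as a relative flow index; the sketch below shows that standard computation on a synthetic frame and is not the processing pipeline of the adapted OPMI Pentero instrument.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw, window=7):
            """Spatial speckle contrast K = sigma/mean over a sliding window;
            lower K corresponds to faster flow."""
            raw = raw.astype(float)
            mean = uniform_filter(raw, window)
            mean_sq = uniform_filter(raw ** 2, window)
            var = np.clip(mean_sq - mean ** 2, 0.0, None)
            return np.sqrt(var) / (mean + 1e-12)

        frame = np.random.poisson(200, (480, 640)).astype(float)  # stand-in raw speckle frame
        K = speckle_contrast(frame)
        flow_index = 1.0 / (K ** 2 + 1e-12)                       # relative CBF map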

  9. Floodplain-mapping With Modern It-instruments

    NASA Astrophysics Data System (ADS)

    Bley, D.; Pasche, E.

    Of all natural hazards, floods occur globally most frequently, claim the most casualties and cause the biggest economic losses. Reasons are anthropogenic changes (river correction, land surface sealing, forest dieback (Waldsterben), climatic changes) combined with a high population density. Counteractions must be the resettlement of human beings away from flood-prone areas, flood controls and environmental monitoring, as well as renaturalization and provision of retention basins and areas. The consequence, especially if we think of the recent flood events on the rivers Rhine, Odra and Danube, must be preventive and sustainable flood control. As a consequence the legislator demanded in the Water Management Act nation-wide floodplain mapping, to preserve the necessary retention areas for high water flows and prevent misuses. In this context, water level calculations based on a one-dimensional steady-flow computer model are among the major tasks in hydraulic engineering practice. Bjoernsen Consulting Engineers developed in cooperation with the Technical University of Hamburg-Harburg the integrated software system WSPWIN. It is based upon state-of-the-art information technology and the latest developments in hydraulic research. WSPWIN consists of a pre-processing module, a calculation core, and GIS-based post-processing elements. As water level calculations require the recording and storage of large amounts of topographic and hydraulic data, it is helpful that WSPWIN includes an interactive graphical profile editor, which allows visual data checking and editing. The calculation program comprises water level calculations under steady uniform and steady non-uniform flow conditions using the formulas of Darcy-Weisbach and Gauckler-Manning-Strickler. Bridges, weirs, pipes as well as the effects of submerged vegetation are taken into account. Post-processing includes plotting facilities for cross-sectional and longitudinal profiles as well as map-oriented GIS-based data editing and result presentation. Import of digital elevation models and generation of profiles are possible. Furthermore, the intersection of the DEM with the calculated water level enables the creation of floodplain maps. WSPWIN is the official standard software for one-dimensional hydraulic modeling in six German Federal States, where it is used by all water-management agencies. Moreover, many private companies, universities and water associations employ WSPWIN as well. The program is presented showing the procedure and difficulties of floodplain mapping and flood control on a Bavarian river.
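    As a small worked illustration of the kind of calculation behind such water level computations, the sketch below solves the Gauckler-Manning-Strickler formula Q = k_st·A·R^(2/3)·√S for the normal depth of a rectangular section by bisection; the channel parameters are invented and this is not WSPWIN code.

        from math import sqrt

        def gms_discharge(depth, width, k_st, slope):
            """Gauckler-Manning-Strickler discharge for a rectangular cross-section."""
            area = width * depth
            radius = area / (width + 2.0 * depth)       # hydraulic radius
            return k_st * area * radius ** (2.0 / 3.0) * sqrt(slope)

        def normal_depth(q, width, k_st, slope, lo=1e-3, hi=20.0, tol=1e-6):
            """Find the depth at which the GMS discharge matches q (bisection)."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if gms_discharge(mid, width, k_st, slope) < q:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        # Example: 50 m3/s in a 20 m wide channel, k_st = 30 m^(1/3)/s, slope 0.001
        print(round(normal_depth(50.0, 20.0, 30.0, 0.001), 2))      # about 1.9 m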

  10. Linear landmark extraction in SAR images with application to augmented integrity aero-navigation: an overview to a novel processing chain

    NASA Astrophysics Data System (ADS)

    Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.

    2011-10-01

    In the context of augmented integrity Inertial Navigation Systems (INS), recent technological developments have been focusing on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks on high-resolution synthetic aperture radar (SAR) images and can be successfully exploited also in the context of augmented integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violation of the assumptions which underlie their theory. In addition to presenting a solution to the previous problem, we put forward a new post-processing algorithm useful to remove the main false alarms, to select the most probable edge position, to reconstruct broken edges and finally to vectorize them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
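    For reference, the ratio-of-averages idea can be sketched compactly: for each pixel, compare the mean backscatter of the window ahead with the window behind along one direction and take the larger of the ratio and its inverse as edge strength; the window size, threshold and wrap-around border handling below are simplifying assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        def roa_edge_strength(image, half=4, axis=1):
            """Ratio of the mean of the next `half` pixels to the mean of the
            previous `half` pixels along `axis` (borders wrap in this sketch)."""
            img = image.astype(float)
            win = uniform_filter1d(img, size=half, axis=axis)   # running window mean
            ahead = np.roll(win, -(half // 2), axis=axis)
            behind = np.roll(win, half // 2, axis=axis)
            ratio = (ahead + 1e-12) / (behind + 1e-12)
            return np.maximum(ratio, 1.0 / ratio)               # >1 suggests an edge

        sar = np.random.gamma(shape=4.0, scale=25.0, size=(256, 256))  # speckled stand-in
        edge_map = roa_edge_strength(sar) > 1.5                        # assumed threshold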

  11. LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques

    NASA Technical Reports Server (NTRS)

    Thompson, David E.; Thirumalainambi, Rajkumar

    2006-01-01

    This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is towards comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA Testbeds, Synthetic data systems, and Simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.

  12. Recent advances in the reconstruction of cranio-maxillofacial defects using computer-aided design/computer-aided manufacturing.

    PubMed

    Oh, Ji-Hyeon

    2018-12-01

    With the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology, it has been possible to reconstruct the cranio-maxillofacial defect with more accurate preoperative planning, precise patient-specific implants (PSIs), and shorter operation times. The manufacturing processes include subtractive manufacturing and additive manufacturing and should be selected in consideration of the material type, available technology, post-processing, accuracy, lead time, properties, and surface quality. Materials such as titanium, polyethylene, polyetheretherketone (PEEK), hydroxyapatite (HA), poly-DL-lactic acid (PDLLA), polylactide-co-glycolide acid (PLGA), and calcium phosphate are used. Design methods for the reconstruction of cranio-maxillofacial defects include the use of a pre-operative model printed with pre-operative data, printing a cutting guide or template after virtual surgery, a model after virtual surgery printed with reconstructed data using a mirror image, and manufacturing PSIs by directly obtaining PSI data after reconstruction using a mirror image. By selecting the appropriate design method, manufacturing process, and implant material according to the case, it is possible to obtain a more accurate surgical procedure, reduced operation time, the prevention of various complications that can occur using the traditional method, and predictive results compared to the traditional method.

  13. ChromaStarPy: A Stellar Atmosphere and Spectrum Modeling and Visualization Lab in Python

    NASA Astrophysics Data System (ADS)

    Short, C. Ian; Bayer, Jason H. T.; Burns, Lindsey M.

    2018-02-01

    We announce ChromaStarPy, an integrated general stellar atmospheric modeling and spectrum synthesis code written entirely in python V. 3. ChromaStarPy is a direct port of the ChromaStarServer (CSServ) Java modeling code described in earlier papers in this series, and many of the associated JavaScript (JS) post-processing procedures have been ported and incorporated into CSPy so that students have access to ready-made data products. A python integrated development environment (IDE) allows a student in a more advanced course to experiment with the code and to graphically visualize intermediate and final results, ad hoc, as they are running it. CSPy allows students and researchers to compare modeled to observed spectra in the same IDE in which they are processing observational data, while having complete control over the stellar parameters affecting the synthetic spectra. We also take the opportunity to describe improvements that have been made to the related codes, ChromaStar (CS), CSServ, and ChromaStarDB (CSDB), that, where relevant, have also been incorporated into CSPy. The application may be found at the home page of the OpenStars project: http://www.ap.smu.ca/OpenStars/.

  14. Design, fabrication and skin-electrode contact analysis of polymer microneedle-based ECG electrodes

    NASA Astrophysics Data System (ADS)

    O'Mahony, Conor; Grygoryev, Konstantin; Ciarlone, Antonio; Giannoni, Giuseppe; Kenthao, Anan; Galvin, Paul

    2016-08-01

    Microneedle-based ‘dry’ electrodes have immense potential for use in diagnostic procedures such as electrocardiography (ECG) analysis, as they eliminate several of the drawbacks associated with the conventional ‘wet’ electrodes currently used for physiological signal recording. To be commercially successful in such a competitive market, it is essential that dry electrodes are manufacturable in high volumes and at low cost. In addition, the topographical nature of these emerging devices means that electrode performance is likely to be highly dependent on the quality of the skin-electrode contact. This paper presents a low-cost, wafer-level micromoulding technology for the fabrication of polymeric ECG electrodes that use microneedle structures to make a direct electrical contact to the body. The double-sided moulding process can be used to eliminate post-process via creation and wafer dicing steps. In addition, measurement techniques have been developed to characterize the skin-electrode contact force. We perform the first analysis of signal-to-noise ratio dependency on contact force, and show that although microneedle-based electrodes can outperform conventional gel electrodes, the quality of ECG recordings is significantly dependent on temporal and mechanical aspects of the skin-electrode interface.

  15. Post-processing of a low-flow forecasting system in the Thur basin (Switzerland)

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Joerg-Hess, Stefanie; Bernhard, Luzi; Zappa, Massimiliano

    2015-04-01

    Low-flows and droughts are natural hazards with potentially severe impacts and economic loss or damage in a number of environmental and socio-economic sectors. As droughts develop slowly, there is time to prepare and pre-empt some of these impacts. Real-time information and forecasting of a drought situation can therefore be an effective component of drought management. Although Switzerland has traditionally been more concerned with problems related to floods, in recent years some unprecedented low-flow situations have been experienced. Driven by the climate change debate, a drought information platform has been developed to guide water resources management during situations where water resources drop below critical low-flow levels characterised by the indices duration (time between onset and offset), severity (cumulative water deficit) and magnitude (severity/duration). However, to gain maximum benefit from such an information system it is essential to remove the bias from the meteorological forecast, to derive optimal estimates of the initial conditions, and to post-process the stream-flow forecasts. Quantile mapping methods for pre-processing the meteorological forecasts and improved data assimilation methods of snow measurements, which accounts for much of the seasonal stream-flow predictability for the majority of the basins in Switzerland, have been tested previously. The objective of this study is the testing of post-processing methods in order to remove bias and dispersion errors and to derive the predictive uncertainty of a calibrated low-flow forecast system. Therefore, various stream-flow error correction methods with different degrees of complexity have been applied and combined with the Hydrological Uncertainty Processor (HUP) in order to minimise the differences between the observations and model predictions and to derive posterior probabilities. The complexity of the analysed error correction methods ranges from simple AR(1) models to methods including wavelet transformations and support vector machines. These methods have been combined with forecasts driven by Numerical Weather Prediction (NWP) systems with different temporal and spatial resolutions, lead times and numbers of ensemble members, covering short- to medium- to extended-range forecasts (COSMO-LEPS, 10-15 days, monthly and seasonal ENS) as well as climatological forecasts. Additionally, the suitability of various skill scores and efficiency measures for low-flow predictions will be tested. Amongst others, the 2AFC (two-alternative forced choice) score and the quantile skill score and its decompositions will be applied to evaluate the probabilistic forecasts and the effects of post-processing. First results of the performance of the low-flow predictions of the hydrological model PREVAH initialised with different NWPs will be shown.
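    The simplest of the error-correction methods mentioned, an AR(1) model of the forecast error, can be sketched as follows; the error series, units and lead times are hypothetical and this is not the operational PREVAH post-processor.

        import numpy as np

        def ar1_coefficient(errors):
            """Lag-1 autoregression coefficient of past forecast errors (obs - sim)."""
            e = np.asarray(errors, dtype=float)
            return np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)

        def ar1_corrected_forecast(sim_forecast, last_error, phi):
            """Propagate the last known error over the lead times and add it back."""
            leads = np.arange(1, len(sim_forecast) + 1)
            return np.asarray(sim_forecast, dtype=float) + last_error * phi ** leads

        past_errors = np.array([0.8, 0.6, 0.5, 0.35, 0.3])       # m3/s, hypothetical
        phi = ar1_coefficient(past_errors)
        print(ar1_corrected_forecast([2.1, 2.0, 1.9], past_errors[-1], phi))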

  16. a Method for the Positioning and Orientation of Rail-Bound Vehicles in Gnss-Free Environments

    NASA Astrophysics Data System (ADS)

    Hung, R.; King, B. A.; Chen, W.

    2016-06-01

    Mobile Mapping Systems (MMS) are increasingly applied for spatial data collection to support different fields because of their efficiency and the levels of detail they can provide. The Position and Orientation System (POS), which is conventionally employed for locating and orienting an MMS, allows direct georeferencing of spatial data in real time. Since the performance of a POS depends on both the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), poor GNSS conditions, such as in long tunnels and underground, introduce the necessity for post-processing. In above-ground railways, mobile mapping technology is employed with high-performance sensors for finite usage, which has considerable potential for enhancing railway safety and management in real time. In contrast, underground railways present a challenge for a conventional POS, thus alternative configurations are necessary to maintain data accuracy and alleviate the need for post-processing. This paper introduces a method of rail-bound navigation to replace the role of GNSS for railway applications. The proposed method integrates INS and track alignment data for environment-independent navigation and reduces the demand for post-processing. The principle of rail-bound navigation is presented and its performance is verified by an experiment using a consumer-grade Inertial Measurement Unit (IMU) and a small-scale railway model. The method produced a substantial improvement in position and orientation for a poorly initialised system, reaching centimetre-level positional accuracy. The potential improvements indicated by, and the limitations of, rail-bound navigation are also considered for further development in existing railway systems.

  17. Research on the magnetorheological finishing of large aperture off-axis aspheric optical surfaces for zinc sulfide

    NASA Astrophysics Data System (ADS)

    Zhang, Yunfei; Huang, Wen; Zheng, Yongcheng; Ji, Fang; Xu, Min; Duan, Zhixin; Luo, Qing; Liu, Qian; Xiao, Hong

    2016-03-01

    Zinc sulfide is a typical infrared optical material, commonly produced using single point diamond turning (SPDT). SPDT can efficiently produce zinc sulfide aspheric surfaces with acceptable figure error; however, the tool marks left by the diamond turning process cause high micro-roughness that degrades the optical performance when the optics are used in the visible region of the spectrum. Magnetorheological finishing (MRF) is a deterministic, sub-aperture polishing technology that is very helpful in improving both surface micro-roughness and surface figure. This paper mainly investigates MRF technology for large aperture off-axis aspheric zinc sulfide optical surfaces. The topological structure and coordinate transformations of the MRF machine tool PKC1200Q2 are analyzed and its kinematics is derived; the post-processing algorithm model of MRF for an optical lens is then established. Taking the post-processing of an off-axis aspheric surface as an example, a post-processing algorithm that can be used for a raster tool path is deduced and the errors produced by the approximate treatment are analyzed. A polishing algorithm for trajectory planning and dwell time, based on a matrix equation and optimization theory, is also presented. Adopting this algorithm, an experiment was performed to machine a large-aperture off-axis aspheric surface on an MRF machine developed in-house. After several polishing iterations, the figure accuracy improved from 3.3λ to 2.0λ (PV) and from 0.451λ to 0.327λ (RMS). The algorithm can also be used to polish other shapes, including spheres, aspheres and prisms.
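
    The dwell-time computation described above (matrix equation plus optimization) can be illustrated as a non-negative least-squares deconvolution of the desired removal map with the polishing spot's influence function. The Gaussian influence function, grids and numbers below are invented for illustration and are not the PKC1200Q2 parameters.

    # Dwell time as non-negative least squares: A @ t ~= desired_removal, with t >= 0.
    import numpy as np
    from scipy.optimize import nnls

    n_pts, n_dwell = 200, 200
    x = np.linspace(0.0, 100.0, n_pts)        # surface positions along a raster line (mm)
    xd = np.linspace(0.0, 100.0, n_dwell)     # dwell positions of the MRF spot (mm)

    sigma = 3.0                               # assumed spot width (mm)
    A = 5.0 * np.exp(-0.5 * ((x[:, None] - xd[None, :]) / sigma) ** 2)   # removal rate (nm/s)

    desired_removal = 50.0 + 20.0 * np.sin(2.0 * np.pi * x / 50.0)       # target error map (nm)

    t, residual = nnls(A, desired_removal)    # dwell times in seconds, constrained to be >= 0
    print("total polishing time [s]:", round(t.sum(), 1), " fit residual:", round(residual, 3))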

  18. Automatic small target detection in synthetic infrared images

    NASA Astrophysics Data System (ADS)

    Yardımcı, Ozan; Ulusoy, İlkay

    2017-05-01

    Automatic detection of targets at far distances is a very challenging problem. Background clutter and small target size are the main difficulties that must be overcome to reach high detection performance at a low computational load. The choice of pre-processing, detection and post-processing approaches strongly affects the final results. In this study, various methods from the literature were first evaluated separately for each of these stages using simulated test scenarios. Then, a full detection system was constructed from the available solutions that yielded the best detection performance. However, although a precision of 100% was reached, the recall values stayed low, around 25-45%. Finally, a post-processing method was proposed that increased the recall value while keeping the precision at 100%. The proposed post-processing method, which is based on local operations, increased the recall value to 65-95% in all test scenarios.
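
    The paper's post-processing is described only as being "based on local operations"; the sketch below shows one generic local scheme in that spirit (relaxing the detection threshold in a small window around each high-confidence detection), not the authors' exact method. The thresholds and window radius are assumptions.

    import numpy as np

    def local_postprocess(score_map, t_high, t_low, radius=5):
        """Recover weak detections near confident seeds to raise recall."""
        seeds = score_map >= t_high                   # precise but low-recall detections
        out = seeds.copy()
        h, w = score_map.shape
        for y, x in zip(*np.nonzero(seeds)):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y0:y1, x0:x1] |= score_map[y0:y1, x0:x1] >= t_low
        return out

    scores = np.random.default_rng(1).random((64, 64))
    print(local_postprocess(scores, t_high=0.99, t_low=0.90).sum(), "pixels flagged")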

  19. Capacitive Micro Pressure Sensor Integrated with a Ring Oscillator Circuit on Chip

    PubMed Central

    Dai, Ching-Liang; Lu, Po-Wei; Chang, Chienliu; Liu, Cheng-Yang

    2009-01-01

    The study investigates a capacitive micro pressure sensor integrated with a ring oscillator circuit on a chip. The integrated capacitive pressure sensor is fabricated using the commercial CMOS (complementary metal oxide semiconductor) process and a post-process. The ring oscillator is employed to convert the capacitance of the pressure sensor into the frequency output. The pressure sensor consists of 16 sensing cells in parallel. Each sensing cell contains a top electrode and a lower electrode, and the top electrode is a sandwich membrane. The pressure sensor needs a post-CMOS process to release the membranes after completion of the CMOS process. The post-process uses etchants to etch the sacrificial layers, and to release the membranes. The advantages of the post-process include easy execution and low cost. Experimental results reveal that the pressure sensor has a high sensitivity of 7 Hz/Pa in the pressure range of 0–300 kPa. PMID:22303167
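
    As a rough illustration of how a ring oscillator converts the sensing capacitance into a frequency output, a first-order RC delay model gives f ~ 1 / (2 * N * R * (C_fixed + C_sense)). The stage count, effective resistance and capacitances below are invented for illustration and are not taken from the paper.

    def ring_oscillator_frequency(c_sense_f, n_stages=11, r_eff_ohm=50e3, c_fixed_f=100e-15):
        """First-order estimate: the stage delay grows linearly with the load capacitance."""
        t_delay = r_eff_ohm * (c_fixed_f + c_sense_f)
        return 1.0 / (2.0 * n_stages * t_delay)

    for c in (1.0e-12, 1.1e-12, 1.2e-12):         # sensing capacitance in farads
        print(f"C = {c * 1e12:.1f} pF  ->  f = {ring_oscillator_frequency(c) / 1e6:.2f} MHz")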

  20. Capacitive micro pressure sensor integrated with a ring oscillator circuit on chip.

    PubMed

    Dai, Ching-Liang; Lu, Po-Wei; Chang, Chienliu; Liu, Cheng-Yang

    2009-01-01

    The study investigates a capacitive micro pressure sensor integrated with a ring oscillator circuit on a chip. The integrated capacitive pressure sensor is fabricated using the commercial CMOS (complementary metal oxide semiconductor) process and a post-process. The ring oscillator is employed to convert the capacitance of the pressure sensor into the frequency output. The pressure sensor consists of 16 sensing cells in parallel. Each sensing cell contains a top electrode and a lower electrode, and the top electrode is a sandwich membrane. The pressure sensor needs a post-CMOS process to release the membranes after completion of the CMOS process. The post-process uses etchants to etch the sacrificial layers, and to release the membranes. The advantages of the post-process include easy execution and low cost. Experimental results reveal that the pressure sensor has a high sensitivity of 7 Hz/Pa in the pressure range of 0-300 kPa.

  1. Lagrangian postprocessing of computational hemodynamics.

    PubMed

    Shadden, Shawn C; Arzani, Amirhossein

    2015-01-01

    Recent advances in imaging, modeling, and computing have rapidly expanded our capabilities to model hemodynamics in the large vessels (heart, arteries, and veins). This data encodes a wealth of information that is often under-utilized. Modeling (and measuring) blood flow in the large vessels typically amounts to solving for the time-varying velocity field in a region of interest. Flow in the heart and larger arteries is often complex, and velocity field data provides a starting point for investigating the hemodynamics. This data can be used to perform Lagrangian particle tracking, and other Lagrangian-based postprocessing. As described herein, Lagrangian methods are necessary to understand inherently transient hemodynamic conditions from the fluid mechanics perspective, and to properly understand the biomechanical factors that lead to acute and gradual changes of vascular function and health. The goal of the present paper is to review Lagrangian methods that have been used in post-processing velocity data of cardiovascular flows.
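
    Lagrangian particle tracking, as mentioned above, amounts to integrating tracer positions through the time-varying velocity field. The sketch below advects particles through a toy analytic 2-D field with a fourth-order Runge-Kutta step; in practice the velocity would be interpolated from measured or simulated hemodynamic data.

    import numpy as np

    def velocity(x, t):
        """Toy unsteady rotational field standing in for blood-flow velocity data."""
        u = -x[..., 1] * (1.0 + 0.1 * np.sin(t))
        v = x[..., 0] * (1.0 + 0.1 * np.sin(t))
        return np.stack([u, v], axis=-1)

    def rk4_step(x, t, dt):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    particles = np.random.default_rng(2).uniform(-1.0, 1.0, size=(1000, 2))
    t, dt = 0.0, 0.01
    for _ in range(500):                          # integrate trajectories forward in time
        particles = rk4_step(particles, t, dt)
        t += dt
    print(particles.mean(axis=0))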

  2. The design and implementation of postprocessing for depth map on real-time extraction system.

    PubMed

    Tang, Zhiwei; Li, Bin; Li, Huosheng; Xu, Zheng

    2014-01-01

    Depth estimation is a key technology for stereo vision. Real-time depth maps can be obtained with hardware, but hardware cannot implement algorithms as complicated as software can because of restrictions in the hardware structure. Consequently, some wrong stereo matches will inevitably occur when depth estimation is performed in hardware such as an FPGA. To solve this problem, a postprocessing function is designed in this paper. After a matching-cost uniqueness test, both left-right and right-left consistency checks are implemented; the cavities in the depth maps are then filled with correct depth values on the basis of the right-left consistency check. The experimental results show that the depth map extraction and the postprocessing function can be implemented in real time in the same system and, moreover, that the quality of the resulting depth maps is satisfactory.
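
    The two ideas described above, a left-right/right-left consistency check followed by filling the resulting cavities with valid depth values, can be sketched in software as below. The consistency threshold and the right-to-left filling direction are illustrative assumptions; the paper implements the equivalent logic in FPGA hardware.

    import numpy as np

    def lr_consistency(disp_left, disp_right, max_diff=1):
        """Mark pixels whose left and right disparities agree within max_diff."""
        h, w = disp_left.shape
        valid = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                d = int(disp_left[y, x])
                if 0 <= x - d < w and abs(d - int(disp_right[y, x - d])) <= max_diff:
                    valid[y, x] = True
        return valid

    def fill_holes(disp, valid):
        """Fill invalid pixels with the nearest valid disparity to their right."""
        out = disp.astype(float).copy()
        for y in range(out.shape[0]):
            last = 0.0                            # fallback for rightmost invalid pixels
            for x in range(out.shape[1] - 1, -1, -1):
                if valid[y, x]:
                    last = out[y, x]
                else:
                    out[y, x] = last
        return out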

  3. Lagrangian postprocessing of computational hemodynamics

    PubMed Central

    Shadden, Shawn C.; Arzani, Amirhossein

    2014-01-01

    Recent advances in imaging, modeling and computing have rapidly expanded our capabilities to model hemodynamics in the large vessels (heart, arteries and veins). This data encodes a wealth of information that is often under-utilized. Modeling (and measuring) blood flow in the large vessels typically amounts to solving for the time-varying velocity field in a region of interest. Flow in the heart and larger arteries is often complex, and velocity field data provides a starting point for investigating the hemodynamics. This data can be used to perform Lagrangian particle tracking, and other Lagrangian-based postprocessing. As described herein, Lagrangian methods are necessary to understand inherently transient hemodynamic conditions from the fluid mechanics perspective, and to properly understand the biomechanical factors that lead to acute and gradual changes of vascular function and health. The goal of the present paper is to review Lagrangian methods that have been used in post-processing velocity data of cardiovascular flows. PMID:25059889

  4. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons

    PubMed Central

    2012-01-01

    Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons up to the stage of preparing full-length reference sequence libraries. The LTRsift software is freely available at http://www.zbh.uni-hamburg.de/LTRsift under an open-source license. PMID:23131050

  5. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons.

    PubMed

    Steinbiss, Sascha; Kastens, Sascha; Kurtz, Stefan

    2012-11-07

    Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons up to the stage of preparing full-length reference sequence libraries. The LTRsift software is freely available at http://www.zbh.uni-hamburg.de/LTRsift under an open-source license.

  6. [Design and development of the DSA digital subtraction workstation].

    PubMed

    Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo

    2008-05-01

    According to the patient examination criteria and the demands of all related departments, a DSA digital subtraction workstation has been successfully designed and is introduced in this paper, based on an analysis of the characteristics of the video source of a GE DSA system that has no standard DICOM interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are converted into DICOM format and can then be shared across different machines.

  7. Development of a Vision-Based Particle Tracking Velocimetry Method and Post-Processing of Scattered Velocity Data

    DTIC Science & Technology

    2012-01-01

    the performance of the VB-PTV algorithm. Particle yield is changed subtly from the definition above and defined as the number of matches made over the...methods have been developed for use in fields as varied as cosmology (Bernardeau and van de Weygaert, 1996; Schaap and van de Weygaert, 2000) and...becomes a long list of equations and variable definitions , interested readers are referred to Gunes et al., (2006); Sacks et al., (1989); and Lophaven

  8. Development of a Low-Latency, High Data Rate, Differential GPS Relative Positioning System for UAV Formation Flight Control

    DTIC Science & Technology

    2006-09-01

    spiral development cycle involved transporting the software processes from a Windows XP / MATLAB environment to a Linux / C++ environment. This...tested on. Additionally, in the case of the GUMSTIX PC boards, the LINUX operating system is burned into the read-only memory. Lastly, both PC-104 and...both the real-time environment and the post-processed environment. When the system operates in real-time mode, an output file is generated which

  9. Interventional MR: vascular applications.

    PubMed

    Smits, H F; Bos, C; van der Weide, R; Bakker, C J

    1999-01-01

    Three strategies for visualisation of MR-dedicated guidewires and catheters have been proposed, namely active tracking, the technique of locally induced field inhomogeneity and passive susceptibility-based tracking. In this article the pros and cons of these techniques are discussed, including the development of MR-dedicated guidewires and catheters, scan techniques, post-processing tools, and display facilities for MR tracking. Finally, some of the results obtained with MR tracking are discussed.

  10. EXODUS II: A finite element data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  11. Experimental investigation of the structural behavior of equine urethra.

    PubMed

    Natali, Arturo Nicola; Carniel, Emanuele Luigi; Frigo, Alessandro; Fontanella, Chiara Giulia; Rubini, Alessandro; Avital, Yochai; De Benedictis, Giulia Maria

    2017-04-01

    An integrated experimental and computational investigation was developed with the aim of providing a methodology for characterizing the structural response of the urethral duct. The investigation provides information that is suitable for the actual comprehension of lower urinary tract mechanical functionality and the optimal design of prosthetic devices. Experimental activity entailed the execution of inflation tests performed on segments of horse penile urethras from both proximal and distal regions. Inflation tests were developed imposing different volumes. Each test was performed according to a two-step procedure. The tubular segment was inflated almost instantaneously during the first step, while volume was held constant for about 300 s to allow the development of relaxation processes during the second step. Tests performed on the same specimen were interspersed with 600 s of rest to allow the recovery of the specimen's mechanical condition. Results from the experimental activities were statistically analyzed and processed by means of a specific mechanical model. This computational model was developed with the purpose of interpreting the general pressure-volume-time response of biologic tubular structures. The model includes parameters that interpret the elastic and viscous behavior of hollow structures, directly correlated with the results from the experimental activities. Post-processing of the experimental data provided information about the non-linear elastic and time-dependent behavior of the urethral duct. In detail, statistically representative pressure-volume and pressure-relaxation curves were identified and summarized by structural parameters. Considering elastic properties, the initial stiffness ranged between 0.677 ± 0.026 kPa and 0.262 ± 0.006 kPa moving from the proximal to the distal region of the penile urethra. The viscous parameters showed values typical of soft biological tissues: τ1 = 0.153 ± 0.018 s and τ2 = 17.458 ± 1.644 s for the proximal region, and τ1 = 0.201 ± 0.085 s and τ2 = 8.514 ± 1.379 s for the distal region. A general procedure for the mechanical characterization of the urethral duct has been provided. The proposed methodology allows identifying mechanical parameters that properly express the mechanical behavior of the biological tube. The approach is especially suitable for evaluating the influence of degenerative phenomena on lower urinary tract mechanical functionality. This information is essential for the optimal design of potential surgical procedures and devices.
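
    The reported relaxation behaviour with two time constants suggests a two-term exponential decay of pressure at constant volume, p(t) = p_inf + a1*exp(-t/tau1) + a2*exp(-t/tau2). The sketch below fits such a curve to a synthetic relaxation record with scipy; the functional form, units and data are assumptions for illustration, not the authors' model implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def relaxation(t, p_inf, a1, tau1, a2, tau2):
        return p_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    t = np.geomspace(0.01, 300.0, 400)                         # s, dense at early times
    p_true = relaxation(t, 2.0, 0.8, 0.2, 0.5, 15.0)           # kPa
    p_meas = p_true + np.random.default_rng(3).normal(0.0, 0.01, t.size)

    popt, _ = curve_fit(relaxation, t, p_meas,
                        p0=[1.0, 1.0, 0.5, 1.0, 10.0], maxfev=10000)
    print("p_inf, a1, tau1, a2, tau2 =", np.round(popt, 3))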

  12. Comparative Monte Carlo study on the performance of integration- and list-mode detector configurations for carbon ion computed tomography

    NASA Astrophysics Data System (ADS)

    Meyer, Sebastian; Gianoli, Chiara; Magallanes, Lorena; Kopp, Benedikt; Tessonnier, Thomas; Landry, Guillaume; Dedes, George; Voss, Bernd; Parodi, Katia

    2017-02-01

    Ion beam therapy offers the possibility of a highly conformal tumor-dose distribution; however, this technique is extremely sensitive to inaccuracies in the treatment procedures. Ambiguities in the conversion of Hounsfield units of the treatment planning x-ray CT to relative stopping power (RSP) can cause uncertainties in the estimated ion range of up to several millimeters. Ion CT (iCT) represents a favorable solution, allowing the RSP to be assessed directly. In this simulation study we investigate the performance of the integration-mode configuration for carbon iCT, in comparison with a single-particle approach under the same set-up. The experimental detector consists of a stack of 61 air-filled parallel-plate ionization chambers, interleaved with 3 mm thick PMMA absorbers. By means of Monte Carlo simulations, this design was applied to acquire iCTs of phantoms of tissue-equivalent materials. An optimization of the acquisition parameters was performed to reduce the dose exposure, and the implications of a reduced absorber thickness were assessed. In order to overcome the limitations of integration-mode detection in the presence of lateral tissue heterogeneities, a dedicated post-processing method using a linear decomposition of the detector signal was developed and its performance was compared to the list-mode acquisition. For the current set-up, the phantom dose could be reduced to below 30 mGy with only minor image quality degradation. Using the decomposition method, a correct identification of the components and an RSP accuracy improvement of around 2.0% were obtained. The comparison of integration- and list-mode indicated a slightly better image quality for the latter, with an average median RSP error below 1.8% and 1.0%, respectively. With a decreased absorber thickness a reduced RSP error was observed. Overall, these findings support the potential of iCT for low-dose RSP estimation, showing that integration-mode detectors with dedicated post-processing strategies can provide an RSP accuracy comparable to list-mode configurations.
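
    The dedicated post-processing mentioned above is a linear decomposition of the integrated detector signal. As a generic illustration, the measured depth profile can be modelled as a linear combination of pre-computed basis signals and solved by least squares; the synthetic basis shapes and weights below are invented and do not reproduce the paper's decomposition.

    import numpy as np

    n_channels = 61                              # ionization chambers in the stack
    depth = np.arange(n_channels)

    def depth_profile(peak):
        """Crude stand-in for a single-component depth-signal curve."""
        curve = 1.0 + 0.04 * depth               # slow rise with depth
        tail = depth > peak
        curve[tail] = curve[peak] * np.exp(-(depth[tail] - peak) / 1.5)   # distal fall-off
        return curve

    basis = np.stack([depth_profile(p) for p in (20, 30, 40)], axis=1)    # (channels, components)
    true_w = np.array([0.2, 0.7, 0.1])
    signal = basis @ true_w + np.random.default_rng(4).normal(0.0, 0.01, n_channels)

    weights, *_ = np.linalg.lstsq(basis, signal, rcond=None)              # recovered fractions
    print("recovered weights:", np.round(weights, 3))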

  13. Interventional spinal procedures guided and controlled by a 3D rotational angiographic unit.

    PubMed

    Pedicelli, Alessandro; Verdolotti, Tommaso; Pompucci, Angelo; Desiderio, Flora; D'Argento, Francesco; Colosimo, Cesare; Bonomo, Lorenzo

    2011-12-01

    The aim of this paper is to demonstrate the usefulness of 2D multiplanar reformatted (MPR) images obtained from rotational acquisitions with cone-beam computed tomography technology during percutaneous extra-vascular spinal procedures performed in the angiography suite. We used a 3D rotational angiographic unit with a flat panel detector. MPR images were obtained from a rotational acquisition of 8 s (240 images at 30 fps), a tube rotation of 180°, and 5 s of post-processing on a local workstation. Multislice CT (MSCT) is the best guidance system for spinal approaches, permitting direct tomographic visualization of each spinal structure. Many operators, however, are trained with fluoroscopy, which is less expensive and allows real-time guidance, and in many centers the angiography suite is more readily available for percutaneous procedures. We present our 6-year experience in fluoroscopy-guided spinal procedures, which were performed under different conditions using MPR images. We illustrate cases of vertebroplasty, epidural injections, selective foraminal nerve root block, facet block, percutaneous treatment of disc herniation and spine biopsy, all performed with the help of MPR images for guidance and control in the event of difficult or anatomically complex access. The integrated use of "CT-like" MPR images allows the execution of spinal procedures under fluoroscopy guidance alone in all cases of dorso-lumbar access, with an evident reduction of risks and complications, and without the need for recourse to MSCT guidance, thus eliminating CT-room time (which often carries a heavy diagnostic workload) and avoiding organizational problems for procedures that need, for example, combined use of a C-arm in the CT room.

  14. The role of ensemble post-processing for modeling the ensemble tail

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Over the past decades the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change in predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions and the use of different covariates (ensemble mean, maximum, spread, ...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing step in itself, focusing on the tails only. It is therefore unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc. 134: 2051-2066. Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc. 141: 3366-3382. Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting 22 (5), 1089-1100. Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes 13, 109-132. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141: 807-818.
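
    A minimal peak-over-threshold example of the tail modelling discussed above: exceedances over a high threshold are fitted with a generalized Pareto distribution and an extreme quantile is derived. The synthetic data, the 95% threshold choice and the target quantile are assumptions for illustration.

    import numpy as np
    from scipy.stats import genpareto

    series = np.random.default_rng(5).gamma(shape=2.0, scale=1.0, size=5000)

    u = np.quantile(series, 0.95)                      # threshold
    exceed = series[series > u] - u
    shape, _, scale = genpareto.fit(exceed, floc=0.0)  # fix the GPD location at zero

    p_u = exceed.size / series.size                    # exceedance probability of u
    q = 0.999                                          # target quantile of the full distribution
    x_q = u + genpareto.ppf(1.0 - (1.0 - q) / p_u, shape, loc=0.0, scale=scale)
    print(f"GPD shape = {shape:.3f}, scale = {scale:.3f}, 99.9% quantile ~ {x_q:.2f}")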

  15. Greenville Bridge Reach, Bendway Weirs

    DTIC Science & Technology

    2006-09-01

    However, these receivers are more expensive and heavier due to the radio and batteries. For this study, two Magellan GPS ProMARK X-CP receivers were...used to collect float data. The Magellan GPS ProMARK X-CP is a small, robust, light receiver that can log 9 hr of both pseudorange and carrier phase...require a high degree of accuracy. Using post-processing software, pseudorange GPS data recorded by the ProMARK X-CP can be post-processed

  16. A new image enhancement algorithm with applications to forestry stand mapping

    NASA Technical Reports Server (NTRS)

    Kan, E. P. F. (Principal Investigator); Lo, J. K.

    1975-01-01

    The author has identified the following significant results. The new algorithm produced cleaner classification maps in which holes below a predesignated size were eliminated while significant boundary information was preserved. These cleaner post-processed maps better resemble real-life timber stand maps and are thus more usable products than the unprocessed ones. Compared to an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.
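
    The hole-elimination step described above can be sketched with standard image-processing tools: connected regions of a class smaller than a predesignated size are relabelled with the dominant surrounding class. This is a generic illustration assuming non-negative integer class codes, not the original algorithm's implementation.

    import numpy as np
    from scipy import ndimage

    def remove_small_holes(class_map, min_size=5):
        out = class_map.copy()
        for cls in np.unique(class_map):
            mask = class_map == cls
            labels, n = ndimage.label(mask)                 # connected components of this class
            sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
            for lab in np.where(sizes < min_size)[0] + 1:
                region = labels == lab
                ring = ndimage.binary_dilation(region) & ~region
                neighbours = out[ring]
                if neighbours.size:                         # relabel with the dominant neighbour class
                    out[region] = np.bincount(neighbours).argmax()
        return out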

  17. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  18. Postprocessing for character recognition using pattern features and linguistic information

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi

    1993-04-01

    We propose a new method of post-processing for character recognition using pattern features and linguistic information. This method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters. The post-processing method is characterized by using two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and the existence of characters with similar patterns. Therefore, it is not practical for a character recognition system to recognize all characters in detail. First, the post-processing method generates a candidate character table by recognizing only the simplest features of the characters. Then, it selects words corresponding to the characters in the candidate character table by referring to a word and grammar dictionary and chooses the most suitable ones. If the correct character is included in the candidate character table, this process can correct an error; if the character is not included, it cannot. Therefore, the method uses the linguistic information (the word and grammar dictionary) to infer characters that are missing from the candidate character table, and then verifies each inferred character by character recognition using more complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used on the editorials of a Japanese newspaper (Asahi Shinbun).
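
    The first correction stage, choosing among the candidate characters with a word dictionary, can be illustrated with a toy example. The tiny candidate tables and dictionary below are invented; the actual system additionally uses grammar rules and a second, detailed recognition pass.

    from itertools import product

    def correct_with_dictionary(candidate_tables, dictionary):
        """candidate_tables: one list of candidate characters per text position."""
        best = None
        for chars in product(*candidate_tables):      # enumerate candidate combinations
            word = "".join(chars)
            if word in dictionary:
                return word                           # accept the first dictionary hit
            if best is None:
                best = word                           # fall back to the top-1 candidates
        return best

    tables = [["r", "n"], ["e", "c"], ["a", "o"], ["d", "cl"]]
    print(correct_with_dictionary(tables, {"read", "node", "rend"}))   # -> "read"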

  19. Optimization of diffusion-weighted single-refocused spin-echo EPI by reducing eddy-current artifacts and shortening the echo time.

    PubMed

    Shrestha, Manoj; Hok, Pavel; Nöth, Ulrike; Lienerth, Bianca; Deichmann, Ralf

    2018-03-30

    The purpose of this work was to optimize the acquisition of diffusion-weighted (DW) single-refocused spin-echo (srSE) data without intrinsic eddy-current compensation (ECC) for an improved performance of ECC postprocessing. The rationale is that srSE sequences without ECC may yield shorter echo times (TE) and thus higher signal-to-noise ratios (SNR) than srSE or twice-refocused spin-echo (trSE) schemes with intrinsic ECC. The proposed method employs dummy scans with DW gradients to drive eddy currents into a steady state before data acquisition. Parameters of the ECC postprocessing algorithm were also optimized. Simulations were performed to obtain minimum TE values for the proposed sequence and sequences with intrinsic ECC. Experimentally, the proposed method was compared with standard DW-trSE imaging, both in vitro and in vivo. Simulations showed substantially shorter TE for the proposed method than for methods with intrinsic ECC when using shortened echo readouts. Data of the proposed method showed a marked increase in SNR. A dummy scan duration of at least 1.5 s improved performance of the ECC postprocessing algorithm. Changes proposed for the DW-srSE sequence and for the parameter setting of the postprocessing ECC algorithm considerably reduced eddy-current artifacts and provided a higher SNR.

  20. Polymer:fullerene solar cells: materials, processing issues, and cell layouts to reach power conversion efficiency over 10%, a review

    NASA Astrophysics Data System (ADS)

    Etxebarria, Ikerne; Ajuria, Jon; Pacios, Roberto

    2015-01-01

    In spite of the impressive development achieved by organic photovoltaics throughout the last decades, especially in terms of reported power conversion efficiencies, there are still important technological and fundamental obstacles to circumvent before they can be implemented into reliable and long-lasting applications. Regarding device processing, the synthesis of highly soluble polymeric semiconductors first, and then fullerene derivatives, was initially considered an important breakthrough that would definitely change the fabrication of photovoltaics once and for all. The potential and the expectation raised by this technology are such that it is very difficult to keep track of the most significant progress now being published in different and even monographic journals. In this paper, we review the development of polymeric solar cells from its origins to the most efficient devices published to date. We separate these achievements into three different categories traditionally followed by the scientific community to push devices over 10% power conversion efficiency: active materials, strategies (fabrication/processing procedures) that mainly modify the active film morphology, and all the different cell layouts/architectures that have been used in order to extract as high a photocurrent as possible from the Sun. The synthesis of new donors, the use of additives and postprocessing techniques, buffer interlayers, and inverted and tandem designs are some of the most important aspects that are reviewed in detail in this paper. All have equally contributed to developing this technology and bringing it to the doors of commercialization.

  1. Methods of practice and guidelines for using survey-grade global navigation satellite systems (GNSS) to establish vertical datum in the United States Geological Survey

    USGS Publications Warehouse

    Rydlund, Jr., Paul H.; Densmore, Brenda K.

    2012-01-01

    Geodetic surveys have evolved through the years to the use of survey-grade (centimeter level) global positioning to perpetuate and post-process vertical datum. The U.S. Geological Survey (USGS) uses Global Navigation Satellite Systems (GNSS) technology to monitor natural hazards, ensure geospatial control for climate and land use change, and gather data necessary for investigative studies related to water, the environment, energy, and ecosystems. Vertical datum is fundamental to a variety of these integrated earth sciences. Essentially GNSS surveys provide a three-dimensional position x, y, and z as a function of the North American Datum of 1983 ellipsoid and the most current hybrid geoid model. A GNSS survey may be approached with post-processed positioning for static observations related to a single point or network, or involve real-time corrections to provide positioning "on-the-fly." Field equipment required to facilitate GNSS surveys range from a single receiver, with a power source for static positioning, to an additional receiver or network communicated by radio or cellular for real-time positioning. A real-time approach in its most common form may be described as a roving receiver augmented by a single-base station receiver, known as a single-base real-time (RT) survey. More efficient real-time methods involving a Real-Time Network (RTN) permit the use of only one roving receiver that is augmented to a network of fixed receivers commonly known as Continually Operating Reference Stations (CORS). A post-processed approach in its most common form involves static data collection at a single point. Data are most commonly post-processed through a universally accepted utility maintained by the National Geodetic Survey (NGS), known as the Online Position User Service (OPUS). More complex post-processed methods involve static observations among a network of additional receivers collecting static data at known benchmarks. Both classifications provide users flexibility regarding efficiency and quality of data collection. Quality assurance of survey-grade global positioning is often overlooked or not understood and perceived uncertainties can be misleading. GNSS users can benefit from a blueprint of data collection standards used to ensure consistency among USGS mission areas. A classification of GNSS survey qualities provide the user with the ability to choose from the highest quality survey used to establish objective points with low uncertainties, identified as a Level I, to a GNSS survey for general topographic control without quality assurance, identified as a Level IV. A Level I survey is strictly limited to post-processed methods, whereas Level II, Level III, and Level IV surveys integrate variations of a RT approach. Among these classifications, techniques involving blunder checks and redundancy are important, and planning that involves the assessment of the overall satellite configuration, as well as terrestrial and space weather, are necessary to ensure an efficient and quality campaign. Although quality indicators and uncertainties are identified in post-processed methods using CORS, the accuracy of a GNSS survey is most effectively expressed as a comparison to a local benchmark that has a high degree of confidence. Real-time and post-processed methods should incorporate these "trusted" benchmarks as a check during any campaign. Global positioning surveys are expected to change rapidly in the future. 
The expansion of continuously operating reference stations, combined with newly available satellite signals, and enhancements to the conterminous geoid, are all sufficient indicators for substantial growth in real-time positioning and quality thereof.

  2. A Wearable Real-Time and Non-Invasive Thoracic Cavity Monitoring System

    NASA Astrophysics Data System (ADS)

    Salman, Safa

    A surgery-free on-body monitoring system is proposed to evaluate the dielectric constant of internal body tissues (especially lung and heart) and effectively determine irregularities in real time. The proposed surgery-free on-body monitoring system includes a sensor, a post-processing technique, and an automated data collection circuit. Data are automatically collected from the sensor electrodes and then post-processed to extract the electrical properties of the underlying biological tissue(s). To demonstrate the imaging concept, planar and wrap-around sensors are devised. These sensors are designed to detect changes in the dielectric constant of inner tissues (lung and heart). The planar sensor focuses on a single organ, while the wrap-around sensor allows for imaging of the thoracic cavity's cross section. Moreover, post-processing techniques are proposed to complement the sensors for a more complete on-body monitoring system. The idea behind the post-processing technique is to suppress interference from the outer layers (skin, fat, muscle, and bone). The sensors and post-processing techniques yield a high signal (from the inner layers) to noise (from the outer layers) ratio. Additionally, data collection circuits are proposed for a more robust and stand-alone system. The circuit design aims to sequentially activate each port of the sensor, with portions of the propagating signal received at all passive ports in the form of a voltage at the probes. The voltages are converted to scattering parameters, which are then used in the post-processing technique to obtain εr. The concept of wearability is also considered through the use of electrically conductive fibers (E-fibers). These fibers show performance matching that of copper, especially at low frequencies, making them a viable substitute. For the cases considered, the proposed sensors show promising results in recovering the permittivity of deep tissues with a maximum error of 13.5%. These sensors pave the way for a new class of medical sensors through accuracy improvements and avoidance of inverse scattering techniques.

  3. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik

    2016-04-01

    Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove in particular high frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for the GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Two non-isotropic filter methods from Kusche (2007) and Swenson and Wahr (2006) are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8):L08402.

  4. Statistical Post-Processing of Wind Speed Forecasts to Estimate Relative Economic Value

    NASA Astrophysics Data System (ADS)

    Courtney, Jennifer; Lynch, Peter; Sweeney, Conor

    2013-04-01

    The objective of this research is to get the best possible wind speed forecasts for the wind energy industry by using an optimal combination of well-established forecasting and post-processing methods. We start with the ECMWF 51-member ensemble prediction system (EPS), which is underdispersive and hence uncalibrated. We aim to produce wind speed forecasts that are more accurate and calibrated than the EPS. The 51 members of the EPS are clustered into 8 weighted representative members (RMs), chosen to minimize the within-cluster spread while maximizing the inter-cluster spread. The forecasts are then downscaled using two limited area models, WRF and COSMO, at two resolutions, 14 km and 3 km. This process creates four distinct ensembles, which are used as input to statistical post-processing methods that require multi-model forecasts. Two such methods are presented here. The first, Bayesian Model Averaging, has been proven to provide more calibrated and accurate wind speed forecasts than the ECMWF EPS using this multi-model input data. The second, heteroscedastic censored regression, is also showing positive results. We compare the two post-processing methods, applied to a year of hindcast wind speed data around Ireland, using an array of deterministic and probabilistic verification techniques, such as MAE, CRPS, probability integral transforms and verification rank histograms, to show which method provides the most accurate and calibrated forecasts. However, the value of a forecast to an end-user cannot be fully quantified by just the accuracy and calibration measurements mentioned, as the relationship between skill and value is complex. Capturing the full potential of the forecast benefits also requires detailed knowledge of the end-users' weather-sensitive decision-making processes and, most importantly, the economic impact the forecasts will have on their income. Finally, we present the continuous relative economic value of both post-processing methods to identify which is more beneficial to the wind energy industry of Ireland.
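
    One of the probabilistic verification measures named above, the continuous ranked probability score (CRPS), can be estimated directly from ensemble members with the standard kernel form, CRPS = E|X - y| - 0.5 E|X - X'|. The wind-speed numbers below are synthetic.

    import numpy as np

    def crps_ensemble(members, obs):
        members = np.asarray(members, dtype=float)
        term1 = np.mean(np.abs(members - obs))
        term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
        return term1 - term2

    forecast = np.random.default_rng(6).normal(8.0, 1.5, size=51)   # 51-member ensemble (m/s)
    print("CRPS =", round(crps_ensemble(forecast, obs=9.2), 3))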

  5. Direct generation of all-optical random numbers from optical pulse amplitude chaos.

    PubMed

    Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong

    2012-02-13

    We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we require neither a sampling procedure nor externally triggered clocks, but directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric amplitude distribution. Thus, in theory, the obtained random number sequence has high-quality randomness without post-processing, as verified by industry-standard statistical tests.

  6. Generating grid states from Schrödinger-cat states without postselection

    NASA Astrophysics Data System (ADS)

    Weigand, Daniel J.; Terhal, Barbara M.

    2018-02-01

    Grid (or comb) states are an interesting class of bosonic states introduced by Gottesman, Kitaev, and Preskill [D. Gottesman, A. Kitaev, and J. Preskill, Phys. Rev. A 64, 012310 (2001), 10.1103/PhysRevA.64.012310] to encode a qubit into an oscillator. A method to generate or "breed" a grid state from Schrödinger cat states using beam splitters and homodyne measurements is known [H. M. Vasconcelos, L. Sanz, and S. Glancy, Opt. Lett. 35, 3261 (2010), 10.1364/OL.35.003261], but this method requires postselection. In this paper we show how postprocessing of the measurement data can be used to entirely remove the need for postselection, making the scheme much more viable. We bound the asymptotic behavior of the breeding procedure and demonstrate the efficacy of the method numerically.

  7. Residual stress measurements via neutron diffraction of additive manufactured stainless steel 17-4 PH.

    PubMed

    Masoomi, Mohammad; Shamsaei, Nima; Winholtz, Robert A; Milner, Justin L; Gnäupel-Herold, Thomas; Elwany, Alaa; Mahmoudi, Mohamad; Thompson, Scott M

    2017-08-01

    Neutron diffraction was employed to measure internal residual stresses at various locations along stainless steel (SS) 17-4 PH specimens additively manufactured via laser-powder bed fusion (L-PBF). Of these specimens, two were rods (diameter = 8 mm, length = 80 mm) built vertically upward and one a parallelepiped (8 × 80 × 9 mm³) built with its longest edge parallel to the ground. One rod and the parallelepiped were left in their as-built condition, while the other rod was heat treated. The data presented provide insight into the microstructural characteristics of typical L-PBF SS 17-4 PH specimens and their dependence on build orientation and post-processing procedures such as heat treatment. Data have been deposited in the Data in Brief Dataverse repository (doi:10.7910/DVN/T41S3V).

  8. The VLITE Post-Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.

    2018-01-01

    A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE) on the Karl G. Jansky Very Large Array (VLA). In contrast to other radio sky surveys, the commensal observing mode of VLITE results in varying depths, sensitivities, and spatial resolutions across the sky based on the configuration of the VLA, location on the sky, and time on source specified by the primary observer for their independent science objectives. Therefore, previously developed tools and methods for generating source catalogs and survey statistics are not always appropriate for VLITE's diverse and growing set of data. A raw catalog of point sources extracted from every VLITE image will be created from source fit parameters stored in a queryable database. Point sources will be measured using the Python Blob Detector and Source Finder software (PyBDSF; Mohan & Rafferty 2015). Sources in the raw catalog will be associated with previous VLITE detections in a resolution- and sensitivity-dependent manner, and cross-matched to other radio sky surveys to aid in the detection of transient and variable sources. Final data products will include separate, tiered point source catalogs grouped by sensitivity limit and spatial resolution.
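
    The cross-matching step mentioned above can be sketched with astropy's nearest-neighbour sky matching. The coordinates, the tiny example catalogs and the 20 arcsec match radius are assumptions for illustration, not the pipeline's actual resolution-dependent association rules.

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    vlite = SkyCoord(ra=np.array([150.10, 150.45]) * u.deg,
                     dec=np.array([2.20, 2.31]) * u.deg)
    other = SkyCoord(ra=np.array([150.101, 150.80, 150.449]) * u.deg,
                     dec=np.array([2.201, 2.50, 2.309]) * u.deg)

    idx, sep2d, _ = vlite.match_to_catalog_sky(other)    # nearest catalog source per VLITE source
    matched = sep2d < 20 * u.arcsec                      # illustrative match radius
    for i, (j, s, ok) in enumerate(zip(idx, sep2d.arcsec, matched)):
        print(f"VLITE source {i}: nearest catalog source {j}, sep = {s:.1f} arcsec, matched = {ok}")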

  9. Non-uniformly weighted sampling for faster localized two-dimensional correlated spectroscopy of the brain in vivo

    NASA Astrophysics Data System (ADS)

    Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish

    2017-04-01

    Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D-MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated 25% shorter acquisition time while maintaining comparable SNR in humans (+0.3%) and phantom studies (+6.0%) compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY, by facilitating a reduction in scan time without affecting signal quality.

  10. Video enhancement method with color-protection post-processing

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Kwak, Youngshin

    2015-01-01

    The current study is aimed to propose a post-processing method for video enhancement by adopting a color-protection technique. The color-protection intends to attenuate perceptible artifacts due to over-enhancements in visually sensitive image regions such as low-chroma colors, including skin and gray objects. In addition, reducing the loss in color texture caused by the out-of-color-gamut signals is also taken into account. Consequently, color reproducibility of video sequences could be remarkably enhanced while the undesirable visual exaggerations are minimized.

  11. Development of a Post-Processing Algorithm for Accurate Human Skull Profile Extraction via Ultrasonic Phased Arrays

    NASA Astrophysics Data System (ADS)

    Al-Ansary, Mariam Luay Y.

    Ultrasound imaging has been favored by clinicians for its safety, affordability, accessibility, and speed compared to other imaging modalities. However, the trade-offs to these benefits are a relatively lower image quality and interpretability, which can be addressed by, for example, post-processing methods. One particularly difficult imaging case is associated with the presence of a barrier, such as a human skull, with significantly different acoustical properties than the brain tissue as the target medium. Some methods have been proposed in the literature to account for this structure if the skull's geometry is known. Measuring the skull's geometry is therefore an important task that requires attention. In this work, a new edge detection method for accurate human skull profile extraction via post-processing of ultrasonic A-Scans is introduced. This method, referred to as the Selective Echo Extraction (SEE) algorithm, processes each A-Scan separately and determines the outermost and innermost boundaries of the skull by means of adaptive filtering. The method can also be used to determine the average attenuation coefficient of the skull. When applied to simulated B-Mode images of the skull profile, promising results were obtained. The profiles obtained from the proposed process in simulations were found to be within 0.15λ ± 0.11λ, or 0.09 ± 0.07 mm, of the actual profiles. Experiments were also performed to test SEE on skull-mimicking phantoms with major acoustical properties similar to those of the actual human skull. With experimental data, the profiles obtained with the proposed process were within 0.32λ ± 0.25λ, or 0.19 ± 0.15 mm, of the actual profile.

  12. How well can online GPS PPP post-processing services be used to establish geodetic survey control networks?

    NASA Astrophysics Data System (ADS)

    Ebner, R.; Featherstone, W. E.

    2008-09-01

    Establishing geodetic control networks for subsequent surveys can be a costly business, even when using GPS. Multiple stations should be occupied simultaneously and post-processed with scientific software. However, the free availability of online GPS precise point positioning (PPP) post-processing services offers the opportunity to establish a whole geodetic control network with just one dual-frequency receiver and one field crew. To test this idea, we compared coordinates from a moderate-sized (~550 km by ~440 km) geodetic network of 46 points over part of south-western Western Australia, which were processed both with the Bernese v5 scientific software and with the CSRS (Canadian Spatial Reference System) PPP free online service. After rejection of five stations where the antenna type was not recognised by CSRS, the PPP solutions agreed on average with the Bernese solutions to 3.3 mm in east, 4.8 mm in north and 11.8 mm in height. The average standard deviations of the Bernese solutions were 1.0 mm in east, 1.2 mm in north and 6.2 mm in height, whereas for CSRS they were 3.9 mm in east, 1.9 mm in north and 7.8 mm in height, reflecting the inherently lower precision of PPP. However, at the 99% confidence level, only one CSRS solution was statistically different to the Bernese solution in the north component, due to a data interruption at that site. Nevertheless, PPP can still be used to establish geodetic survey control, albeit with a slightly lower quality because of the larger standard deviations. This approach may be of particular benefit in developing countries or remote regions, where geodetic infrastructure is sparse and would not normally be established without this approach.

  13. Knowledge-guided golf course detection using a convolutional neural network fine-tuned on temporally augmented data

    NASA Astrophysics Data System (ADS)

    Chen, Jingbo; Wang, Chengyi; Yue, Anzhi; Chen, Jiansheng; He, Dongxu; Zhang, Xiuyan

    2017-10-01

    The tremendous success of deep learning models such as convolutional neural networks (CNNs) in computer vision provides a methodology for similar problems in the field of remote sensing. Although research on repurposing pretrained CNNs for remote sensing tasks is emerging, the scarcity of labeled samples and the complexity of remote sensing imagery still pose challenges. We developed a knowledge-guided golf course detection approach using a CNN fine-tuned on temporally augmented data. The proposed approach combines knowledge-driven region proposal, data-driven detection based on a CNN, and knowledge-driven postprocessing. To confront data complexity, knowledge-derived co-occurrence, composition, and area-based rules are applied sequentially to propose candidate golf regions. To confront sample scarcity, we employed data augmentation in the temporal domain, which extracts samples from multitemporal images. The augmented samples were then used to fine-tune a pretrained CNN for golf course detection. Finally, commission error was further suppressed by postprocessing. Experiments conducted on GF-1 imagery demonstrate the effectiveness of the proposed approach.
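
    The fine-tuning step can be pictured with the following minimal PyTorch sketch (an illustrative stand-in, not the authors' code; the folder layout, the two-class golf/non-golf setup, and torchvision ≥ 0.13 are assumptions). A pretrained backbone is frozen, its classification head is replaced, and only the new head is trained on patches cut from several acquisition dates of the same locations.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, datasets, transforms

# Patches extracted from multitemporal imagery; hypothetical folder layout:
#   patches/golf/*.png, patches/other/*.png  (each site sampled at several dates)
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("patches", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained backbone; freeze all layers, replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # golf vs. non-golf

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```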

  14. National Water Model: Providing the Nation with Actionable Water Intelligence

    NASA Astrophysics Data System (ADS)

    Aggett, G. R.; Bates, B.

    2017-12-01

    The National Water Model (NWM) provides national, street-level detail of water movement through time and space. Operating hourly, the model produces a flood of information that offers enormous benefits for water resource management, natural disaster preparedness, and the protection of life and property. The Geo-Intelligence Division at the NOAA National Water Center supplies forecasters and decision-makers with timely, actionable water intelligence through the processing of billions of NWM data points every hour. These datasets include current streamflow estimates, short- and medium-range streamflow forecasts, and many other ancillary datasets. The sheer volume of NWM output is too large for direct human comprehension. It is therefore necessary to post-process and filter the model data and to ingest it into visualization web apps that use cartographic techniques to draw attention to the areas of highest urgency. This poster illustrates the NWM output post-processing and cartographic visualization techniques being developed and employed by the Geo-Intelligence Division at the NOAA National Water Center to provide national, actionable water intelligence.

  15. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. In two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80-msec window. During the third pass (based on artificial intelligence techniques), the remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm decreased the error rate from 21%, obtained with the original time-domain pitch tracker, to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room, errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency microstructure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
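
    A minimal sketch of the outlier-detection pass might look like the following (hypothetical NumPy example; the 80-ms window is taken from the abstract, while the median-absolute-deviation rule is a stand-in for the distribution test described above). Pitch values that deviate strongly from the local distribution are marked invalid so a later pass can re-estimate them from the waveform.

```python
import numpy as np

def flag_pitch_outliers(f0, frame_rate_hz=100.0, window_ms=80.0, k=4.0):
    """Return a boolean mask of suspect F0 values (True = outlier).

    f0 : 1-D array of pitch estimates in Hz, one per analysis frame
         (unvoiced frames encoded as 0 are ignored).
    """
    half = max(1, int(round(window_ms / 1000.0 * frame_rate_hz / 2)))
    outlier = np.zeros(f0.shape, dtype=bool)
    for i in range(len(f0)):
        if f0[i] <= 0:
            continue
        win = f0[max(0, i - half):i + half + 1]
        win = win[win > 0]                      # voiced frames only
        med = np.median(win)
        mad = np.median(np.abs(win - med)) + 1e-9
        outlier[i] = np.abs(f0[i] - med) > k * mad
    return outlier

# Example: a pitch-halving error at frame 5 is flagged.
track = np.array([120, 121, 119, 122, 120, 60, 121, 120, 119, 118], float)
print(flag_pitch_outliers(track))
```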

  16. Studying effects of non-equilibrium radiative transfer via HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holladay, Daniel

    This report presents slides on Ph.D. Research Goals; Local Thermodynamic Equilibrium (LTE) Implications; Calculating an Opacity; Opacity: Pictographic Representation; Opacity: Pictographic Representation; Opacity: Pictographic Representation; Collisional Radiative Modeling; Radiative and Collisional Excitation; Photo and Electron Impact Ionization; Autoionization; The Rate Matrix; Example: Total Photoionization rate; The Rate Coefficients; inlinlte version 1.1; inlinlte: Verification; New capabilities: Rate Matrix – Flexibility; Memory Option Comparison; Improvements over previous DCA solver; Inter- and intra-node load balancing; Load Balance – Full Picture; Load Balance – Full Picture; Load Balance – Internode; Load Balance – Scaling; Description; Performance; xRAGE Simulation; Post-process @ 2hr; Post-process @ 4hr; Post-process @ 8hr; Takeaways; Performance for 1 realization; Motivation for QOI; Multigroup Er; Transport and NLTE large effects (1mm, 1keV); Transport large effect, NLTE lesser (1mm, 750eV); Blastwave Diagnostici – Description & Performance; Temperature Comparison; NLTE has effect on dynamics at wall; NLTE has lesser effect in the foam; Global Takeaways; The end.

  17. A Novel Approach to Enhance the Mechanical Strength and Electrical and Thermal Conductivity of Cu-GNP Nanocomposites

    NASA Astrophysics Data System (ADS)

    Saboori, Abdollah; Pavese, Matteo; Badini, Claudio; Fino, Paolo

    2018-01-01

    Copper/graphene nanoplatelet (GNP) nanocomposites were produced by a wet mixing method followed by a classical powder metallurgy technique. A qualitative evaluation of the structure of graphene after mixing indicated that wet mixing is an appropriate dispersion method. Thereafter, the effects of two post-processing techniques, repressing-annealing and hot isostatic pressing (HIP), on the density, interfacial bonding, hardness, and thermal and electrical conductivity of the nanocomposites were analyzed. Density evaluations showed that the relative density of the specimens increased after the post-processing steps, such that almost full densification was achieved after HIPing. The Vickers hardness of the specimens increased considerably after the post-processing treatments. The thermal conductivity of pure copper was very low for the as-sintered samples, which contained 2 to 3 pct porosity, and increased considerably to a maximum for the HIPed samples, which contained only 0.1 to 0.2 pct porosity. Electrical conductivity measurements showed that increasing the graphene content decreased the electrical conductivity.

  18. Unification of color postprocessing techniques for 3-dimensional computational mechanics

    NASA Technical Reports Server (NTRS)

    Bailey, Bruce Charles

    1985-01-01

    To facilitate the understanding of complex three-dimensional numerical models, advanced interactive color postprocessing techniques are introduced. These techniques are sufficiently flexible that postprocessing difficulties arising from model size, geometric complexity, response variation, and analysis type can be adequately overcome. Finite element, finite difference, and boundary element models may be evaluated with the prototype postprocessor. Elements may be removed from parent models to be studied as independent subobjects. Discontinuous responses may be contoured, including responses that become singular, and nonlinear color scales may be input by the user to enhance the contouring operation. Hit testing can be performed to extract precise geometric, response, mesh, or material information from the database. In addition, stress intensity factors may be contoured along the crack front of a fracture model. Stepwise analyses can be studied, and the user can recontour responses repeatedly, as if paging through the response sets. As a system, these tools allow effective interpretation of complex analysis results.

  19. Efficient bit sifting scheme of post-processing in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong

    2015-10-01

    Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has an essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, whose core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme approaches the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means it can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement in the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
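
    The idea of replacing an explicit per-pulse detection flag with a compact lossless encoding can be illustrated with the following sketch (an illustrative gap/run-length coder in Python, not the specific source-coding algorithm of the paper). Because detections are sparse, encoding the gaps between detected pulse indices with a variable-length integer code needs far less classical communication than sending one flag per pulse.

```python
def encode_gaps(detected_indices):
    """Losslessly encode sorted detection indices as variable-length gap codes.

    Each gap is emitted in 7-bit groups, most significant group first, with the
    top bit of every byte used as a continuation flag (LEB128-style).
    """
    out = bytearray()
    prev = -1
    for idx in detected_indices:
        gap = idx - prev
        prev = idx
        groups = []
        while True:
            groups.append(gap & 0x7F)
            gap >>= 7
            if gap == 0:
                break
        for g in reversed(groups[1:]):
            out.append(0x80 | g)       # continuation bytes
        out.append(groups[0])          # final byte of this gap
    return bytes(out)

def decode_gaps(data):
    """Invert encode_gaps, recovering the original detection indices."""
    indices, prev, gap = [], -1, 0
    for byte in data:
        gap = (gap << 7) | (byte & 0x7F)
        if not byte & 0x80:            # last byte of this gap
            prev += gap
            indices.append(prev)
            gap = 0
    return indices

detections = [3, 17, 18, 250, 100000]
blob = encode_gaps(detections)
assert decode_gaps(blob) == detections
print(len(blob), "bytes instead of a 100001-bit detection bitmap")
```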

  20. CFD Process Pre- and Post-processing Automation in Support of Space Propulsion

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.

    2003-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and analysis of the major components used for space propulsion. In an attempt to standardize and improve the CFD process, a series of automated tools has been developed. Through the use of these automated tools, the application of CFD to the design cycle has been improved and streamlined. This paper presents a series of applications in which deficiencies in the CFD process were identified and corrected through the development of automated tools.

  1. NASA LeRC/Akron University Graduate Cooperative Fellowship Program and Graduate Student Researchers Program

    NASA Technical Reports Server (NTRS)

    Fertis, D. G.; Simon, A. L.

    1981-01-01

    The requisite methodology is developed to solve linear and nonlinear problems associated with the static and dynamic analysis of rotating machinery, its static and dynamic behavior, and the interaction between the rotating and nonrotating parts of an engine. Linear and nonlinear structural engine problems are investigated by developing solution strategies and interactive computational methods whereby the user and computer can communicate directly in making analysis decisions. Representative examples include modifying structural models, changing material parameters, selecting analysis options, and coupling with interactive graphical displays for pre- and postprocessing capability.

  2. Streamflow forecasts from WRF precipitation for flood early warning in mountain tropical areas

    NASA Astrophysics Data System (ADS)

    Rogelis, María Carolina; Werner, Micha

    2018-02-01

    Numerical weather prediction (NWP) models are fundamental to extending forecast lead times beyond the concentration time of a watershed. Particularly for flash flood forecasting in tropical mountainous watersheds, forecast precipitation is required to provide timely warnings. This paper aims to assess the potential of NWP for flood early warning purposes, and the possible improvement that bias correction can provide, in a tropical mountainous area. The paper focuses on the comparison of streamflows obtained from the post-processed precipitation forecasts, particularly the comparison of ensemble forecasts and their potential to provide skilful flood forecasts. The Weather Research and Forecasting (WRF) model is used to produce precipitation forecasts that are post-processed and used to drive a hydrologic model. Discharge forecasts obtained from the hydrological model are used to assess the skill of the WRF model. The results show that post-processed WRF precipitation adds value to the flood early warning system when compared to zero-precipitation forecasts, although the precipitation forecast used in this analysis showed little added value when compared to climatology. However, the reduction of biases obtained from the post-processed ensembles shows the potential of this method and model to provide usable precipitation forecasts in tropical mountainous watersheds. The need for a more detailed evaluation of the WRF model in the study area is highlighted, particularly the identification of the most suitable parameterisation, due to the inability of the model to adequately represent the convective precipitation found in the study area.
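
    One common way to post-process NWP precipitation, empirical quantile mapping, can be sketched as follows (an illustrative NumPy example; the abstract does not specify this exact correction, so treat it as a stand-in for the bias-correction step). Forecast values are mapped onto the observed climatological distribution so that systematic over- or under-forecasting is reduced before the hydrological model is driven.

```python
import numpy as np

def quantile_map(forecast, hist_forecast, hist_obs):
    """Empirical quantile mapping of precipitation forecasts.

    forecast      : values to correct (mm)
    hist_forecast : historical forecasts at the same site (training period)
    hist_obs      : observed precipitation for the same training period
    """
    # Quantile of each new forecast within the historical forecast distribution.
    q = np.searchsorted(np.sort(hist_forecast), forecast) / float(len(hist_forecast))
    q = np.clip(q, 0.0, 1.0)
    # Map that quantile onto the observed distribution.
    return np.quantile(hist_obs, q)

rng = np.random.default_rng(1)
hist_fc = rng.gamma(shape=0.8, scale=6.0, size=1000)   # wet-biased model climate
hist_ob = rng.gamma(shape=0.8, scale=4.0, size=1000)   # observed climate
new_fc = np.array([0.0, 2.0, 10.0, 40.0])
print(quantile_map(new_fc, hist_fc, hist_ob))
```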

  3. Time-Resolved Influences of Functional DAT1 and COMT Variants on Visual Perception and Post-Processing

    PubMed Central

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems. PMID:22844499

  4. Benchmarking surface selective vacuum ultraviolet and thermal postprocessing of thermoplastics for ultrasmooth 3-D-printed micro-optics

    NASA Astrophysics Data System (ADS)

    Kirchner, Robert; Chidambaram, Nachiappan; Schift, Helmut

    2018-04-01

    State-of-the-art polymeric refractive micro-optics simultaneously require an ultrasmooth three-dimensional (3-D) surface and a precise geometry for excellent optical performance with minimal stray light. In earlier work, we established a surface finishing process for thermoplastic polymer master structures that is only effective on the surface and does not affect the designed optical geometry, thus enabling polishing without touching. With this process, the high-curvature corners of a 50-μm-tall optical diffuser device were maintained while the surface roughness was reduced to about 10-nm root mean square. For this, 3-D master structures were first fabricated by direct-write laser lithography with two-photon polymerization. The master structures were replicated into poly(methyl methacrylate) through a poly(dimethyl siloxane) intermediate replication stamp. Finally, all structures were surface-polished by selective high-energy photon exposure and thermal postprocessing. In this work, we focus on the comparison of surface smoothening using either postprocessing or dedicated direct-writing strategies. For this comparison, strategies for modifying the exposed voxel size and the writing discretization, which is the primary source of roughness, were tested by sweeping the laser exposure dose for two different resist materials and objectives. In conclusion, the postprocessing smoothening resulted in a lower roughness than a dedicated direct-writing strategy, even when 50-nm vertical discretization steps were used, and still enabled 10 times shorter writing times.

  5. Advanced non-contrasted computed tomography post-processing by CT-Calculometry (CT-CM) outperforms established predictors for the outcome of shock wave lithotripsy.

    PubMed

    Langenauer, J; Betschart, P; Hechelhammer, L; Güsewell, S; Schmid, H P; Engeler, D S; Abt, D; Zumstein, V

    2018-05-29

    To evaluate the predictive value of advanced non-contrasted computed tomography (NCCT) post-processing using novel CT-calculometry (CT-CM) parameters compared to established predictors of the success of shock wave lithotripsy (SWL) for urinary calculi, NCCT post-processing was retrospectively performed in 312 patients suffering from upper-tract urinary calculi who were treated by SWL. Established predictors such as skin-to-stone distance, body mass index, stone diameter and mean stone attenuation value were assessed. Precise stone size and shape metrics, 3-D greyscale measurements, and homogeneity parameters such as skewness and kurtosis were analysed using CT-CM. Predictive values for SWL outcome were analysed using logistic regression and receiver operating characteristic (ROC) statistics. The overall success rate of SWL (stone disintegration and no re-intervention needed) was 59% (184 patients). CT-CM metrics mainly outperformed established predictors. According to the ROC analyses, stone volume and surface area performed better than the established stone diameter, the mean 3-D attenuation value was a stronger predictor than the established mean attenuation value, and the parameters skewness and kurtosis performed better than the recently emerged variation coefficient of stone density. Moreover, prediction of SWL outcome with 80% probability of being correct would be possible in a clearly higher number of patients (up to fivefold) using CT-CM-derived parameters. Advanced NCCT post-processing by CT-CM provides novel parameters that seem to outperform established predictors of SWL response. Implementation of these parameters into clinical routine might reduce SWL failure rates.

  6. A fast automatic target detection method for detecting ships in infrared scenes

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas such as defense, security, and border surveillance. For anti-ship missiles, a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the background-suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage is introduced that uses two different methods. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected using the Hough transform, and detections located above the waterline by more than a small margin are rejected. After the post-processing stage, undesired holes still remain, which can cause a single object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
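
    A simplified version of the first detection stage plus the size-based rejection could look like the sketch below (illustrative scikit-image/NumPy code, not the authors' implementation; the dome height, thresholding rule, and minimum target area are assumptions). Morphological grayscale reconstruction suppresses the smooth background, Otsu thresholding segments the bright residuals, and connected components are filtered by size.

```python
import numpy as np
from skimage.morphology import reconstruction
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_candidates(img, h=0.1, min_area=30):
    """Return bounding boxes of bright, sufficiently large blobs in a float image."""
    img = img.astype(float)
    # h-dome transform: grayscale reconstruction of (img - h) under img, then
    # subtraction, which keeps only local bright structures higher than h.
    domes = img - reconstruction(img - h, img, method='dilation')

    mask = domes > threshold_otsu(domes)          # automatic thresholding
    boxes = []
    for region in regionprops(label(mask)):       # connected component analysis
        if region.area >= min_area:               # size-based outlier rejection
            boxes.append(region.bbox)             # (min_row, min_col, max_row, max_col)
    return boxes

# Toy scene: dark sea with one bright ship-like blob and a tiny hot pixel.
scene = np.zeros((120, 160)) + 0.05
scene[70:80, 40:90] = 0.9      # ship
scene[10, 10] = 1.0            # small outlier, removed by the area filter
print(detect_candidates(scene))
```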

  7. Time-resolved influences of functional DAT1 and COMT variants on visual perception and post-processing.

    PubMed

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.

  8. Quality Evaluation of Zirconium Dioxide Frameworks Produced in Five Dental Laboratories from Different Countries.

    PubMed

    Schneebeli, Esther; Brägger, Urs; Scherrer, Susanne S; Keller, Andrea; Wittneben, Julia G; Hicklin, Stefan P

    2017-07-01

    The aim of this study was to assess and compare quality as well as economic aspects of CAD/CAM high-strength ceramic three-unit FDP frameworks ordered from dental laboratories located in emerging countries and in Switzerland. The master casts of six cases were sent to five dental laboratories located in Thailand (Bangkok), China (Peking and Shenzhen), Turkey (Izmir), and Switzerland (Bern). Each laboratory used a different CAD/CAM system. The clinical fit of the frameworks was qualitatively assessed, and the thickness of the framework material and the connector height, width, and diameter were evaluated using a measuring sensor. The internal fit of the frameworks was analysed by means of a replica technique, whereas the inner and outer surfaces of the frameworks were evaluated for traces of postprocessing and damage to the intaglio surface with light and electron microscopes. Groups (dental laboratories and cases) were compared for statistically significant differences using Mann-Whitney U-tests after Bonferroni correction. An acceptable clinical fit was found at 97.9% of the margins produced in laboratory E, 87.5% in B, 93.7% in C, 79.2% in A, and 62.5% in D. The mean framework thicknesses were not statistically significantly different for the premolar regions; however, for the molar area, 4 of the 8 evaluated sites were statistically significantly different. Circumference, surface, and width of the connectors produced in the different laboratories were statistically significantly different, but not the height. There were great differences in the designs of the pontic and connector regions, and some of the frameworks would not be recommended for clinical use. Traces of heavy postprocessing were found in frameworks from some of the laboratories. The prices per framework ranged from US$177 to US$896. By ordering laboratory work in developing countries, a considerable price reduction was obtained compared to the price level in Switzerland. Despite the use of standardized CAD/CAM chains of production in all laboratories, a large variability in quality aspects, such as clinical marginal fit, connector and pontic design, as well as postprocessing traces, was noted. Recommended sound handling of postprocessing was not applied in all laboratories. Dentists should be aware of the true and factitious advantages of CAD/CAM production chains and not lose control over the process. © 2015 by the American College of Prosthodontists.

  9. Development of efficient and cost-effective distributed hydrological modeling tool MWEasyDHM based on open-source MapWindow GIS

    NASA Astrophysics Data System (ADS)

    Lei, Xiaohui; Wang, Yuhui; Liao, Weihong; Jiang, Yunzhong; Tian, Yu; Wang, Hao

    2011-09-01

    Many regions in China are still threatened by frequent floods and water resource shortages. Consequently, reproducing and predicting the hydrological processes in watersheds is a difficult but unavoidable task for reducing the risk of damage and loss. It is therefore necessary to develop an efficient and cost-effective hydrological tool in China, as many areas need to be modeled. Currently, well-developed hydrological tools such as Mike SHE and ArcSWAT (the soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability both in land cover and in soil type. However, adopting such commercial tools in a large developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that may make it difficult for the user to carry out a simulation, thus lowering the efficiency of the modeling process. Besides, commercial hydrological models usually cannot be modified or improved to suit some of the special hydrological conditions in China. Some other hydrological models are open source but are integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed based on the open-source MapWindow GIS; its purpose is to establish the first open-source GIS-based distributed hydrological modeling tool in China by integrating modules for preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules, and, more importantly, an efficient and cost-effective hydrological simulation tool. The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the fully open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module is made up of three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module contains parallel computation, real-time computation, and visualization. The postprocessing module includes model calibration and spatial visualization of model results in tabular form and on spatial grids. MWEasyDHM makes efficient modeling and calibration of EasyDHM possible and promises further development of cost-effective applications in various watersheds.

  10. First principles pulse pile-up balance equation and fast deterministic solution

    NASA Astrophysics Data System (ADS)

    Sabbatucci, Lorenzo; Fernández, Jorge E.

    2017-08-01

    Pulse pile-up (PPU) is an ever-present effect that distorts the spectrum measured with radiation detectors and worsens with increasing emission rate of the radiation source. It is fully ascribable to the pulse-handling circuitry of the detector and is not comprised in the detector response function, which is well explained by a physical model. PPU changes both the number and the height of the recorded pulses, which are related, respectively, to the number of detected particles and to their energy. In the present work, a first-principles balance equation for second-order PPU is derived to obtain a post-processing correction to apply to X-ray measurements. The balance equation is solved for the particular case of a rectangular pulse shape using a deterministic iterative procedure whose convergence is shown. The proposed method, deterministic rectangular PPU (DRPPU), requires a minimum amount of information and, as an example, is applied to a solid-state Si detector with active or off-line PPU suppression circuitry. A comparison shows that the results obtained with this fast and simple approach are comparable to those from the more sophisticated procedure using precise detector pulse shapes.
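
    The flavour of such a deterministic correction can be conveyed by the following sketch (an illustrative fixed-point iteration under a generic second-order pile-up model, not the balance equation derived in the paper; the pile-up probability `p` is an assumed input). The measured spectrum is modelled as a mixture of single events and two-event sums, and the true spectrum is recovered iteratively.

```python
import numpy as np

def correct_pileup(measured, p, n_iter=50):
    """Fixed-point correction of a histogram for second-order pile-up.

    measured : counts per energy bin (1-D array)
    p        : probability that a recorded event is a two-pulse sum
    Model: measured ≈ (1 - p) * s + p * (s convolved with s), s of unit area.
    """
    m = measured / measured.sum()
    s = m.copy()                                   # initial guess: no pile-up
    for _ in range(n_iter):
        pile = np.convolve(s, s)[:len(s)]          # two-event sum spectrum
        pile /= pile.sum()
        s = (m - p * pile) / (1.0 - p)             # solve the mixture for s
        s = np.clip(s, 0.0, None)
        s /= s.sum()
    return s * measured.sum()                      # back to counts

# Toy example: a single line at bin 20 plus its pile-up sum peak at bin 40.
true = np.zeros(100); true[20] = 1.0
meas = 0.9 * true + 0.1 * np.convolve(true, true)[:100]
print(np.argmax(correct_pileup(meas * 1e5, p=0.1)))   # -> 20
```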

  11. A two-step method for retrieving the longitudinal profile of an electron bunch from its coherent radiation

    NASA Astrophysics Data System (ADS)

    Pelliccia, Daniele; Sen, Tanaji

    2014-11-01

    The coherent radiation emitted by an electron bunch provides a diagnostic signal that can be used to estimate its longitudinal distribution. Commonly, only the amplitude of the intensity spectrum can be measured, and the associated phase must be calculated to obtain the bunch profile. Very recently, an iterative method was proposed to retrieve this phase. However, ambiguities associated with the non-uniqueness of the solution are always present in the phase retrieval procedure. Here we present a method to overcome the ambiguity problem by first performing multiple independent runs of the phase retrieval procedure and then sorting the good solutions by means of cross-correlation analysis. Results obtained with simulated bunches of various shapes and with experimentally measured spectra are presented, discussed, and compared with the established Kramers-Kronig method. It is shown that even when the effect of the ambiguities is strong, as is the case for a double peak in the profile, the cross-correlation post-processing is able to filter out unwanted solutions. We show that, unlike the Kramers-Kronig method, the combined approach presented here is able to faithfully reconstruct complicated bunch profiles.
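
    The cross-correlation sorting step can be sketched as follows (illustrative NumPy code; the retrieval runs themselves are represented by a placeholder list of candidate profiles, and the similarity threshold is an assumption). Each pair of candidate profiles is compared through its maximum normalised cross-correlation, and the runs that agree with the majority are kept and averaged.

```python
import numpy as np

def max_norm_xcorr(a, b):
    """Maximum of the normalised cross-correlation over all relative shifts."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.correlate(a, b, mode='full').max() / len(a)

def select_consistent(profiles, threshold=0.9):
    """Keep retrieval runs that correlate well with most other runs."""
    profiles = np.asarray(profiles, dtype=float)
    n = len(profiles)
    sim = np.array([[max_norm_xcorr(profiles[i], profiles[j]) for j in range(n)]
                    for i in range(n)])
    score = sim.mean(axis=1)                 # average agreement of each run
    keep = score >= threshold * score.max()  # discard poorly agreeing runs
    return profiles[keep].mean(axis=0), keep

# Placeholder candidates: several similar double-peak profiles and one outlier.
x = np.linspace(0, 1, 200)
good = np.exp(-(x - 0.35)**2 / 0.002) + 0.6 * np.exp(-(x - 0.65)**2 / 0.002)
runs = [good + 0.02 * np.random.default_rng(i).normal(size=x.size) for i in range(5)]
runs.append(np.exp(-(x - 0.5)**2 / 0.02))    # spurious single-peak solution
profile, kept = select_consistent(runs)
print(kept)      # the spurious run is expected to be dropped
```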

  12. Importance of good manufacturing practices in microbiological monitoring in processing human tissues for transplant.

    PubMed

    Pianigiani, Elisa; Ierardi, Francesca; Fimiani, Michele

    2013-12-01

    Skin allografts represent an important therapeutic resource in the treatment of severe skin loss. The risk associated with the application of processed tissues in humans is very low; however, human material always carries the risk of disease transmission. To minimise the risk of contamination of grafts, processing is carried out in clean rooms where air quality is monitored. Procedures and quality control tests are performed to standardise the production process and to guarantee the final product for human use. Since we only validate and distribute aseptic tissues, we conducted a study to determine which types of quality controls for skin processing are the most suitable for detecting processing errors and intercurrent contamination, and for faithfully mapping the process without unduly increasing production costs. Two different methods for quality control were statistically compared using the Fisher exact test. On the basis of the current study, we selected a quality control procedure based on pre- and post-processing tissue controls as well as operator and environmental controls. Evaluation of the predictability of our control methods showed that tissue control was the most reliable method of revealing microbial contamination of grafts. We obtained 100% sensitivity by doubling the tissue controls, while maintaining high specificity (77%).

  13. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI-based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies in k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When eddy current cross terms exist (e.g., in oblique-plane EPI), 2D phase correction is needed to effectively reduce Nyquist artifact. Second, minuscule-motion-induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule-motion-induced aliasing artifact are typically removed sequentially in two stages. Although the two-stage phase correction generally performs well for non-oblique-plane EPI data obtained from a well-calibrated system, the residual artifacts may still be pronounced in oblique-plane EPI data or when eddy current cross terms exist. To address this challenge, here we report a new composite 2D phase correction procedure, which effectively removes Nyquist artifact and minuscule-motion-induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI-based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI and should prove highly valuable for clinical uses and research studies of DWI. PMID:27114342
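
    A bare-bones illustration of applying a 2D (rather than 1D) phase correction to the odd echoes of an EPI k-space matrix is given below (a conceptual NumPy sketch, not the composite procedure of the paper; the phase map is assumed to have been estimated from reference data). The correction is applied in x–ky hybrid space so that it can vary along both the readout and phase-encoding directions.

```python
import numpy as np

def apply_2d_phase_correction(kspace, phase_map):
    """Apply a spatially varying phase correction to odd EPI echoes.

    kspace    : complex array (n_ky, n_x), rows acquired with alternating
                readout polarity (even rows +, odd rows -)
    phase_map : real array (n_ky, n_x) of phase errors (radians) estimated
                from reference scans, varying along both x and ky
    """
    # Go to x-ky hybrid space (inverse FFT along the readout direction only).
    hybrid = np.fft.ifft(np.fft.ifftshift(kspace, axes=1), axis=1)
    # Remove the estimated phase error from the odd (reversed) echoes.
    hybrid[1::2, :] *= np.exp(-1j * phase_map[1::2, :])
    # Back to k-space, then reconstruct the image.
    corrected = np.fft.fftshift(np.fft.fft(hybrid, axis=1), axes=1)
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(corrected)))

# Synthetic example with a smoothly varying 2D phase error on odd echoes.
ny, nx = 64, 64
rng = np.random.default_rng(0)
kspace = rng.normal(size=(ny, nx)) + 1j * rng.normal(size=(ny, nx))
x = np.linspace(-0.5, 0.5, nx)
ky = np.linspace(-0.5, 0.5, ny)[:, None]
phase_map = 0.5 * x + 0.2 * ky * x          # cross term -> needs a 2D correction
print(apply_2d_phase_correction(kspace, phase_map).shape)
```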

  14. Apollo: AN Automatic Procedure to Forecast Transport and Deposition of Tephra

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2007-05-01

    Volcanic ash fallout represents a serious threat to communities around active volcanoes. Reliable short-term predictions constitute valuable support for mitigating the effects of fallout on the surrounding area during an episode of crisis. We present a platform-independent automatic procedure aimed at daily forecasting of volcanic ash dispersal. The procedure builds on a series of programs and interfaces that allow an automatic data/results flow. First, the procedure downloads mesoscale meteorological forecasts for the region and period of interest, filters and converts the data from their native format (typically GRIB files), and sets up the CALMET diagnostic meteorological model to obtain hourly wind fields and micro-meteorological variables on a finer mesh. Second, a 1-D version of the buoyant plume equations assesses the distribution of mass along the eruptive column depending on the obtained wind field and on the conditions at the vent (granulometry, mass flow rate, etc.). All these data are used as input for the ash dispersion model(s). Any model able to handle the physical complexity and the coupling of processes with adequate solution times can be plugged into the system by means of an interface. Currently, the procedure contains the models HAZMAP, TEPHRA and FALL3D, the latter in both serial and parallel versions. Parallelization of FALL3D is done at two levels: one for particle classes and one for the spatial domain. The last step is to post-process the model outcomes to end up with homogeneous maps written to portable-format files. The maps plot relevant quantities such as predicted ground load, expected deposit thickness, or visual and flight-safety concentration thresholds. Several applications are shown as examples.

  15. A diffusion-matched principal component analysis (DM-PCA) based two-channel denoising procedure for high-resolution diffusion-weighted MRI

    PubMed Central

    Chang, Hing-Chiu; Bilgin, Ali; Bernstein, Adam; Trouard, Theodore P.

    2018-01-01

    Over the past several years, significant efforts have been made to improve the spatial resolution of diffusion-weighted imaging (DWI), aiming at better detecting subtle lesions and more reliably resolving white-matter fiber tracts. A major concern with high-resolution DWI is the limited signal-to-noise ratio (SNR), which may significantly offset the advantages of high spatial resolution. Although the SNR of DWI data can be improved by denoising in post-processing, existing denoising procedures may potentially reduce the anatomic resolvability of high-resolution imaging data. Additionally, non-Gaussian noise induced signal bias in low-SNR DWI data may not always be corrected with existing denoising approaches. Here we report an improved denoising procedure, termed diffusion-matched principal component analysis (DM-PCA), which comprises 1) identifying a group of (not necessarily neighboring) voxels that demonstrate very similar magnitude signal variation patterns along the diffusion dimension, 2) correcting low-frequency phase variations in complex-valued DWI data, 3) performing PCA along the diffusion dimension for real- and imaginary-components (in two separate channels) of phase-corrected DWI voxels with matched diffusion properties, 4) suppressing the noisy PCA components in real- and imaginary-components, separately, of phase-corrected DWI data, and 5) combining real- and imaginary-components of denoised DWI data. Our data show that the new two-channel (i.e., for real- and imaginary-components) DM-PCA denoising procedure performs reliably without noticeably compromising anatomic resolvability. Non-Gaussian noise induced signal bias could also be reduced with the new denoising method. The DM-PCA based denoising procedure should prove highly valuable for high-resolution DWI studies in research and clinical uses. PMID:29694400
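
    The core PCA step (steps 3 and 4 of the procedure described above) can be pictured with the following sketch (a simplified single-channel NumPy example; in the actual two-channel procedure the same operation is applied separately to the real and imaginary parts of phase-corrected data, and the voxel grouping and phase correction are assumed to have been done already). Signals from diffusion-matched voxels are stacked, decomposed with an SVD along the diffusion dimension, and the low-variance components are suppressed.

```python
import numpy as np

def pca_denoise(signals, n_keep):
    """Suppress noisy principal components of a (n_voxels, n_diffusion) matrix.

    signals : real-valued array, one row per diffusion-matched voxel
    n_keep  : number of principal components to retain along the diffusion axis
    """
    mean = signals.mean(axis=0, keepdims=True)
    centered = signals - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s[n_keep:] = 0.0                       # discard noisy components
    return u @ np.diag(s) @ vt + mean

# Toy example: 50 matched voxels, 30 diffusion volumes, rank-2 signal + noise.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 30))
clean = rng.normal(size=(50, 2)) @ basis
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = pca_denoise(noisy, n_keep=2)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
```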

  16. Interim results of quality-control sampling of surface water for the Upper Colorado River National Water-Quality Assessment Study Unit, water years 1995-96

    USGS Publications Warehouse

    Spahr, N.E.; Boulger, R.W.

    1997-01-01

    Quality-control samples provide part of the information needed to estimate the bias and variability that result from sample collection, processing, and analysis. Quality-control samples of surface water collected for the Upper Colorado River National Water-Quality Assessment study unit for water years 1995–96 are presented and analyzed in this report. The types of quality-control samples collected include pre-processing split replicates, concurrent replicates, sequential replicates, post-processing split replicates, and field blanks. Analysis of the pre-processing split replicates, concurrent replicates, sequential replicates, and post-processing split replicates is based on differences between analytical results of the environmental samples and analytical results of the quality-control samples. Results of these comparisons indicate that variability introduced by sample collection, processing, and handling is low and will not affect interpretation of the environmental data. The differences for most water-quality constituents are on the order of plus or minus 1 or 2 lowest rounding units. A lowest rounding unit is equivalent to the magnitude of the least significant figure reported for analytical results. The use of lowest rounding units avoids some of the difficulty in comparing differences between pairs of samples when concentrations span orders of magnitude, and it provides a measure of the practical significance of the effect of variability. Analysis of field-blank quality-control samples indicates that, with the exception of chloride and silica, no systematic contamination of samples is apparent. Chloride contamination probably was the result of incomplete rinsing of the dilute cleaning solution from the outlet ports of the decaport sample splitter. Silica contamination seems to have been introduced by the blank water. Sampling and processing procedures for water year 1997 have been modified as a result of these analyses.

  17. Enhancing the use of Argos satellite data for home range and long distance migration studies of marine animals.

    PubMed

    Hoenner, Xavier; Whiting, Scott D; Hindell, Mark A; McMahon, Clive R

    2012-01-01

    Accurately quantifying animals' spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals; however, location estimates are characterised by substantial spatial error. State-space models (SSMs) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM-predicted locations, and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. The Argos 68th percentile error was <1 km for LC 1, 2, and 3, while markedly less accurate (>4 km) for LC ≤ 0. The Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student's t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2 ± 2.4 km; mean home range overlap and error ratio = 92.2% and 285.6, respectively). This parsimonious and objective statistical procedure, however, still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data, and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes.

  18. Reducing the Primary Cesarean Birth Rate: A Quality Improvement Project.

    PubMed

    Javernick, Julie A; Dempsey, Amy

    2017-07-01

    Research continues to support vaginal birth as the safest mode of childbirth, but despite this, cesarean birth has become the most common surgical procedure performed on women. The rate has increased 500% since the 1970s without a corresponding improvement in maternal or neonatal outcomes. A Colorado community hospital recognized that its primary cesarean birth rate was higher than national and state benchmark levels. To reduce this rate, the hospital collaborated with its largest maternity care provider group to implement a select number of physiologic birth practices and measure improvement in outcomes. Using a pre- and postprocess measure study design, the quality improvement project team identified and implemented 3 physiologic birth parameters over a 12-month period that have been shown to promote vaginal birth. These included reducing elective induction of labor in women less than 41 weeks' gestation; standardizing triage to admit women at greater than or equal to 4 cm dilation; and increasing the use of intermittent auscultation as opposed to continuous fetal monitoring for fetal surveillance. The team also calculated each obstetrician-gynecologist's primary cesarean birth rate monthly and delivered these rates to the providers. Outcomes showed that the provider group decreased its primary cesarean birth rate from 28.9% to 12.2% in the 12-month postprocess measure period. The 57.8% decrease is statistically significant (odds ratio [OR], 0.345; z = 6.52, P < .001; 95% confidence interval [CI], 0.249-0.479). While this quality improvement project cannot be translated to other settings, promotion of physiologic birth practices, along with audit and feedback, had a statistically significant impact on the primary cesarean birth rate for this provider group and, consequently, on the community hospital where they attend births. © 2017 by the American College of Nurse-Midwives.

  19. Percutaneous Vertebroplasty: Preliminary Experiences with Rotational Acquisitions and 3D Reconstructions for Therapy Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodek-Wuerz, Roman; Martin, Jean-Baptiste; Wilhelm, Kai

    Percutaneous vertebroplasty (PVP) is carried out under fluoroscopic control in most centers. The exclusion of implant leakage and the assessment of implant distribution can be difficult based on two-dimensional radiographic projection images only. We evaluated the feasibility of performing a follow-up examination after PVP with rotational acquisitions and volumetric reconstructions in the angio suite. Twenty consecutive patients underwent standard PVP procedures under fluoroscopic control. Immediate postprocedure evaluation of the implant distribution in the angio suite (BV 3000; Philips, The Netherlands) was performed using rotational acquisitions (typical acquisition parameters included a 17-cm field of view and 200 acquired images over a total angular range of 180°). Postprocessing of the acquired volumetric datasets included multiplanar reconstruction (MPR), maximum intensity projection (MIP), and volume rendering technique (VRT) images that were displayed as two-dimensional slabs or as entire three-dimensional volumes. Image evaluation included lesion and implant assessment, with special attention given to implant leakage. Findings from rotational acquisitions were compared to findings from postinterventional CT. The time to perform and postprocess the rotational acquisitions was less than 10 min in all cases. Assessment of implant distribution after PVP using rotational image acquisition and volumetric reconstruction was possible in all patients. Cement distribution and potential leakage sites were visualized best on MIP images presented as slabs. Of a total of 33 leakages detected with CT, 30 were correctly detected by rotational image acquisition. Rotational image acquisitions and volumetric reconstruction methods provided a fast way to radiographically control the result of PVP in our cases.

  20. Improved associative recall of binary data in volume holographic memories

    NASA Astrophysics Data System (ADS)

    Betzos, George A.; Laisné, Alexandre; Mitkas, Pericles A.

    1999-11-01

    A new technique is presented that improves the results of associative recall in a volume holographic memory system. A background is added to the normal search argument to increase the amount of optical power that is used to reconstruct the reference beams in the crystal. This is combined with post-processing of the captured image of the reference beams. The use of both the background and post-processing greatly improves the results by allowing associative recall using small arguments. In addition, the number of false hits is reduced and misses are virtually eliminated.

  1. Multiple video sequences synchronization during minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Belhaoua, Abdelkrim; Moreau, Johan; Krebs, Alexandre; Waechter, Julien; Radoux, Jean-Pierre; Marescaux, Jacques

    2016-03-01

    Hybrid operating rooms are an important development in the medical ecosystem. They allow the advantages of radiological imaging and surgical tools to be integrated in the same procedure. However, one of the challenges faced by clinical engineers is to support the connectivity and interoperability of medical-electrical point-of-care devices. A system that could enable plug-and-play connectivity and interoperability for medical devices would improve patient safety, save hospitals time and money, and provide data for electronic medical records. In this paper, we propose a hardware platform dedicated to collecting and synchronizing, in real time, multiple videos captured from medical equipment. The final objective is to integrate augmented reality technology into an operating room (OR) in order to assist the surgeon during a minimally invasive operation. To the best of our knowledge, there is no prior work dealing with hardware-based video synchronization for augmented reality applications in the OR. Hardware synchronization methods can embed a temporal value, a so-called timestamp, into each sequence on the fly and require no post-processing, but they require specialized hardware; the design of our hardware, however, is simple and generic. This approach was adopted and implemented in this work, and its performance is evaluated by comparison with state-of-the-art methods.

  2. Pre-processing liquid chromatography/high-resolution mass spectrometry data: extracting pure mass spectra by deconvolution from the invariance of isotopic distribution.

    PubMed

    Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon

    2013-05-15

    Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-to-charge (m/z) channels. This impairment of the mass spectra can have a significant negative influence on further post-processing, such as quantification and identification. A metric derived from knowledge of the errors in isotopic distribution patterns and of the signal quality within a pre-defined mass chromatogram block has been developed to pre-select all informative m/z channels. This procedure results in the clean-up of deconvoluted mass spectra by retaining the intensity counts from m/z channels that originate from a specific compound/molecular ion (for example, the molecular ion, adducts, 13C isotopes, and multiply charged ions) and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It clearly shows advantages compared to other approaches, such as peak picking and de-isotoping, in the sense that all information is retained while non-informative data are removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Investigation of image distortion due to MCP electronic readout misalignment and correction via customized GUI application

    NASA Astrophysics Data System (ADS)

    Vitucci, G.; Minniti, T.; Tremsin, A. S.; Kockelmann, W.; Gorini, G.

    2018-04-01

    The MCP-based neutron counting detector is a novel device that allows high-spatial-resolution, time-resolved neutron radiography and tomography with epithermal, thermal, and cold neutrons. Time resolution is made possible by high readout speeds of ~1200 frames/s, allowing high-resolution event counting at relatively high rates without spatial resolution degradation due to event overlaps. The electronic readout is based on a Timepix sensor, a CMOS pixel readout chip developed at CERN. Currently, a quad Timepix detector geometry is used, with an active format of 28 × 28 mm2 limited by the size of the Timepix quad (2 × 2 chips) readout. Measurements of a set of high-precision micrometer test samples have been performed at the Imaging and Materials Science & Engineering (IMAT) beamline operating at the ISIS spallation neutron source (U.K.). The aim of these experiments was the full characterization of the chip misalignment and of the gaps between the pads in the quad Timepix sensor. Such misalignment causes distortions of the recorded shape of the analyzed sample. We present in this work a post-processing image correction procedure that accounts for and corrects these effects. Results of the correction are discussed and the efficacy of the method is evaluated.

  4. Fetal MRI: A Technical Update with Educational Aspirations

    PubMed Central

    Gholipour, Ali; Estroff, Judith A.; Barnewolt, Carol E.; Robertson, Richard L.; Grant, P. Ellen; Gagoski, Borjan; Warfield, Simon K.; Afacan, Onur; Connolly, Susan A.; Neil, Jeffrey J.; Wolfberg, Adam; Mulkern, Robert V.

    2015-01-01

    Fetal magnetic resonance imaging (MRI) examinations have become well-established procedures at many institutions and can serve as useful adjuncts to ultrasound (US) exams when diagnostic doubts remain after US. Due to fetal motion, however, fetal MRI exams are challenging and require the MR scanner to be used in a somewhat different mode than that employed for more routine clinical studies. Herein we review the techniques most commonly used, and those that are available, for fetal MRI with an emphasis on the physics of the techniques and how to deploy them to improve success rates for fetal MRI exams. By far the most common technique employed is single-shot T2-weighted imaging due to its excellent tissue contrast and relative immunity to fetal motion. Despite the significant challenges involved, however, many of the other techniques commonly employed in conventional neuro- and body MRI such as T1 and T2*-weighted imaging, diffusion and perfusion weighted imaging, as well as spectroscopic methods remain of interest for fetal MR applications. An effort to understand the strengths and limitations of these basic methods within the context of fetal MRI is made in order to optimize their use and facilitate implementation of technical improvements for the further development of fetal MR imaging, both in acquisition and post-processing strategies. PMID:26225129

  5. POCS-based reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE): a general algorithm for reducing motion-related artifacts

    PubMed Central

    Chu, Mei-Lan; Chang, Hing-Chiu; Chung, Hsiao-Wen; Truong, Trong-Kha; Bashir, Mustafa R.; Chen, Nan-kuei

    2014-01-01

    Purpose: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion weighted imaging (DWI). Theory: Images with reduced artifacts are reconstructed with an iterative POCS procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. Methods: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved DWI data corresponding to different k-space trajectories and matrix condition numbers. Results: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. Conclusion: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods. PMID:25394325

  6. Improvements in Low-cost Ultrasonic Measurements of Blood Flow in "by-passes" Using Narrow & Broad Band Transit-time Procedures

    NASA Astrophysics Data System (ADS)

    Ramos, A.; Calas, H.; Diez, L.; Moreno, E.; Prohías, J.; Villar, A.; Carrillo, E.; Jiménez, A.; Pereira, W. C. A.; Von Krüger, M. A.

    Cardiac pathology due to ischemia is an important cause of death, but re-vascularization of the coronary arteries (the by-pass operation) is a useful solution that reduces the associated morbidity and improves patients' quality of life. During these surgeries, the flow in the coronary vessels must be measured using non-invasive ultrasonic methods known as transit-time flow measurement (TTFM), which is currently the most accurate option. TTFM is a common intra-operative tool, in conjunction with classic Doppler velocimetry, to check the quality of these surgical processes for implanting grafts in parallel with the coronary arteries. This work shows important improvements in flow metering achieved in our research laboratories (CSIC, ICIMAF, COPPE) and tested under real surgical conditions at Cardiocentro-HHA, for both narrowband (NB) and broadband (BB) regimes, by applying results of a CYTED multinational project (Ultrasonic & computational systems for cardiovascular diagnostics). Mathematical models and phantoms were created to evaluate flow measurements accurately, under laboratory conditions, before our new electronic designs and low-cost implementations, which improve previous TTFM systems and include analogue detection, acquisition and post-processing, and a portable PC. Both regimes (NB and BB), with complementary performance for different conditions, were considered. Finally, specific software was developed to offer facilities to surgeons during their interventions.

  7. Data harmonization of environmental variables: from simple to general solutions

    NASA Astrophysics Data System (ADS)

    Baume, O.

    2009-04-01

    European data platforms often contain measurements from different regional or national networks. As standards and protocols (e.g., type of measurement devices, sensors or measurement site classification, laboratory analysis and post-processing methods) vary between networks, discontinuities appear when mapping the target variable at an international scale. Standardisation is generally a costly solution and does not allow classical statistical analysis of previously reported values. As an alternative, harmonization should be envisaged as an integrated step in mapping procedures across borders. In this paper, several harmonization solutions developed under the INTAMAP FP6 project are presented. The INTAMAP FP6 project is currently developing an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods to web-based implementations. Harmonization is often considered a pre-processing step in the statistical data analysis workflow. If biases are assessed with little knowledge about the target variable, in particular when no explanatory covariate is integrated, a harmonization procedure along borders or between regionally overlapping networks may be adopted (Skøien et al., 2007). In this case, bias is estimated as the systematic difference between line or local predictions. On the other hand, when covariates can be included in spatial prediction, the harmonization step is integrated in the whole model estimation procedure and is therefore no longer an independent pre-processing step of the automatic mapping process (Baume et al., 2007). In this case, bias factors become integrated parameters of the geostatistical model and are estimated alongside the other model parameters. The harmonization methods developed within the INTAMAP project were first applied within the field of radiation, where the European Radiological Data Exchange Platform (EURDEP) - http://eurdep.jrc.ec.europa.eu/ - has been active for all member states for more than a decade (de Cort and de Vries, 1997). This database contains biases because of the different network processes used in data reporting (Bossew et al., 2007). In a comparison study, monthly averaged gamma dose measurements from eight European countries were harmonized using the methods described above. Baume et al. (2008) showed that both methods yield similar results and can detect and remove bias from the EURDEP database. To broaden the potential of the methods developed within the INTAMAP project, another application example taken from soil science is presented in this paper. The carbon/nitrogen (C/N) ratio of forest soils is one of the best predictors for evaluating soil functions such as those relevant to climate change studies. Although soil samples were analyzed according to a common European laboratory method, Carré et al. (2008) concluded that systematic errors are introduced in the measurements due to calibration issues and instability of the samples. The application of the harmonization procedures showed that bias could be adequately removed, although the procedures have difficulty distinguishing real differences from bias.
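    As a rough illustration of the first harmonization idea above (bias as the systematic difference between one network's values and predictions made from a neighbouring network), the sketch below uses a simple inverse-distance predictor in place of the geostatistical models actually employed in INTAMAP; all names and the IDW choice are assumptions.

    ```python
    import numpy as np

    def idw_predict(x_src, y_src, z_src, x_tgt, y_tgt, power=2.0):
        """Inverse-distance-weighted prediction of z at target locations."""
        d = np.hypot(x_tgt[:, None] - x_src[None, :], y_tgt[:, None] - y_src[None, :])
        w = 1.0 / np.maximum(d, 1e-6) ** power
        return (w * z_src[None, :]).sum(1) / w.sum(1)

    def network_bias(net_a, net_b):
        """Systematic difference of network A relative to predictions from network B.

        net_a, net_b : dicts with 'x', 'y', 'z' arrays (station coordinates and values)
        """
        z_hat = idw_predict(net_b['x'], net_b['y'], net_b['z'], net_a['x'], net_a['y'])
        return np.mean(net_a['z'] - z_hat)   # positive value: network A reads high
    ```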

  8. Semi-Automatic Determination of Rockfall Trajectories

    PubMed Central

    Volkwein, Axel; Klette, Johannes

    2014-01-01

    Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called the rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for future applications in the field. PMID:25268916

  9. Snapshot linear-Stokes imaging spectropolarimeter using division-of-focal-plane polarimetry and integral field spectroscopy.

    PubMed

    Mu, Tingkui; Pacheco, Shaun; Chen, Zeyu; Zhang, Chunmin; Liang, Rongguang

    2017-02-13

    In this paper, the design and experimental demonstration of a snapshot linear-Stokes imaging spectropolarimeter (SLSIS) is presented. The SLSIS, which is based on division-of-focal-plane polarimetry with four parallel linear polarization channels and integral field spectroscopy with numerous slit dispersive paths, has no moving parts and provides video-rate Stokes-vector hyperspectral datacubes. It does not need any scanning in the spectral, spatial or polarization dimension and offers significant advantages of rapid reconstruction without heavy computation during post-processing. The principle and the experimental setup of the SLSIS are described in detail. The image registration, Stokes spectral reconstruction and calibration procedures are included, and the system is validated using measurements of tungsten light and a static scene. The SLSIS's snapshot ability to resolve polarization spectral signatures is demonstrated using measurements of a dynamic scene.
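    The division-of-focal-plane layout described above samples four linear polarization orientations per super-pixel. A minimal sketch of the standard relations that turn the 0°, 45°, 90° and 135° channels into linear Stokes parameters is given below; it reflects textbook polarimetry only, not the instrument's full registration and calibration pipeline, and the channel names are illustrative.

    ```python
    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters from 0/45/90/135-degree intensity channels."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical preference
        s2 = i45 - i135                      # +45 vs. -45 degree preference
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
        aop = 0.5 * np.arctan2(s2, s1)                         # angle of polarization (rad)
        return s0, s1, s2, dolp, aop
    ```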

  10. Snapshot linear-Stokes imaging spectropolarimeter using division-of-focal-plane polarimetry and integral field spectroscopy

    PubMed Central

    Mu, Tingkui; Pacheco, Shaun; Chen, Zeyu; Zhang, Chunmin; Liang, Rongguang

    2017-01-01

    In this paper, the design and experimental demonstration of a snapshot linear-Stokes imaging spectropolarimeter (SLSIS) is presented. The SLSIS, which is based on division-of-focal-plane polarimetry with four parallel linear polarization channels and integral field spectroscopy with numerous slit dispersive paths, has no moving parts and provides video-rate Stokes-vector hyperspectral datacubes. It does not need any scanning in the spectral, spatial or polarization dimension and offers significant advantages of rapid reconstruction without heavy computation during post-processing. The principle and the experimental setup of the SLSIS are described in detail. The image registration, Stokes spectral reconstruction and calibration procedures are included, and the system is validated using measurements of tungsten light and a static scene. The SLSIS’s snapshot ability to resolve polarization spectral signatures is demonstrated using measurements of a dynamic scene. PMID:28191819

  11. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation in different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  12. Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data

    NASA Astrophysics Data System (ADS)

    Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.

    2018-01-01

    Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
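    For context, the conventional numerical route that the analytic method replaces sums cubic-spline kernel contributions from SPH particles. The sketch below shows the standard point-wise SPH density estimate with the Monaghan cubic spline (compact support 2h); it is not the paper's analytic Voronoi-cell integral, and the kernel normalization convention and array shapes are assumptions.

    ```python
    import numpy as np

    def cubic_spline_w(r, h):
        """Standard 3-D cubic spline (M4) kernel, Monaghan form with support 2h."""
        q = r / h
        sigma = 1.0 / (np.pi * h**3)
        w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
             np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    def sph_density_at(points, part_pos, part_mass, part_h):
        """Numerical SPH density estimate at sample points.

        points    : (npts, 3) sample locations
        part_pos  : (npart, 3) particle positions
        part_mass : (npart,)   particle masses
        part_h    : (npart,)   smoothing lengths
        """
        d = np.linalg.norm(points[:, None, :] - part_pos[None, :, :], axis=-1)
        return (part_mass[None, :] * cubic_spline_w(d, part_h[None, :])).sum(axis=1)
    ```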

  13. Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database

    NASA Astrophysics Data System (ADS)

    Banitalebi-Dehkordi, Amin

    2017-03-01

    High dynamic range (HDR) displays and cameras are making their way into the consumer market at a rapid growth rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users. This is taking place only a few years after the blooming of 3D video technologies. MPEG/ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a Stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure taken to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use cases, challenges, and research opportunities implied by the combination of the higher dynamic range of HDR and the depth impression of 3D.

  14. Using pad‐stripped acausally filtered strong‐motion data

    USGS Publications Warehouse

    Boore, David; Sisi, Aida Azari; Akkar, Sinan

    2012-01-01

    Most strong‐motion data processing involves acausal low‐cut filtering, which requires the addition of sometimes lengthy zero pads to the data. These padded sections are commonly removed by organizations supplying data, but this can lead to incompatibilities in measures of ground motion derived in the usual way from the padded and the pad‐stripped data. One way around this is to use the correct initial conditions in the pad‐stripped time series when computing displacements, velocities, and linear oscillator response. Another way of ensuring compatibility is to use postprocessing of the pad‐stripped acceleration time series. Using 4071 horizontal and vertical acceleration time series from the Turkish strong‐motion database, we show that the procedures used by two organizations—ITACA (ITalian ACcelerometric Archive) and PEER NGA (Pacific Earthquake Engineering Research Center–Next Generation Attenuation)—lead to little bias and distortion of derived seismic‐intensity measures.
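    A hedged sketch of the basic processing discussed above (adding zero pads, applying a zero-phase acausal low-cut Butterworth filter, and integrating before stripping the pads) is shown below using SciPy; the corner frequency, filter order, and pad length are illustrative and do not reproduce the ITACA or PEER NGA procedures.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def lowcut_acausal(acc, dt, fc=0.05, order=4, pad_sec=30.0):
        """Zero-phase low-cut filtering of an acceleration trace with zero padding.

        acc : acceleration samples, dt : sample interval (s),
        fc  : low-cut corner frequency (Hz), pad_sec : zero pad added to each end (s).
        """
        npad = int(round(pad_sec / dt))
        padded = np.concatenate([np.zeros(npad), acc, np.zeros(npad)])
        b, a = butter(order, fc / (0.5 / dt), btype='highpass')   # normalized to Nyquist
        filt = filtfilt(b, a, padded)                             # acausal (forward-backward)
        # Integrate the *padded* trace so the initial conditions stay consistent,
        # then strip the pads; simple rectangle-rule integration for illustration.
        vel = np.cumsum(filt) * dt
        return filt[npad:npad + len(acc)], vel[npad:npad + len(acc)]
    ```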

  15. Evaluating the convergence between eddy-covariance and biometric methods for assessing carbon budgets of forests.

    PubMed

    Campioli, M; Malhi, Y; Vicca, S; Luyssaert, S; Papale, D; Peñuelas, J; Reichstein, M; Migliavacca, M; Arain, M A; Janssens, I A

    2016-12-14

    The eddy-covariance (EC) micro-meteorological technique and the ecology-based biometric methods (BM) are the primary methodologies to quantify CO2 exchange between terrestrial ecosystems and the atmosphere (net ecosystem production, NEP) and its two components, ecosystem respiration and gross primary production. Here we show that EC and BM provide different estimates of NEP, but comparable ecosystem respiration and gross primary production for forest ecosystems globally. Discrepancies between methods are not related to environmental or stand variables, but are consistently more pronounced for boreal forests where carbon fluxes are smaller. BM estimates are prone to underestimation of net primary production and overestimation of leaf respiration. EC biases are not apparent across sites, suggesting the effectiveness of standard post-processing procedures. Our results increase confidence in EC, show in which conditions EC and BM estimates can be integrated, and which methodological aspects can improve the convergence between EC and BM.

  16. Evaluating the convergence between eddy-covariance and biometric methods for assessing carbon budgets of forests

    NASA Astrophysics Data System (ADS)

    Campioli, M.; Malhi, Y.; Vicca, S.; Luyssaert, S.; Papale, D.; Peñuelas, J.; Reichstein, M.; Migliavacca, M.; Arain, M. A.; Janssens, I. A.

    2016-12-01

    The eddy-covariance (EC) micro-meteorological technique and the ecology-based biometric methods (BM) are the primary methodologies to quantify CO2 exchange between terrestrial ecosystems and the atmosphere (net ecosystem production, NEP) and its two components, ecosystem respiration and gross primary production. Here we show that EC and BM provide different estimates of NEP, but comparable ecosystem respiration and gross primary production for forest ecosystems globally. Discrepancies between methods are not related to environmental or stand variables, but are consistently more pronounced for boreal forests where carbon fluxes are smaller. BM estimates are prone to underestimation of net primary production and overestimation of leaf respiration. EC biases are not apparent across sites, suggesting the effectiveness of standard post-processing procedures. Our results increase confidence in EC, show in which conditions EC and BM estimates can be integrated, and which methodological aspects can improve the convergence between EC and BM.

  17. Site-Controlled Growth of Monolithic InGaAs/InP Quantum Well Nanopillar Lasers on Silicon.

    PubMed

    Schuster, Fabian; Kapraun, Jonas; Malheiros-Silveira, Gilliard N; Deshpande, Saniya; Chang-Hasnain, Connie J

    2017-04-12

    In this Letter, we report the site-controlled growth of InP nanolasers on a silicon substrate with patterned SiO2 nanomasks by low-temperature metal-organic chemical vapor deposition, compatible with silicon complementary metal-oxide-semiconductor (CMOS) post-processing. A two-step growth procedure is presented to achieve smooth wurtzite faceting of vertical nanopillars. By incorporating InGaAs multiquantum wells, the nanopillar emission can be tuned over a wide spectral range. Enhanced quality factors of the intrinsic InP nanopillar cavities promote lasing at 0.87 and 1.21 μm, located within two important optical telecommunication bands. This is the first demonstration of a site-controlled III-V nanolaser monolithically integrated on silicon with a silicon-transparent emission wavelength, paving the way for energy-efficient on-chip optical links at typical telecommunication wavelengths.

  18. Evaluating the convergence between eddy-covariance and biometric methods for assessing carbon budgets of forests

    PubMed Central

    Campioli, M.; Malhi, Y.; Vicca, S.; Luyssaert, S.; Papale, D.; Peñuelas, J.; Reichstein, M.; Migliavacca, M.; Arain, M. A.; Janssens, I. A.

    2016-01-01

    The eddy-covariance (EC) micro-meteorological technique and the ecology-based biometric methods (BM) are the primary methodologies to quantify CO2 exchange between terrestrial ecosystems and the atmosphere (net ecosystem production, NEP) and its two components, ecosystem respiration and gross primary production. Here we show that EC and BM provide different estimates of NEP, but comparable ecosystem respiration and gross primary production for forest ecosystems globally. Discrepancies between methods are not related to environmental or stand variables, but are consistently more pronounced for boreal forests where carbon fluxes are smaller. BM estimates are prone to underestimation of net primary production and overestimation of leaf respiration. EC biases are not apparent across sites, suggesting the effectiveness of standard post-processing procedures. Our results increase confidence in EC, show in which conditions EC and BM estimates can be integrated, and which methodological aspects can improve the convergence between EC and BM. PMID:27966534

  19. SU-G-IeP1-01: A Novel MRI Post-Processing Algorithm for Visualization of the Prostate LDR Brachytherapy Seeds and Calcifications Based On B0 Field Inhomogeneity Correction and Hough Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nosrati, R; Sunnybrook Health Sciences Centre, Toronto, Ontario; Soliman, A

    Purpose: This study aims at developing an MRI-only workflow for post-implant dosimetry of prostate LDR brachytherapy seeds. The specific goal here is to develop a post-processing algorithm to produce positive contrast for the seeds and prostatic calcifications and differentiate between them on MR images. Methods: An agar-based phantom incorporating four dummy seeds (I-125) and five calcifications of different sizes (from sheep cortical bone) was constructed. Seeds were placed arbitrarily in the coronal plane. The phantom was scanned with a 3T Philips Achieva MR scanner using an 8-channel head coil array. Multi-echo turbo spin echo (ME-TSE) and multi-echo gradient recalled echo (ME-GRE) sequences were acquired. Due to minimal susceptibility artifacts around the seeds, the ME-GRE sequence (flip angle=15; TR/TE=20/2.3/2.3; resolution=0.7×0.7×2 mm3) was further processed. The induced field inhomogeneity due to the presence of titanium-encapsulated seeds was corrected using a B0 field map. The B0 map was calculated from the ME-GRE sequence using the phase difference at two different echo times. Initially, the product of the first echo and the B0 map was calculated. The features corresponding to the seeds were then extracted in three steps: 1) the edge pixels were isolated using the “Prewitt” operator; 2) the Hough transform was employed to detect ellipses approximately matching the dimensions of the seeds; and 3) at the position and orientation of the detected ellipses, an ellipse was drawn on the B0-corrected image. Results: The proposed B0-correction process produced positive contrast for the seeds and calcifications. The Hough transform based on the Prewitt edge operator successfully identified all the seeds according to their ellipsoidal shape and dimensions in the edge image. Conclusion: The proposed post-processing algorithm successfully visualized the seeds and calcifications with positive contrast and differentiated between them according to their shapes. Further assessments on more realistic phantoms and patient studies are required to validate the outcome.
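    Steps 1 and 2 of the feature-extraction procedure described above (Prewitt edge detection followed by an ellipse Hough transform sized to the seeds) could look roughly like the scikit-image sketch below; the edge threshold and ellipse size limits are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from skimage.filters import prewitt
    from skimage.transform import hough_ellipse

    def detect_seed_ellipses(b0_corrected, edge_thresh=0.1):
        """Find ellipse candidates roughly matching LDR seed dimensions."""
        edges = prewitt(b0_corrected) > edge_thresh          # step 1: edge pixels
        # step 2: Hough accumulator for small ellipses (sizes in pixels, illustrative)
        result = hough_ellipse(edges, accuracy=4, threshold=10,
                               min_size=3, max_size=15)
        result.sort(order='accumulator')                     # strongest candidates last
        return result
    ```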

  20. Self-diagnosis of damage in fibrous composites using electrical resistance measurements

    NASA Astrophysics Data System (ADS)

    Kang, Ji Ho; Paty, Spandana; Kim, Ran Y.; Tandon, G. P.

    2006-03-01

    The objective of this research was to develop a practical integrated approach using extracted features from electrical resistance measurements and coupled electromechanical models of damage, for in situ damage detection and sensing in carbon fiber reinforced plastic (CFRP) composite structures. To achieve this objective, we introduced specific known damage (in terms of type, size, and location) into CFRP laminates and established quantitative relationships with the electrical resistance measurements. For processing of the numerous measurement data, an autonomous data acquisition system was devised. We also established a specimen preparation procedure and a method for electrode setup. Coupon and panel CFRP laminate specimens with several types of known damage were tested, and the measurement data were post-processed. Coupon specimens with artificial delaminations of various sizes, created by inserting Teflon film, were manufactured and their resistance was measured. The results showed that an increase in delamination size led to an increase in resistance, implying that it is possible to sense the existence and size of a delamination. Encouraged by the results of the coupon specimens, we implemented the measurement system on panel specimens. Three different quasi-isotropic panels were designed and manufactured: a panel with an artificial delamination created by inserting Teflon film at the midplane, a panel with an artificial delamination created by inserting Teflon film between the second and third plies from the surface, and an undamaged panel. The first two panels were designed to determine the feasibility of detecting delamination using the developed measurement system. The third panel initially had no damage; three holes of different sizes were then drilled at a chosen location. Panels were prepared using the established procedures with six electrode connections on each side, making a total of twenty-four electrode connections per panel. All possible pairs of electrodes were scanned and the resistance was measured for each pair. The measurement results demonstrated the potential of the established measurement system as an in situ damage detection method for CFRP composite structures.

  1. Multivariate postprocessing techniques for probabilistic hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2016-04-01

    Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process with typical events lasting from hours in case of floods to weeks or even months in case of droughts. This calls for multivariate postprocessing techniques that yield well calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power generation, Applied Energy, 96, 12-20, DOI: 10.1016/j.apenergy.2011.11.004. Schefzik, R., T. L. Thorarinsdottir, and T. Gneiting (2013), Uncertainty quantification in complex simulation models using ensemble copula coupling, Statistical Science, 28, 616-640, DOI: 10.1214/13-STS443.
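    The ensemble copula coupling step mentioned above can be summarized in a few lines: the calibrated (e.g. EMOS) quantiles at each lead time are reordered according to the rank order of the raw ensemble, so that the raw ensemble's temporal dependence structure is transferred to the postprocessed forecast. A minimal sketch, with assumed array shapes, follows.

    ```python
    import numpy as np

    def ecc_reorder(raw_ens, calibrated):
        """Ensemble copula coupling (ECC-style reordering).

        raw_ens, calibrated : arrays of shape (n_leadtimes, n_members);
        'calibrated' holds equally likely samples/quantiles drawn from the
        univariate postprocessed (e.g. EMOS) distribution at each lead time.
        """
        out = np.empty_like(calibrated)
        for t in range(raw_ens.shape[0]):
            ranks = np.argsort(np.argsort(raw_ens[t]))     # rank of each raw member
            out[t] = np.sort(calibrated[t])[ranks]         # impose the raw dependence pattern
        return out
    ```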

  2. First In Vivo Use of a Capacitive Micromachined Ultrasound Transducer Array–Based Imaging and Ablation Catheter

    PubMed Central

    Stephens, Douglas N.; Truong, Uyen T.; Nikoozadeh, Amin; Oralkan, Ömer; Seo, Chi Hyung; Cannata, Jonathan; Dentinger, Aaron; Thomenius, Kai; de la Rama, Alan; Nguyen, Tho; Lin, Feng; Khuri-Yakub, Pierre; Mahajan, Aman; Shivkumar, Kalyanam; O’Donnell, Matt; Sahn, David J.

    2012-01-01

    Objectives: The primary objective was to test in vivo for the first time the general operation of a new multifunctional intracardiac echocardiography (ICE) catheter constructed with a microlinear capacitive micromachined ultrasound transducer (ML-CMUT) imaging array. Secondarily, we examined the compatibility of this catheter with electroanatomic mapping (EAM) guidance and also as a radiofrequency ablation (RFA) catheter. Preliminary thermal strain imaging (TSI)-derived temperature data were obtained from within the endocardium simultaneously during RFA to show the feasibility of direct ablation guidance procedures. Methods: The new 9F forward-looking ICE catheter was constructed with 3 complementary technologies: a CMUT imaging array with a custom electronic array buffer, catheter surface electrodes for EAM guidance, and a special ablation tip that permits simultaneous TSI and RFA. In vivo imaging studies of 5 anesthetized porcine models with 5 CMUT catheters were performed. Results: The ML-CMUT ICE catheter provided high-resolution real-time wideband 2-dimensional (2D) images at greater than 8 MHz and is capable of both RFA and EAM guidance. Although the 24-element array aperture dimension is only 1.5 mm, the imaging depth of penetration is greater than 30 mm. The specially designed ultrasound-compatible metalized plastic tip allowed simultaneous imaging during ablation and direct acquisition of TSI data for tissue ablation temperatures. Postprocessing analysis showed a first-order correlation between TSI and temperature, permitting early development of temperature-time relationships at specific myocardial ablation sites. Conclusions: Multifunctional forward-looking ML-CMUT ICE catheters, with simultaneous intracardiac guidance, ultrasound imaging, and RFA, may offer a new means to improve interventional ablation procedures. PMID:22298868

  3. Subsurface Stress Fields in FCC Single Crystal Anisotropic Contacts

    NASA Technical Reports Server (NTRS)

    Arakere, Nagaraj K.; Knudsen, Erik; Swanson, Gregory R.; Duke, Gregory; Ham-Battista, Gilda

    2004-01-01

    Single crystal superalloy turbine blades used in high pressure turbomachinery are subject to conditions of high temperature, triaxial steady and alternating stresses, fretting stresses in the blade attachment and damper contact locations, and exposure to high-pressure hydrogen. The blades are also subjected to extreme variations in temperature during start-up and shutdown transients. The most prevalent high cycle fatigue (HCF) failure modes observed in these blades during operation include crystallographic crack initiation/propagation on octahedral planes, and non-crystallographic initiation with crystallographic growth. Numerous cases of crack initiation and crack propagation at the blade leading edge tip, blade attachment regions, and damper contact locations have been documented. Understanding crack initiation/propagation under mixed-mode loading conditions is critical for establishing a systematic procedure for evaluating HCF life of single crystal turbine blades. This paper presents analytical and numerical techniques for evaluating two and three dimensional subsurface stress fields in anisotropic contacts. The subsurface stress results are required for evaluating contact fatigue life at damper contacts and dovetail attachment regions in single crystal nickel-base superalloy turbine blades. An analytical procedure is presented for evaluating the subsurface stresses in the elastic half-space, based on the adaptation of a stress function method outlined by Lekhnitskii. Numerical results are presented for cylindrical and spherical anisotropic contacts, using finite element analysis (FEA). Effects of crystal orientation on stress response and fatigue life are examined. Obtaining accurate subsurface stress results for anisotropic single crystal contact problems requires extremely refined three-dimensional (3-D) finite element grids, especially in the edge of contact region. Obtaining resolved shear stresses (RSS) on the principal slip planes also involves considerable post-processing work. For these reasons it is very advantageous to develop analytical solution schemes for subsurface stresses, whenever possible.

  4. HEPEX - achievements and challenges!

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

    HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span across large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reporting. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Madison Theresa; Bates, Cameron Russell; Mckigney, Edward Allen

    Accurate detector modeling is a requirement for designing systems in many non-proliferation scenarios; by determining a Detector Response Function (DRF) to incident radiation, it is possible to characterize measurements of unknown sources. DRiFT is intended to post-process MCNP® output and create realistic detector spectra. Capabilities currently under development include the simulation of semiconductor, gas, and (as is discussed in this work) scintillator detector physics. Energy spectra and pulse shape discrimination (PSD) trends for incident photon and neutron radiation have been reproduced by DRiFT.

  6. Postprocessing for Air Quality Predictions

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.

    2017-12-01

    In recent years, air quality (AQ) forecasting has made significant progress towards better predictions, with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account for fires in near real-time). Nevertheless, AQ predictions are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
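    One of the simplest families of methods listed above is a recursive, Kalman-filter-inspired additive bias correction, in which a running estimate of the forecast bias is updated with each verified error and subtracted from the next prediction. A minimal sketch follows; the gain value is illustrative and, unlike a full Kalman filter, the error variances are not estimated explicitly.

    ```python
    def kf_bias_correct(forecasts, observations, gain=0.2):
        """Recursive additive bias correction of a forecast series.

        At each step the bias estimate is nudged toward the latest
        forecast-minus-observation error; the next forecast is corrected
        by subtracting the current bias estimate.
        """
        bias = 0.0
        corrected = []
        for f, o in zip(forecasts, observations):
            corrected.append(f - bias)              # correct with the previous bias estimate
            bias = bias + gain * ((f - o) - bias)   # update with the newly verified error
        return corrected
    ```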

  7. Evaluation of infrared thermography as a diagnostic tool in CVD applications

    NASA Astrophysics Data System (ADS)

    Johnson, E. J.; Hyer, P. V.; Culotta, P. W.; Clark, I. O.

    1998-05-01

    This research focused on the feasibility of using infrared temperature measurements on the exterior of a chemical vapor deposition (CVD) reactor both to obtain real-time information on the operating characteristics of a CVD system and to provide data that could be post-processed into quantitative information for research and development on CVD processes. Infrared thermography techniques were used to measure temperatures on a horizontal CVD reactor of rectangular cross section, which were correlated with the internal gas flow field as measured with laser velocimetry (LV) techniques. For the reactor tested, thermal profiles were well correlated with the gas flow field inside the reactor. Correlations are presented for nitrogen and hydrogen carrier gas flows. The infrared data were available to the operators in real time with sufficient sensitivity to the internal flow field that small variations, such as misalignment of the reactor inlet, could be observed. The same data were post-processed to yield temperature measurements at known locations on the reactor surface. For the experiments described herein, temperatures associated with approximately 3.3 mm2 areas on the reactor surface were obtained with a precision of ±2°C. These temperature measurements are well suited for monitoring a CVD production reactor, developing improved thermal boundary conditions for use in CFD models of reactors, and verifying expected thermal conditions.

  8. Cobalt Oxide Nanosheet and CNT Micro Carbon Monoxide Sensor Integrated with Readout Circuit on Chip

    PubMed Central

    Dai, Ching-Liang; Chen, Yen-Chi; Wu, Chyan-Chyi; Kuo, Chin-Fu

    2010-01-01

    The study presents a micro carbon monoxide (CO) sensor integrated with a readout circuit-on-a-chip manufactured by the commercial 0.35 μm complementary metal oxide semiconductor (CMOS) process and a post-process. The sensing film of the sensor is a composite cobalt oxide nanosheet and carbon nanotube (CoOOH/CNT) film that is prepared by a precipitation-oxidation method. The structure of the CO sensor is composed of a polysilicon resistor and a sensing film. The sensor, which is of a resistive type, changes its resistance when the sensing film adsorbs or desorbs CO gas. The readout circuit is used to convert the sensor resistance into the voltage output. The post-processing of the sensor includes etching the sacrificial layers and coating the sensing film. The advantages of the sensor include room temperature operation, short response/recovery times and easy post-processing. Experimental results show that the sensitivity of the CO sensor is about 0.19 mV/ppm, and the response and recovery times are 23 s and 34 s for 200 ppm CO, respectively. PMID:22294897

  9. Cobalt oxide nanosheet and CNT micro carbon monoxide sensor integrated with readout circuit on chip.

    PubMed

    Dai, Ching-Liang; Chen, Yen-Chi; Wu, Chyan-Chyi; Kuo, Chin-Fu

    2010-01-01

    The study presents a micro carbon monoxide (CO) sensor integrated with a readout circuit-on-a-chip manufactured by the commercial 0.35 μm complementary metal oxide semiconductor (CMOS) process and a post-process. The sensing film of the sensor is a composite cobalt oxide nanosheet and carbon nanotube (CoOOH/CNT) film that is prepared by a precipitation-oxidation method. The structure of the CO sensor is composed of a polysilicon resistor and a sensing film. The sensor, which is of a resistive type, changes its resistance when the sensing film adsorbs or desorbs CO gas. The readout circuit is used to convert the sensor resistance into the voltage output. The post-processing of the sensor includes etching the sacrificial layers and coating the sensing film. The advantages of the sensor include room temperature operation, short response/recovery times and easy post-processing. Experimental results show that the sensitivity of the CO sensor is about 0.19 mV/ppm, and the response and recovery times are 23 s and 34 s for 200 ppm CO, respectively.

  10. Control of laser-ablated aluminum surface wettability to superhydrophobic or superhydrophilic through simple heat treatment or water boiling post-processing

    NASA Astrophysics Data System (ADS)

    Ngo, Chi-Vinh; Chun, Doo-Man

    2018-03-01

    Recently, controlling the wettability of a metallic surface so that it is either superhydrophobic or superhydrophilic has become important for many applications. However, conventional techniques require long fabrication times or involve toxic chemicals. Herein, through a combination of pulsed laser ablation and simple post-processing, the surface of aluminum was made either superhydrophobic or superhydrophilic within only a few hours. In this study, grid patterns were first fabricated on aluminum using a nanosecond pulsed laser, and then additional post-processing without any chemicals was applied. Under heat treatment, the surface became superhydrophobic with a contact angle (CA) greater than 150° and a sliding angle (SA) lower than 10°. Conversely, when immersed in boiling water, the surface became superhydrophilic with a low contact angle. The mechanism for the wettability change is also explained. The surfaces, obtained in a short time with environmentally friendly fabrication and without the use of toxic chemicals, could potentially be applied in various industrial and manufacturing applications such as self-cleaning, anti-icing, and biomedical devices.

  11. [Virtual otoscopy--technique, indications and initial experiences with multislice spiral CT].

    PubMed

    Klingebiel, R; Bauknecht, H C; Lehmann, R; Rogalla, P; Werbs, M; Behrbohm, H; Kaschke, O

    2000-11-01

    We report the standardized postprocessing of high-resolution CT data acquired by incremental CT and multi-slice CT in patients with suspected middle ear disorders to generate three-dimensional endoluminal views known as virtual otoscopy. Subsequent to the definition of a postprocessing protocol, standardized endoluminal views of the middle ear were generated according to their otological relevance. The HRCT data sets of 26 ENT patients were transferred to a workstation and postprocessed to 52 virtual otoscopies. Generation of predefined endoluminal views from the HRCT data sets was possible in all patients. Virtual endoscopic views added meaningful information to the primary cross-sectional data in patients suffering from ossicular pathology, having contraindications for invasive tympanic endoscopy or being assessed for surgery of the tympanic cavity. Multi-slice CT improved the visualization of subtle anatomic details such as the stapes suprastructure and reduced the scanning time. Virtual endoscopy allows for the non-invasive endoluminal visualization of various tympanic lesions. Use of the multi-slice CT technique reduces the scanning time and improves image quality in terms of detail resolution.

  12. Databases post-processing in Tensoral

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1994-01-01

    The Center for Turbulent Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on use of Tensoral for post-processing turbulence databases. The corresponding document - the Tensoral 'author's guide' - which focuses on how authors can make databases available to users via the Tensoral system - is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.

  13. Pre-processing and post-processing in group-cluster mergers

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, R.; Ricker, P. M.

    2013-11-01

    Galaxies in clusters are more likely to be of early type and to have lower star formation rates than galaxies in the field. Recent observations and simulations suggest that cluster galaxies may be `pre-processed' by group or filament environments and that galaxies that fall into a cluster as part of a larger group can stay coherent within the cluster for up to one orbital period (`post-processing'). We investigate these ideas by means of a cosmological N-body simulation and idealized N-body plus hydrodynamics simulations of a group-cluster merger. We find that group environments can contribute significantly to galaxy pre-processing by means of enhanced galaxy-galaxy merger rates, removal of galaxies' hot halo gas by ram pressure stripping and tidal truncation of their galaxies. Tidal distortion of the group during infall does not contribute to pre-processing. Post-processing is also shown to be effective: galaxy-galaxy collisions are enhanced during a group's pericentric passage within a cluster, the merger shock enhances the ram pressure on group and cluster galaxies and an increase in local density during the merger leads to greater galactic tidal truncation.

  14. Multidisciplinary Design Optimization for Glass-Fiber Epoxy-Matrix Composite 5 MW Horizontal-Axis Wind-Turbine Blades

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.

    2010-11-01

    A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness has been defined using the cost of energy (CoE), i.e., a ratio of the three-blade HAWT rotor development/fabrication cost and the associated annual energy production. To assess the annual energy production as a function of the blade design and operating conditions, an aerodynamics-based computational analysis had to be employed. As far as the turbine blade cost is concerned, it is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, a structural finite element-based and a post-processing life-cycle assessment analyses were employed in order to determine a minimal blade mass which ensures that the functional requirements pertaining to the quasi-static strength of the blade, fatigue-controlled blade durability and blade stiffness are satisfied. To determine the turbine-blade production cost (for the currently prevailing fabrication process, the wet lay-up) available data regarding the industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a lower production cost.

  15. Analysis of the structural behaviour of colonic segments by inflation tests: Experimental activity and physio-mechanical model.

    PubMed

    Carniel, Emanuele L; Mencattelli, Margherita; Bonsignori, Gabriella; Fontanella, Chiara G; Frigo, Alessandro; Rubini, Alessandro; Stefanini, Cesare; Natali, Arturo N

    2015-11-01

    A coupled experimental and computational approach is provided for the identification of the structural behaviour of gastrointestinal regions, accounting for both elastic and visco-elastic properties. The developed procedure is applied to characterize the mechanics of gastrointestinal samples from pig colons. Experimental data about the structural behaviour of colonic segments are provided by inflation tests. Different inflation processes are performed according to progressively increasing top pressure conditions. Each inflation test consists of an air in-flow at an almost constant pressure increase rate of about 3.5 mmHg/s, up to a prescribed top pressure, which is held constant for about 300 s to allow the development of creep phenomena. Successive tests are separated by 600 s of rest to allow the tissues' mechanical condition to recover. Data from the structural tests are post-processed with a physio-mechanical model in order to identify the mechanical parameters that describe both the non-linear elastic behaviour of the sample (the instantaneous pressure-stretch trend) and the time-dependent response (the stretch increase during the creep processes). The parameters are identified by minimizing the discrepancy between experimental and model results. Different sets of parameters are evaluated for different specimens from different pigs. A statistical analysis is performed to evaluate the distribution of the parameters and to assess the reliability of the experimental and computational activities. © IMechE 2015.
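    The identification step described above (minimizing the discrepancy between measured and modelled creep response) can be illustrated generically as a least-squares fit. The sketch below uses a placeholder single-exponential creep law rather than the authors' physio-mechanical model; the parameter names and initial guesses are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def creep_model(t, lam_inst, d_lam, tau):
        """Placeholder creep law: instantaneous stretch plus a single exponential term."""
        return lam_inst + d_lam * (1.0 - np.exp(-t / tau))

    def fit_creep(t, stretch_meas, p0=(1.1, 0.05, 60.0)):
        """Identify parameters by minimizing the model-vs-experiment discrepancy."""
        resid = lambda p: creep_model(t, *p) - stretch_meas
        return least_squares(resid, p0).x   # best-fit (lam_inst, d_lam, tau)
    ```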

  16. A fast response miniature probe for wet steam flow field measurements

    NASA Astrophysics Data System (ADS)

    Bosdas, Ilias; Mansour, Michel; Kalfas, Anestis I.; Abhari, Reza S.

    2016-12-01

    Modern steam turbines require operational flexibility due to renewable energies’ increasing share of the electrical grid. Additionally, the continuous increase in energy demand necessitates efficient design of steam turbines as well as power output augmentation. The long turbine rotor blades in the machines’ last stages are prone to mechanical vibrations and, as a consequence, time-resolved experimental data under wet steam conditions are essential for the development of large-scale low-pressure steam turbines. This paper presents a novel fast-response miniature heated probe for unsteady wet steam flow field measurements. The probe has a tip diameter of 2.5 mm, and a miniature heater cartridge keeps the pressure taps free of condensed water. The probe is capable of providing the unsteady flow angles, total and static pressure, as well as the flow Mach number. The operating principle and calibration procedure are described in the current work, and a detailed uncertainty analysis demonstrates the capability of the new probe to perform accurate flow field measurements under wet steam conditions. In order to exclude any data possibly corrupted by droplet impacts or by evaporation from the heating process, a filtering algorithm was developed and implemented in the post-processing phase of the measured data. In the last part of this paper the probe is used in an experimental steam turbine test facility and measurements are conducted at the inlet and exit of the last stage with an average wetness mass fraction of 8.0%.

  17. Development of an accurate and high-throughput methodology for structural comprehension of chlorophylls derivatives. (I) Phytylated derivatives.

    PubMed

    Chen, Kewei; Ríos, José Julián; Pérez-Gálvez, Antonio; Roca, María

    2015-08-07

    Phytylated chlorophyll derivatives undergo specific oxidative reactions through natural metabolism or during food processing or storage, and consequently pyro-, 13(2)-hydroxy-, and 15(1)-hydroxy-lactone chlorophylls and pheophytins (a and b) are formed. New analytical procedures have been developed here to reproduce controlled oxidation reactions that specifically, and in reasonable amounts, produce those natural target standards. At the same time and under the same conditions, 16 natural chlorophyll derivatives have been analyzed by APCI-HPLC-hrMS(2), most of them for the first time. The combination of the high-resolution MS mode with powerful post-processing software has allowed the identification of new fragmentation patterns, characterizing specific product ions for some particular standards. In addition, new hypotheses and reaction mechanisms for the established MS(2)-based reactions have been proposed. As a general rule, the main product ions involve the phytyl and propionic chains, but the introduction of oxygenated functional groups at the isocyclic ring produces new and specific product ions and at the same time inhibits some particular fragmentations. It is noteworthy that all b derivatives, except 15(1)-hydroxy-lactone compounds, undergo specific CO losses. We propose a new reaction mechanism, based on the structural configuration of a and b chlorophyll derivatives, that explains the exclusive CO fragmentation of all b series compounds except 15(1)-hydroxy-lactone b, and its absence in the a series. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment.

    PubMed

    Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for the detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for investigating how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information improves the baseline detection performance by more than 60%.
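    A rough sketch of the two knowledge-based ingredients described above, band-energy features at an alarm's known frequencies and a temporal rule that discards detections shorter than the alarm's typical duration, is given below; the frequencies, thresholds, and frame settings are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    def detect_alarm(audio, fs, alarm_freqs=(960.0, 1920.0), band_hz=50.0,
                     thresh=3.0, min_frames=5):
        """Frame-level detection at known alarm frequencies plus a duration rule."""
        f, t, sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
        in_band = np.zeros(len(f), dtype=bool)
        for fc in alarm_freqs:                                   # frequency-structure knowledge
            in_band |= np.abs(f - fc) <= band_hz
        score = sxx[in_band].sum(0) / (sxx[~in_band].sum(0) + 1e-12)
        frames = score > thresh
        # time-structure knowledge: keep only runs of at least min_frames detections
        keep = np.convolve(frames.astype(int), np.ones(min_frames, int), 'same') >= min_frames
        return t, frames & keep
    ```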

  19. Rapid Screening of Chemical Constituents in Rhizoma Anemarrhenae by UPLC-Q-TOF/MS Combined with Data Postprocessing Techniques

    PubMed Central

    Shan, Lanlan; Wu, Yuanyuan; Yuan, Lei; Zhang, Yani

    2017-01-01

    Rhizoma Anemarrhenae, a famous traditional Chinese medicine (TCM), is the dried rhizome of Anemarrhena asphodeloides Bge. (Anemarrhena Bunge of Liliaceae). The medicine presents anti-inflammatory, antipyretic, sedative, and diuretic effects. The chemical constituents of Rhizoma Anemarrhenae are complex and diverse, mainly including steroidal saponins, flavonoids, phenylpropanoids, benzophenones, and alkaloids. In this study, UPLC-Q-TOF/MS was used in combination with data postprocessing techniques, including characteristic fragments filter and neutral loss filter, to rapidly classify and identify the five types of substances in Rhizoma Anemarrhenae. On the basis of numerous literature reviews and according to the corresponding characteristic fragments produced by different types of compounds in combination with neutral loss filtering, we summarized the fragmentation patterns of the main five types of compounds and successfully screened and identified 32 chemical constituents in Rhizoma Anemarrhenae. The components included 18 steroidal saponins, 6 flavonoids, 4 phenylpropanoids, 2 alkaloids, and 2 benzophenones. The method established in this study provided necessary data for the study on the pharmacological effects of Rhizoma Anemarrhenae and also provided the basis for the chemical analysis and quality control of TCMs to promote the development of a method for chemical research on TCMs. PMID:29234389
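    The neutral loss filter mentioned above can be illustrated with a few lines of code that flag fragment ions whose mass difference from the precursor matches a characteristic neutral loss within a tolerance; the losses listed (water, CO, a hexose unit) are common examples rather than the full set used in the study, and singly charged ions are assumed.

    ```python
    def neutral_loss_filter(precursor_mz, fragment_mzs,
                            losses=(18.0106, 27.9949, 162.0528), tol=0.01):
        """Return (fragment, matched loss) pairs consistent with a characteristic neutral loss.

        losses are monoisotopic masses in Da (H2O, CO, and a hexose/glycoside unit here);
        singly charged ions are assumed so m/z differences equal mass differences.
        """
        hits = []
        for frag in fragment_mzs:
            delta = precursor_mz - frag
            for loss in losses:
                if abs(delta - loss) <= tol:
                    hits.append((frag, loss))
        return hits
    ```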

  20. Rapid Screening of Chemical Constituents in Rhizoma Anemarrhenae by UPLC-Q-TOF/MS Combined with Data Postprocessing Techniques.

    PubMed

    Shan, Lanlan; Wu, Yuanyuan; Yuan, Lei; Zhang, Yani; Xu, Yanyan; Li, Yubo

    2017-01-01

    Rhizoma Anemarrhenae, a famous traditional Chinese medicine (TCM), is the dried rhizome of Anemarrhena asphodeloides Bge. (Anemarrhena Bunge of Liliaceae). The medicine presents anti-inflammatory, antipyretic, sedative, and diuretic effects. The chemical constituents of Rhizoma Anemarrhenae are complex and diverse, mainly including steroidal saponins, flavonoids, phenylpropanoids, benzophenones, and alkaloids. In this study, UPLC-Q-TOF/MS was used in combination with data postprocessing techniques, including characteristic fragments filter and neutral loss filter, to rapidly classify and identify the five types of substances in Rhizoma Anemarrhenae. On the basis of numerous literature reviews and according to the corresponding characteristic fragments produced by different types of compounds in combination with neutral loss filtering, we summarized the fragmentation patterns of the main five types of compounds and successfully screened and identified 32 chemical constituents in Rhizoma Anemarrhenae. The components included 18 steroidal saponins, 6 flavonoids, 4 phenylpropanoids, 2 alkaloids, and 2 benzophenones. The method established in this study provided necessary data for the study on the pharmacological effects of Rhizoma Anemarrhenae and also provided the basis for the chemical analysis and quality control of TCMs to promote the development of a method for chemical research on TCMs.

  1. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment

    PubMed Central

    Raboshchuk, Ganna; Nadeu, Climent; Jančovič, Peter; Lilja, Alex Peiró; Köküer, Münevver; Muñoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for the detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for investigating how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information improves the baseline detection performance by more than 60%. PMID:29404227

  2. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    PubMed

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management, requiring a complex IT infrastructure spanning multiple locations, but is increasingly addressed with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  3. Optimization of Search Engines and Postprocessing Approaches to Maximize Peptide and Protein Identification for High-Resolution Mass Data.

    PubMed

    Tu, Chengjian; Sheng, Quanhu; Li, Jun; Ma, Danjun; Shen, Xiaomeng; Wang, Xue; Shyr, Yu; Yi, Zhengping; Qu, Jun

    2015-11-06

    The two key steps for analyzing proteomic data generated by high-resolution MS are database searching and postprocessing. While the two steps are interrelated, studies on their combinatory effects and the optimization of these procedures have not been adequately conducted. Here, we investigated the performance of three popular search engines (SEQUEST, Mascot, and MS Amanda) in conjunction with five filtering approaches, including respective score-based filtering, a group-based approach, local false discovery rate (LFDR), PeptideProphet, and Percolator. A total of eight data sets from various proteomes (e.g., E. coli, yeast, and human) produced by various instruments with high-accuracy survey scan (MS1) and high- or low-accuracy fragment ion scan (MS2) (LTQ-Orbitrap, Orbitrap-Velos, Orbitrap-Elite, Q-Exactive, Orbitrap-Fusion, and Q-TOF) were analyzed. It was found that combinations involving Percolator achieved markedly more peptide and protein identifications at the same FDR level than the other 12 combinations for all data sets. Among these, the combinations of SEQUEST-Percolator and MS Amanda-Percolator provided slightly better performances for data sets with low-accuracy MS2 (ion trap or IT) and high-accuracy MS2 (Orbitrap or TOF), respectively, than did other methods. For approaches without Percolator, SEQUEST-group performs the best for data sets with MS2 produced by collision-induced dissociation (CID) and IT analysis; Mascot-LFDR gives more identifications for data sets generated by higher-energy collisional dissociation (HCD) and analyzed in Orbitrap (HCD-OT) and in Orbitrap Fusion (HCD-IT); MS Amanda-group excels for the Q-TOF data set and the Orbitrap Velos HCD-OT data set. Therefore, if Percolator is not used, a specific combination should be applied for each type of data set. Moreover, a higher percentage of multiple-peptide proteins and lower variation of protein spectral counts were observed when analyzing technical replicates using Percolator-associated combinations; therefore, Percolator enhanced the reliability of both identification and quantification. The analyses were performed using the specific programs embedded in Proteome Discoverer, Scaffold, and an in-house algorithm (BuildSummary). These results provide valuable guidelines for the optimal interpretation of proteomic results and the development of fit-for-purpose protocols under different situations.
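
    A hedged illustration of what "score-based filtering at a fixed FDR" means in this context. This is a generic target-decoy sketch, not the algorithm of Percolator, PeptideProphet, or the other tools named above; the PSM record type and the threshold are illustrative assumptions.

        # Generic target-decoy score filtering: accept peptide-spectrum matches (PSMs)
        # from the top of the score ranking down, until the estimated FDR
        # (#decoys / #targets among accepted PSMs) would exceed the chosen limit.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class PSM:                 # hypothetical record, for illustration only
            score: float
            is_decoy: bool

        def filter_at_fdr(psms: List[PSM], fdr_limit: float = 0.01) -> List[PSM]:
            ranked = sorted(psms, key=lambda p: p.score, reverse=True)
            accepted, decoys, targets = [], 0, 0
            for psm in ranked:
                decoys += psm.is_decoy
                targets += not psm.is_decoy
                if targets and decoys / targets > fdr_limit:
                    break
                accepted.append(psm)
            return [p for p in accepted if not p.is_decoy]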

  4. Swirling flow in a model of the carotid artery: Numerical and experimental study

    NASA Astrophysics Data System (ADS)

    Kotmakova, Anna A.; Gataulin, Yakov A.; Yukhnev, Andrey D.

    2018-05-01

    The present contribution is aimed at a numerical and experimental study of inlet swirling flow in a model of the carotid artery. Flow visualization is performed both with the ultrasound color Doppler imaging mode and with CFD data postprocessing of swirling flows in a carotid artery model. Special attention is paid to obtaining data for the secondary motion in the internal carotid artery. Principal errors of the developed measurement technique are estimated using the results of flow calculations.

  5. Precise GPS orbits for geodesy

    NASA Technical Reports Server (NTRS)

    Colombo, Oscar L.

    1994-01-01

    The Global Positioning System (GPS) has become, in recent years, the main space-based system for surveying and navigation in many military, commercial, cadastral, mapping, and scientific applications. Better receivers, interferometric techniques (DGPS), and advances in post-processing methods have made it possible to position fixed or moving receivers with sub-decimeter accuracies in a global reference frame. Improved methods for obtaining the orbits of the GPS satellites have played a major role in these achievements; this paper gives a personal view of the main developments in GPS orbit determination.

  6. Contact dynamics recording and analysis system using an optical fiber sensor approach

    NASA Astrophysics Data System (ADS)

    Anghel, F.; Pavelescu, D.; Grattan, K. T. V.; Palmer, A. W.

    1997-09-01

    A contact dynamics recording and analysis system based on an optical fiber sensor has been developed, designed with particular application to the accurate, time-resolved description of moving contacts operating during electrical arc breaking, in an experimental platform simulating the operation of a vacuum circuit breaker. The system combines dynamic displacement measurement and data recording with post-process data analysis to reveal the dynamic speed and acceleration of the equipment.

  7. ACME Priority Metrics (A-PRIME)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Zender, Charlie; Van Roekel, Luke

    A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses of the model needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is csh-script based at the top level, enabling scientists to provide the input parameters. Within these scripts, the csh code calls programs that perform the postprocessing of the raw data and create plots for visual assessment.

  8. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
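
    One of the listed post-processing steps, the out-of-plane vorticity, follows directly from the planar velocity field as omega_z = dv/dx - du/dy. A minimal sketch (array names, orientation, and grid spacings are illustrative assumptions, not the package's actual interface):

        # Out-of-plane vorticity from a planar PIV velocity field on a regular grid.
        import numpy as np

        def vorticity_z(u: np.ndarray, v: np.ndarray, dx: float, dy: float) -> np.ndarray:
            """u, v: 2-D arrays of velocity components; rows vary with y, columns with x."""
            dv_dx = np.gradient(v, dx, axis=1)   # dv/dx along columns
            du_dy = np.gradient(u, dy, axis=0)   # du/dy along rows
            return dv_dx - du_dy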

  9. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
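
    A minimal sketch of the 'source-specific' dressing step described above, under the assumption that hydrological uncertainty is available as an empirical sample of residuals (e.g. historical simulation-minus-observation errors); variable names are illustrative and this is not the authors' implementation:

        # Add one realization of the hydrological-uncertainty distribution to each
        # routed streamflow ensemble member.
        import numpy as np

        rng = np.random.default_rng(0)

        def dress_ensemble(members: np.ndarray, hydro_errors: np.ndarray) -> np.ndarray:
            """members: (n_members,) routed flows; hydro_errors: empirical residual sample."""
            draws = rng.choice(hydro_errors, size=members.size, replace=True)
            return np.clip(members + draws, 0.0, None)   # keep dressed flows non-negative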

  10. A TRACER METHOD FOR COMPUTING TYPE IA SUPERNOVA YIELDS: BURNING MODEL CALIBRATION, RECONSTRUCTION OF THICKENED FLAMES, AND VERIFICATION FOR PLANAR DETONATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsley, Dean M.; Miles, Broxton J.; Timmes, F. X.

    2016-07-01

    We refine our previously introduced parameterized model for explosive carbon–oxygen fusion during thermonuclear Type Ia supernovae (SNe Ia) by adding corrections to post-processing of recorded Lagrangian fluid-element histories to obtain more accurate isotopic yields. Deflagration and detonation products are verified for propagation in a medium of uniform density. A new method is introduced for reconstructing the temperature–density history within the artificially thick model deflagration front. We obtain better than 5% consistency between the electron capture computed by the burning model and yields from post-processing. For detonations, we compare to a benchmark calculation of the structure of driven steady-state planar detonations performed with a large nuclear reaction network and error-controlled integration. We verify that, for steady-state planar detonations down to a density of 5 × 10^6 g cm^-3, our post-processing matches the major abundances in the benchmark solution typically to better than 10% for times greater than 0.01 s after the passage of the shock front. As a test case to demonstrate the method, presented here with post-processing for the first time, we perform a two-dimensional simulation of a SN Ia in the scenario of a Chandrasekhar-mass deflagration–detonation transition (DDT). We find that reconstruction of deflagration tracks leads to slightly more complete silicon burning than without reconstruction. The resulting abundance structure of the ejecta is consistent with inferences from spectroscopic studies of observed SNe Ia. We confirm the absence of a central region of stable Fe-group material for the multi-dimensional DDT scenario. Detailed isotopic yields are tabulated and change only modestly when using deflagration reconstruction.

  11. Visual grading analysis of digital neonatal chest phantom X-ray images: Impact of detector type, dose and image processing on image quality.

    PubMed

    Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L

    2018-07-01

    To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p>0.05), whereas the PIP detector had significantly lower VGAS (p<0.0001). Processing did not influence VGAS (p=0.819). Increasing dose resulted in significantly higher VGAS (p<0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.

  12. Performance of post-processing algorithms for rainfall intensity using measurements from tipping-bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Stagnaro, Mattia; Colli, Matteo; Lanza, Luca Giovanni; Chan, Pak Wai

    2016-11-01

    Eight rainfall events recorded from May to September 2013 at Hong Kong International Airport (HKIA) have been selected to investigate the performance of post-processing algorithms used to calculate the rainfall intensity (RI) from tipping-bucket rain gauges (TBRGs). We assumed a drop-counter catching-type gauge as a working reference and compared rainfall intensity measurements with two calibrated TBRGs operated at a time resolution of 1 min. The two TBRGs differ in their internal mechanics, one being a traditional single-layer dual-bucket assembly, while the other has two layers of buckets. The drop-counter gauge operates at a time resolution of 10 s, while the time of tipping is recorded for the two TBRGs. The post-processing algorithms employed for the two TBRGs are based on the assumption that the tip volume is uniformly distributed over the inter-tip period. A series of data of an ideal TBRG is reconstructed using the virtual time of tipping derived from the drop-counter data. From the comparison between the ideal gauge and the measurements from the two real TBRGs, the performances of different post-processing and correction algorithms are statistically evaluated over the set of recorded rain events. The improvement obtained by adopting the inter-tip time algorithm in the calculation of the RI is confirmed. However, by comparing the performance of the real and ideal TBRGs, the beneficial effect of the inter-tip algorithm is shown to be relevant for the mid-low range (6-50 mm h-1) of rainfall intensity values (where the sampling errors prevail), while its role vanishes with increasing RI in the range where the mechanical errors prevail.
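
    The inter-tip-time assumption stated above (each tip volume spread uniformly over the period since the previous tip) translates into a very small computation; a sketch, with the tip depth and variable names as illustrative assumptions:

        # Rainfall intensity from tipping-bucket tip timestamps under the
        # inter-tip-time assumption: RI_i = tip_depth / (t_i - t_{i-1}).
        import numpy as np

        def inter_tip_intensity(tip_times_s: np.ndarray, tip_depth_mm: float = 0.2):
            """tip_times_s: sorted tip times in seconds; returns interval midpoints and mm/h."""
            dt = np.diff(tip_times_s)                 # seconds between consecutive tips
            ri = tip_depth_mm / dt * 3600.0           # mm/h, constant over each inter-tip period
            midpoints = tip_times_s[:-1] + dt / 2.0
            return midpoints, ri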

  13. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
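
    A minimal sketch of the empirical quantile-to-quantile transform mentioned above: a raw forecast value is replaced by the observed value having the same empirical non-exceedance probability in a training sample. Array names are illustrative, and the paper's implementation may differ in how tails and ties are handled:

        import numpy as np

        def quantile_map(raw, train_fcst, train_obs):
            """Map raw forecast values onto the observed climatology by matching quantiles."""
            fcst_sorted = np.sort(np.asarray(train_fcst))
            obs_sorted = np.sort(np.asarray(train_obs))
            # empirical non-exceedance probability of each raw value in the forecast climatology
            p = np.searchsorted(fcst_sorted, np.asarray(raw), side="right") / fcst_sorted.size
            return np.quantile(obs_sorted, np.clip(p, 0.0, 1.0))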

  14. Pulmonary nodule detection with digital projection radiography: an ex-vivo study on increased latitude post-processing.

    PubMed

    Biederer, Juergen; Gottwald, Tobias; Bolte, Hendrik; Riedel, Christian; Freitag, Sandra; Van Metter, Richard; Heller, Martin

    2007-04-01

    To evaluate increased image latitude post-processing of digital projection radiograms for the detection of pulmonary nodules. 20 porcine lungs were inflated inside a chest phantom, prepared with 280 solid nodules of 4-8 mm in diameter and examined with direct radiography (3.0x2.5 k detector, 125 kVp, 4 mAs). Nodule position and size were documented by CT controls and dissection. Four intact lungs served as negative controls. Image post-processing included standard tone scales and increased latitude with detail contrast enhancement (log-factors 1.0, 1.5 and 2.0). 1280 sub-images (512x512 pixel) were centred on nodules or controls, behind the diaphragm and over free parenchyma, randomized and presented to six readers. Confidence in the decision was recorded with a scale of 0-100%. Sensitivity and specificity for nodules behind the diaphragm were 0.87/0.97 at standard tone scale and 0.92/0.92 with increased latitude (log factor 2.0). The fraction of "not diagnostic" readings was reduced (from 208/1920 to 52/1920). As an indicator of increased detection confidence, the median of the ratings behind the diaphragm approached 100 and 0, respectively, and the inter-quartile width decreased (controls: p<0.001, nodules: p=0.239) at higher image latitude. Above the diaphragm, accuracy and detection confidence remained unchanged. Here, the sensitivity for nodules was 0.94 with a specificity from 0.96 to 0.97 (all p>0.05). Increased latitude post-processing has minimal effects on the overall accuracy, but improves the detection confidence for sub-centimeter nodules in the posterior recesses of the lung.

  15. Advancing Nucleosynthesis in Core-Collapse Supernovae Models Using 2D CHIMERA Simulations

    NASA Astrophysics Data System (ADS)

    Harris, J. A.; Hix, W. R.; Chertkow, M. A.; Bruenn, S. W.; Lentz, E. J.; Messer, O. B.; Mezzacappa, A.; Blondin, J. M.; Marronetti, P.; Yakunin, K.

    2014-01-01

    The deaths of massive stars as core-collapse supernovae (CCSN) serve as a crucial link in understanding galactic chemical evolution since the birth of the universe via the Big Bang. We investigate CCSN in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species α-network. However, the limited capacity of the α-network to accurately evolve detailed composition, the neutronization and the nuclear energy generation rate has fettered the ability of prior CCSN simulations to accurately reproduce the chemical abundances and energy distributions as known from observations. These deficits can be partially ameliorated by "post-processing" with a more realistic network. Lagrangian tracer particles placed throughout the star record the temporal evolution of the initial simulation and enable the extension of the nuclear network evolution by incorporating larger systems in post-processing nucleosynthesis calculations. We present post-processing results of the four ab initio axisymmetric CCSN 2D models of Bruenn et al. (2013) evolved with the smaller α-network, and initiated from stellar metallicity, non-rotating progenitors of mass 12, 15, 20, and 25 M⊙ from Woosley & Heger (2007). As a test of the limitations of post-processing, we provide preliminary results from an ongoing simulation of the 15 M⊙ model evolved with a realistic 150 species nuclear reaction network in situ. With more accurate energy generation rates and an improved determination of the thermodynamic trajectories of the tracer particles, we can better unravel the complicated multidimensional "mass-cut" in CCSN simulations and probe for less energetically significant nuclear processes like the νp-process and the r-process, which require still larger networks.

  16. Advances of lab-on-a-chip in isolation, detection and post-processing of circulating tumour cells.

    PubMed

    Yu, Ling; Ng, Shu Rui; Xu, Yang; Dong, Hua; Wang, Ying Jun; Li, Chang Ming

    2013-08-21

    Circulating tumour cells (CTCs) are shed by primary tumours and are found in the peripheral blood of patients with metastatic cancers. Recent studies have shown that the number of CTCs corresponds with disease severity and prognosis. Therefore, detection and further functional analysis of CTCs are important for biomedical science, early diagnosis of cancer metastasis and tracking treatment efficacy in cancer patients, especially in point-of-care applications. Over the last few years, there has been an increasing shift towards not only capturing and detecting these rare cells, but also ensuring their viability for post-processing, such as cell culture and genetic analysis. High-throughput lab-on-a-chip (LOC) development has accelerated to process and analyse heterogeneous real patient samples while gaining profound insights into cancer biology. In this review, we highlight how miniaturisation strategies together with nanotechnologies have been used to advance LOC for capturing, separating, enriching and detecting different CTCs efficiently, while meeting the challenges of cell viability, high-throughput multiplex or single-cell detection and post-processing. We begin this survey with an introduction to CTC biology, followed by a description of the use of various materials, microstructures and nanostructures for the design of LOC to achieve miniaturisation, as well as how various CTC capture or separation strategies can enhance cell capture and enrichment efficiencies, purity and viability. The significant progress of various nanotechnology-based detection techniques to achieve high sensitivities and low detection limits for viable CTCs and/or to enable CTC post-processing is presented, and the fundamental insights are also discussed. Finally, the challenges and perspectives of the technologies are enumerated.

  17. IR-based spot weld NDT in automotive applications

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Feng, Zhili

    2015-05-01

    Today's auto industry primarily relies on destructive teardown evaluation to ensure the quality of the resistance spot welds (RSWs) due to their criticality in crash resistance and performance of vehicles. The destructive teardown evaluation is labor intensive and costly. The very nature of the destructive test means only a few selected welds will be sampled for quality. Most of the welds in a car are never checked. There are significant costs and risks associated with reworking and scrapping the defective welded parts made between the teardown tests. IR thermography as a non-destructive testing (NDT) tool has its distinct advantage — its non-intrusive and non-contact nature. This makes IR-based NDT especially attractive for the highly automated assembly lines. IR for weld quality inspection has been explored in the past, mostly limited to offline post-processing in a laboratory environment. No online real-time RSW inspection using IR thermography has been reported. Typically for postprocessing inspection, a short-pulse heating via xenon flash lamp light (in a few milliseconds) is applied to the surface of a spot weld. However, applications in the auto industry have been unsuccessful, largely due to a critical drawback that prevents implementation in the high-volume production line - the prerequisite of painting the weld surface to eliminate surface reflection and other environmental interference. This is due to the low signal-to-noise ratio resulting from the low/unknown surface emissivity and the very small temperature changes (typically on the order of 0.1°C) induced by the flash lamp method. An integrated approach was developed, consisting of innovations in both data analysis algorithms and hardware apparatus, that effectively solved the key technical barriers for IR NDT. The system can be used for both real-time (during welding) and post-processing inspections (after welds have been made). First, we developed a special IR thermal image processing method that utilizes the relative IR intensity change, so that the influence of surface reflection and environment interference can be reduced. Second, for the post-processing inspection, a special induction heater is used to replace the flash lamp, resulting in temperature changes on the order of 10°C. As a result, the signal-to-noise ratio increased by several orders of magnitude with no surface painting needed, and the inspection results are more accurate and reliable. For real-time inspection, the heat from welding (with temperature exceeding 1000°C) was utilized. Third, "thermal signatures" were identified that uniquely correlate to different weld quality attributes, through computational modeling of heat transfer and extensive testing of specially designed ranges of welding conditions. Novel IR image analysis algorithms that automatically and intelligently identify the "thermal signatures" from the IR images and positively determine the weld quality in less than a second were developed.

  18. Development of a Low-cost, Comprehensive Recording System for Circadian Rhythm Behavior.

    PubMed

    Kwon, Jea; Park, Min Gu; Lee, Seung Eun; Lee, C Justin

    2018-02-01

    Circadian rhythm is defined as a 24-hour biological oscillation, which persists even without any external cues but also can be re-entrained by various environmental cues. One of the widely accepted circadian rhythm behavioral experiments is measuring the wheel-running activity (WRA) of rodents. However, the price of commercially available WRA recording systems is not easily affordable for researchers due to the high-cost implementation of sensors for wheel rotation. Here, we developed a cost-effective and comprehensive system for circadian rhythm recording by measuring house-keeping activities (HKA). We monitored the animal's HKA as an electrical signal by simply connecting the animal housing cage to a standard analog/digital converter: input to the metal lid and ground to the metal grid floor. We show that the acquired electrical signals are combined activities of eating, drinking and natural locomotor behaviors, which are well-known indicators of circadian rhythm. Post-processing of the measured electrical signals enabled us to draw an actogram, which verifies HKA to be a reliable circadian rhythm indicator. To provide researchers with easy access to the HKA recording system, we have developed user-friendly MATLAB-based software, Circa Analysis. This software provides functions for easy extraction of scalable "touch activity" from raw data files by automating seven steps of post-processing and drawing actograms with a highly intuitive user interface and various options. With our cost-effective HKA circadian rhythm recording system, we estimate the cost to be less than $150 per channel. We anticipate our system will benefit many researchers who would like to study circadian rhythm.

  19. Interface design of VSOP'94 computer code for safety analysis

    NASA Astrophysics Data System (ADS)

    Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi

    2014-09-01

    Today, most software applications, also in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system to simulate the life history of a nuclear reactor that is devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, which was developed using Fortran 65, and it has several usability problems: for example, it operates only on Dec Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program to facilitate the preparation of data, run the VSOP code and read the results in a more user-friendly way, usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing and postprocessing. The GUI-based interface for preprocessing aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to be useful in simplifying and speeding up the process and analysis of safety aspects.

  20. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    NASA Astrophysics Data System (ADS)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high-throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than for auto-segmentation with post-processing, indicating that the latter converges on the manually corrected model. A marginal relative difference between electric field maps from models with and without manual correction was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
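
    The segmentation comparison above relies on the Dice coefficient, Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks; a short sketch (mask names are illustrative):

        import numpy as np

        def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
            """Dice similarity of two binary masks of equal shape."""
            a = a.astype(bool)
            b = b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0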

  1. Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Chegwidden, O.

    2017-12-01

    Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.
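
    One simple way to make the 'harmless correction or wrecking ball' question concrete is to compare the climate-change signal (e.g. the change in mean flow, future minus historical) before and after bias correction. A hedged sketch, where qmap stands for any streamflow bias-correction function trained on the historical period; this check is an illustration, not the assessment method used in the presentation:

        import numpy as np

        def signal_distortion(hist_raw, fut_raw, qmap):
            """Compare the raw and bias-corrected change in mean flow."""
            raw_delta = np.mean(fut_raw) - np.mean(hist_raw)                 # raw climate signal
            bc_delta = np.mean(qmap(fut_raw)) - np.mean(qmap(hist_raw))      # signal after correction
            return raw_delta, bc_delta, bc_delta - raw_delta                 # distortion introduced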

  2. Ultraearly assessed reperfusion status after middle cerebral artery recanalization predicting clinical outcome.

    PubMed

    Gölitz, P; Muehlen, I; Gerner, S T; Knossalla, F; Doerfler, A

    2018-06-01

    Mechanical thrombectomy is supported by strong evidence in stroke therapy; however, successful recanalization does not guarantee a favorable clinical outcome. We aimed to quantitatively assess the reperfusion status ultraearly after successful middle cerebral artery (MCA) recanalization to identify flow parameters that potentially allow predicting clinical outcome. Sixty-seven stroke patients with acute MCA occlusion, undergoing recanalization, were enrolled. Using parametric color coding, a post-processing algorithm, pre- and post-interventional digital subtraction angiography series were evaluated concerning the following parameters: pre- and post-procedural cortical relative time to peak (rTTP) of the MCA territory, reperfusion time, and reperfusion index. Functional long-term outcome was assessed by the 90-day modified Rankin Scale score (mRS; favorable: 0-2). Cortical rTTP was significantly shorter before (3.33 ± 1.36 seconds; P = .03) and after intervention (2.05 ± 0.70 seconds; P = .003) in patients with favorable clinical outcome. Additionally, age (P = .005) and initial National Institutes of Health Stroke Scale score (P = .02) were significantly different between the patients, whereas reperfusion index and time as well as initially estimated infarct size were not. In multivariate analysis, only post-procedural rTTP (P = .005) was independently associated with favorable clinical outcome. A post-procedural rTTP of 2.29 seconds might be a threshold for predicting favorable clinical outcome. Ultraearly quantitative assessment of reperfusion status after successful MCA recanalization reveals post-procedural cortical rTTP as a possible independent prognostic marker for predicting favorable clinical outcome, and even determining a threshold value might be possible. In consequence, focusing stroke therapy on microcirculatory patency could be valuable to improve outcome. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert level without any additional human postprocessing.

  4. The introduction of capillary structures in 4D simulated vascular tree for ART 3.5D algorithm further validation

    NASA Astrophysics Data System (ADS)

    Barra, Beatrice; El Hadji, Sara; De Momi, Elena; Ferrigno, Giancarlo; Cardinale, Francesco; Baselli, Giuseppe

    2017-03-01

    Several neurosurgical procedures, such as the treatment of Arteriovenous Malformations (AVMs), aneurysm embolizations and StereoElectroEncephaloGraphy (SEEG), require accurate reconstruction of the cerebral vascular tree, as well as the classification of arteries and veins, in order to increase the safety of the intervention. Segmentation of arteries and veins from 4D CT perfusion scans has already been proposed in different studies. Nonetheless, such procedures require long acquisition protocols and the radiation dose given to the patient is not negligible. Hence, space is open to approaches attempting to recover the dynamic information from standard Contrast Enhanced Cone Beam Computed Tomography (CE-CBCT) scans. The algorithm proposed by our team is called ART 3.5D. It is a novel algorithm based on the postprocessing of both the angiogram and the raw data of a standard Digital Subtraction Angiography from a CBCT (DSACBCT), allowing artery and vein segmentation and labeling without requiring any additional radiation exposure for the patient or lowering the resolution. In addition, while previous versions of the algorithm considered only the distinction of arteries and veins, here capillary phase simulation and identification is introduced, in order to provide further information useful for more precise vasculature segmentation.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bargellini, Irene, E-mail: irenebargellini@hotmail.com; Turini, Francesca; Bozzi, Elena

    To assess feasibility of proper hepatic artery catheterization using a 3D model obtained from preprocedural computed tomographic angiography (CTA), fused with real-time fluoroscopy, during transarterial chemoembolization of hepatocellular carcinoma. Twenty consecutive cirrhotic patients with hepatocellular carcinoma undergoing transarterial chemoembolization were prospectively enrolled onto the study. The early arterial phase axial images of the preprocedural CTA were postprocessed on an independent workstation connected to the angiographic system (Innova 4100; GE Healthcare, Milwaukee, WI), obtaining a 3D volume rendering image (VR) that included abdominal aorta, splanchnic arteries, and first and second lumbar vertebrae. The VR image was manually registered to the real-time X-ray fluoroscopy, with the lumbar spine used as the reference. The VR image was then used as guidance to selectively catheterize the proper hepatic artery. The procedure was considered successful when performed with no need for intraarterial contrast injections or angiographic acquisitions. The procedure was successful in 19 (95 %) of 20 patients. In one patient, celiac trunk angiography was required for the presence of a significant ostial stenosis that was underestimated at computed tomography. Time for image reconstruction and registration was <10 min in all cases. The use of preprocedural CTA model with fluoroscopy enables confident and direct catheterization of the proper hepatic artery with no need for preliminary celiac trunk angiography, thus reducing radiation exposure and contrast media administration.

  6. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.

    1984-01-01

    A generic computer simulation for manipulator systems (ROBSIM) was implemented and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed are: (1) capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods including manual force/torque and active compliances control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation.

  7. Post-processing for improving hyperspectral anomaly detection accuracy

    NASA Astrophysics Data System (ADS)

    Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang

    2015-10-01

    Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. Firstly, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrated that the proposed method reduced the false alarm rates of the RX-based detector.
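
    A hedged sketch of the two-stage idea described above: a global RX score (Mahalanobis distance to the scene's background statistics) is thresholded, and a morphological dilation then also flags pixels around high-scoring ones. The threshold and structuring element are illustrative assumptions, not the values used in the paper:

        import numpy as np
        from scipy.ndimage import binary_dilation

        def rx_scores(cube: np.ndarray) -> np.ndarray:
            """cube: (rows, cols, bands) hyperspectral image; returns the RX score per pixel."""
            rows, cols, bands = cube.shape
            x = cube.reshape(-1, bands).astype(float)
            d = x - x.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
            return np.einsum("ij,jk,ik->i", d, cov_inv, d).reshape(rows, cols)

        def detect_anomalies(cube: np.ndarray, threshold: float) -> np.ndarray:
            mask = rx_scores(cube) > threshold
            return binary_dilation(mask, structure=np.ones((3, 3), bool))  # grow around anomalies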

  8. Photoshop tips and tricks every facial plastic surgeon should know.

    PubMed

    Hamilton, Grant S

    2010-05-01

    Postprocessing of patient photographs is an important skill for the facial plastic surgeon. Postprocessing is intended to optimize the image, not change the surgical result. This article refers to use of Photoshop CS3 (Adobe Systems Incorporated, San Jose, CA, USA) for descriptions, but any recent version of Photoshop is sufficiently similar. Topics covered are types of camera, shooting formats, color balance, alignment of preoperative and postoperative photographs, and preparing figures for publication. Each section presents step-by-step guidance and instructions along with a graphic depiction of the computer screen and Photoshop tools under discussion. Copyright 2010 Elsevier Inc. All rights reserved.

  9. Optimal quantum observables

    NASA Astrophysics Data System (ADS)

    Haapasalo, Erkka; Pellonpää, Juha-Pekka

    2017-12-01

    Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.
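
    The classical post-processing relation referred to above is commonly formalized with a Markov kernel; a standard formulation (notation chosen here, not quoted from the paper): an observable N with outcomes y is a post-processing of M with outcomes x if

        \[
          \mathsf{N}(y) \;=\; \sum_{x} \nu(y \mid x)\, \mathsf{M}(x),
          \qquad \nu(y \mid x) \ge 0, \qquad \sum_{y} \nu(y \mid x) = 1 \ \text{for all } x .
        \]

    Post-processing clean observables are then the maximal elements of the preorder this relation induces: whenever such an observable arises as a post-processing of some M, that M is in turn a post-processing of it.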

  10. Shortcomings of low-cost imaging systems for viewing computed radiographs.

    PubMed

    Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N

    2000-01-01

    To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (PC) with a common display card and color monitor, and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly, irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. The review at the radiological workstation was superior to the review done using the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain has been observed for the high-end monochrome monitor compared to the color display. However, the color monitor was more strongly affected by high ambient illumination.

  11. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization scores, should be able to locate a similar, unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
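
    For the Gaussian case discussed above, both optimization scores have closed forms, which is what makes the comparison tractable; the standard expressions (with the regression model entering through mu and sigma) are

        \[
          \mathrm{CRPS}\bigl(\mathcal{N}(\mu,\sigma^{2}),\,y\bigr)
            \;=\; \sigma\!\left[\, z\bigl(2\Phi(z)-1\bigr) \;+\; 2\varphi(z) \;-\; \frac{1}{\sqrt{\pi}} \right],
          \qquad z=\frac{y-\mu}{\sigma},
        \]
        \[
          -\log L(\mu,\sigma \mid y) \;=\; \tfrac{1}{2}\log\!\bigl(2\pi\sigma^{2}\bigr) \;+\; \frac{(y-\mu)^{2}}{2\sigma^{2}},
        \]

    where Phi and varphi denote the standard normal CDF and PDF; either score is averaged over the training sample and minimized with respect to the regression coefficients.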

  12. Pre- and post-processing for Cosmic/NASTRAN on personal computers and mainframes

    NASA Technical Reports Server (NTRS)

    Kamel, H. A.; Mobley, A. V.; Nagaraj, B.; Watkins, K. W.

    1986-01-01

    An interface between Cosmic/NASTRAN and GIFTS has recently been released, combining the powerful pre- and post-processing capabilities of GIFTS with Cosmic/NASTRAN's analysis capabilities. The interface operates on a wide range of computers, even linking Cosmic/NASTRAN and GIFTS when the two are on different computers. GIFTS offers a wide range of elements for use in model construction, each translated by the interface into the nearest Cosmic/NASTRAN equivalent; and the options of automatic or interactive modelling and loading in GIFTS make pre-processing easy and effective. The interface itself includes the programs GFTCOS, which creates the Cosmic/NASTRAN input deck (and, if desired, control deck) from the GIFTS Unified Data Base; COSGFT, which translates the displacements from the Cosmic/NASTRAN analysis back into GIFTS; and HOSTR, which handles stress computations for a few higher-order elements available in the interface, but not supported by the GIFTS processor STRESS. Finally, the versatile display options in GIFTS post-processing allow the user to examine the analysis results through an especially wide range of capabilities, including such possibilities as creating composite loading cases, plotting in color and animating the analysis.

  13. Using Meteorological Analogues for Reordering Postprocessed Precipitation Ensembles in Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Bellier, Joseph; Bontron, Guillaume; Zin, Isabella

    2017-12-01

    Meteorological ensemble forecasts are nowadays widely used as input of hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, using most of the time univariate techniques that apply independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that make use of meteorological analogues for reconstructing spatiotemporal dependence structures of precipitation forecasts. Performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only multivariate precipitation forecasts but also the corresponding streamflow forecasts that derive from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
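
    A minimal sketch of the standard Schaake shuffle for a single variable, to make the reordering step concrete: at each site (or lead time) the sorted forecast values are reassigned according to the ranks of a historical observation template, so the template's space-time dependence is imprinted on the postprocessed margins. The array layout is an illustrative assumption:

        import numpy as np
        from scipy.stats import rankdata

        def schaake_shuffle(forecast: np.ndarray, template: np.ndarray) -> np.ndarray:
            """forecast, template: (n_members, n_sites_or_leadtimes) arrays."""
            out = np.empty_like(forecast)
            for j in range(forecast.shape[1]):
                ranks = rankdata(template[:, j], method="ordinal").astype(int) - 1  # 0-based ranks
                out[:, j] = np.sort(forecast[:, j])[ranks]
            return out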

  14. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates and preserve the features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply, for each line, an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model, which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow-scan data. For atomic-scale images, an additional first step that removes strong line-by-line background fluctuations and a fourth step that replaces the postprocessed image by its ranking map as the final atomic-resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.
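
    A hedged stand-in for the first, large-scale step (this is a generic per-line alignment, not the authors' registration method and not the CIAFA step): flip the backward scan line, find the integer lag that best correlates it with the forward line, shift, and average the aligned pair:

        import numpy as np

        def align_line(forward: np.ndarray, backward: np.ndarray) -> np.ndarray:
            """Align one backward scan line to its forward counterpart and average them."""
            bwd = backward[::-1]                                   # backward scan runs in reverse
            f = forward - forward.mean()
            b = bwd - bwd.mean()
            lag = int(np.argmax(np.correlate(f, b, mode="full"))) - (b.size - 1)
            aligned = np.roll(bwd, lag)                            # integer-pixel, circular shift for simplicity
            return 0.5 * (forward + aligned)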

  15. StagLab: Post-Processing and Visualisation in Geodynamics

    NASA Astrophysics Data System (ADS)

    Crameri, Fabio

    2017-04-01

    Despite being simplifications of nature, today's Geodynamic numerical models can, often do, and sometimes have to become very complex. Additionally, a steadily-increasing amount of raw model data results from more elaborate numerical codes and the still continuously-increasing computational power available for their execution. The current need for efficient post-processing and sensible visualisation is thus apparent. StagLab (www.fabiocrameri.ch/software) provides such much-needed, strongly-automated post-processing in combination with state-of-the-art visualisation. Written in MatLab, StagLab is simple, flexible, efficient and reliable. It produces figures and movies that are both fully-reproducible and publication-ready. StagLab's post-processing capabilities include numerous diagnostics for plate tectonics and mantle dynamics. Featured are accurate plate-boundary identification, slab-polarity recognition, plate-bending derivation, mantle-plume detection, and surface-topography component splitting. These and many other diagnostics are derived conveniently from only a few parameter fields thanks to powerful image processing tools and other capable algorithms. Additionally, StagLab aims to prevent scientific visualisation pitfalls that are, unfortunately, still too common in the Geodynamics community. The misinterpretation of raw data and the exclusion of colourblind people introduced by the continued use of the rainbow (a.k.a. jet) colour scheme are just one, but a dramatic, example (e.g., Rogowitz and Treinish, 1998; Light and Bartlein, 2004; Borland and Taylor, 2007). StagLab is currently optimised for binary StagYY output (e.g., Tackley, 2008), but is adjustable for potential use with other Geodynamic codes. Additionally, StagLab's post-processing routines are open-source. REFERENCES: Borland, D., and R. M. Taylor II (2007), Rainbow color map (still) considered harmful, IEEE Computer Graphics and Applications, 27(2), 14-17. Light, A., and P. J. Bartlein (2004), The end of the rainbow? Color schemes for improved data graphics, Eos Trans. AGU, 85(40), 385-391. Rogowitz, B. E., and L. A. Treinish (1998), Data visualization: the end of the rainbow, IEEE Spectrum, 35(12), 52-59, doi:10.1109/6.736450. Tackley, P. J. (2008), Modelling compressible mantle convection with large viscosity contrasts in a three-dimensional spherical shell using the yin-yang grid, Physics of the Earth and Planetary Interiors, 171(1-4), 7-18.

  16. Custom 3D Printable Silicones with Tunable Stiffness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durban, Matthew M.; Lenhardt, Jeremy M.; Wu, Amanda S.

    Silicone elastomers have broad versatility within a variety of potential advanced materials applications, such as soft robotics, biomedical devices, and metamaterials. Furthermore, a series of custom 3D printable silicone inks with tunable stiffness is developed, formulated, and characterized. The silicone inks exhibit excellent rheological behavior for 3D printing, as observed from the printing of porous structures with controlled architectures. Here, the capability to tune the stiffness of printable silicone materials via careful control over the chemistry, network formation, and crosslink density of the ink formulations in order to overcome the challenging interplay between ink development, post-processing, material properties, and performance is demonstrated.

  17. Custom 3D Printable Silicones with Tunable Stiffness

    DOE PAGES

    Durban, Matthew M.; Lenhardt, Jeremy M.; Wu, Amanda S.; ...

    2017-12-06

    Silicone elastomers have broad versatility within a variety of potential advanced materials applications, such as soft robotics, biomedical devices, and metamaterials. Furthermore, a series of custom 3D printable silicone inks with tunable stiffness is developed, formulated, and characterized. The silicone inks exhibit excellent rheological behavior for 3D printing, as observed from the printing of porous structures with controlled architectures. Here, the capability to tune the stiffness of printable silicone materials via careful control over the chemistry, network formation, and crosslink density of the ink formulations in order to overcome the challenging interplay between ink development, post-processing, material properties, and performance is demonstrated.

  18. The development of a 3D risk analysis method.

    PubMed

    I, Yet-Pole; Cheng, Te-Lung

    2008-05-01

    Much attention has been paid to quantitative risk analysis (QRA) research in recent years due to increasingly severe disasters in the process industries. Owing to the computational complexity, very few software packages, such as SAFETI, can make the risk presentation meet practical requirements. Moreover, the traditional risk presentation method, like the individual risk contour in SAFETI, is mainly based on the consequence analysis results of dispersion modeling, which usually assumes that the vapor cloud disperses over a constant ground roughness on flat terrain with no obstructions or concentration fluctuations; this is quite different from the real situation in a chemical process plant. These models usually over-predict the hazardous regions in order to remain conservative, which also increases the uncertainty of the simulation results. A more rigorous model, such as a computational fluid dynamics (CFD) model, can resolve these limitations; however, it cannot resolve the complexity of the risk calculations. In this research, a conceptual three-dimensional (3D) risk calculation method was proposed that combines the results of a series of CFD simulations with post-processing procedures to obtain 3D individual risk iso-surfaces. It is believed that such a technique will not only apply to risk analysis at ground level, but will also be extended to aerial, submarine, or space risk analyses in the near future.
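    To make the post-processing step concrete, the sketch below (a hedged illustration, not the authors' method) assembles a 3D individual-risk field as the frequency-weighted sum of per-scenario probability-of-fatality fields, from which iso-surfaces can then be extracted; all grids, fields and frequencies are invented for the example.

    ```python
    # Frequency-weighted combination of scenario-wise CFD consequence fields.
    import numpy as np

    def individual_risk(frequencies, fatality_fields):
        """Sum frequency-weighted probability-of-fatality fields over release scenarios."""
        risk = np.zeros_like(fatality_fields[0])
        for freq, p_fat in zip(frequencies, fatality_fields):
            risk += freq * p_fat
        return risk  # per-year individual risk on the 3D grid

    # Two synthetic scenarios on a coarse 3D grid (toy probability-of-fatality fields).
    z, y, x = np.meshgrid(*(np.linspace(-1, 1, 40),) * 3, indexing="ij")
    fields = [np.exp(-((x - 0.3) ** 2 + y ** 2 + z ** 2) / 0.1),
              np.exp(-((x + 0.3) ** 2 + y ** 2 + (z - 0.2) ** 2) / 0.2)]
    freqs = [1e-4, 5e-5]  # assumed occurrence frequencies (per year)
    risk = individual_risk(freqs, fields)
    # A 1e-6 per-year iso-surface could then be extracted for visualisation,
    # e.g. with skimage.measure.marching_cubes(risk, level=1e-6).
    print("maximum individual risk (1/yr):", float(risk.max()))
    ```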

  19. A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting

    NASA Astrophysics Data System (ADS)

    Košnik, N.; Guštin, A. Z.; Mavrič, B.; Šarler, B.

    2016-03-01

    Simulation and control of macrosegregation, deformation and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) are important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation and grain structure of LFEMC-processed aluminum alloys, with a focus on axisymmetric billets. The mixture equations with the lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM and grain structure outcomes of the model are demonstrated. The model enables a systematic study of the complicated influences of the process parameters, including the intensity and frequency of the electromagnetic field. The meshless solution framework, with the simplest physical models implemented, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology.

  20. Strategies for gallium removal after focused ion beam patterning of ferroelectric oxide nanostructures

    NASA Astrophysics Data System (ADS)

    Schilling, A.; Adams, T.; Bowman, R. M.; Gregg, J. M.

    2007-01-01

    As part of a study into the properties of ferroelectric single crystals at nanoscale dimensions, the effects that focused ion beam (FIB) processing can have, in terms of structural damage and ion implantation, on perovskite oxide materials have been examined, and a post-processing procedure has been developed to remove such effects. Single crystal material of the perovskite ferroelectric barium titanate (BaTiO3) has been patterned into thin film lamellae structures using a FIB microscope. Previous work had shown that FIB patterning induced gallium impregnation and associated creation of amorphous layers in a surface region of the single crystal material some 20 nm thick, but that both recrystallization and expulsion of gallium could be achieved through thermal annealing in air. Here we confirm this observation, but find that thermally induced gallium expulsion is associated with the formation of gallium-rich platelets on the surface of the annealed material. These platelets are thought to be gallium oxide. Etching using nitric and hydrochloric acids had no effect on the gallium-rich platelets. Effective platelet removal involved thermal annealing at 700 °C for 1 h in a vacuum followed by 1 h in oxygen, and then a post-annealing low-power plasma clean in an Ar/O atmosphere. Similar processing is likely to be necessary for the full recovery of post FIB-milled nanostructures in oxide ceramic systems in general.

  1. Analysis of the Space Propulsion System Problem Using RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diego Mandelli; Curtis Smith; Cristian Rabiti

    This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENviroment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte-Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also allows interfacing with several numerical codes, such as RELAP5 and RELAP-7, and with ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator has been developed in Python and then interfaced to RAVEN. This simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte-Carlo). This analysis is performed both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to produce risk-informed insights, such as the conditions under which different strategies can be followed.
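    The stochastic part of such an analysis can be sketched in a few lines. The example below is a hedged, stand-alone illustration rather than the RAVEN interface or the actual benchmark model: a crude Monte-Carlo estimate of mission failure probability for a hypothetical system that fails if both redundant thrusters fail or a distribution line fails, with invented failure probabilities.

    ```python
    # Crude Monte-Carlo estimate of system unreliability (illustrative numbers only).
    import numpy as np

    rng = np.random.default_rng(42)
    n_samples = 100_000
    p_thruster, p_line = 0.02, 0.005   # hypothetical per-mission failure probabilities

    thruster_a = rng.random(n_samples) < p_thruster
    thruster_b = rng.random(n_samples) < p_thruster
    line = rng.random(n_samples) < p_line

    mission_failure = (thruster_a & thruster_b) | line
    print("estimated mission failure probability:", mission_failure.mean())
    ```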

  2. Fetal functional imaging portrays heterogeneous development of emerging human brain networks

    PubMed Central

    Jakab, András; Schwartz, Ernst; Kasprian, Gregor; Gruber, Gerlinde M.; Prayer, Daniela; Schöpf, Veronika; Langs, Georg

    2014-01-01

    The functional connectivity architecture of the adult human brain enables complex cognitive processes, and exhibits a remarkably complex structure shared across individuals. We are only beginning to understand its heterogeneous structure, ranging from a strongly hierarchical organization in sensorimotor areas to widely distributed networks in areas such as the parieto-frontal cortex. Our study relied on the functional magnetic resonance imaging (fMRI) data of 32 fetuses with no detectable morphological abnormalities. After adapting functional magnetic resonance acquisition, motion correction, and nuisance signal reduction procedures of resting-state functional data analysis to fetuses, we extracted neural activity information for major cortical and subcortical structures. Resting fMRI networks were observed for increasing regional functional connectivity from 21st to 38th gestational weeks (GWs) with a network-based statistical inference approach. The overall connectivity network, short range, and interhemispheric connections showed sigmoid expansion curve peaking at the 26–29 GW. In contrast, long-range connections exhibited linear increase with no periods of peaking development. Region-specific increase of functional signal synchrony followed a sequence of occipital (peak: 24.8 GW), temporal (peak: 26 GW), frontal (peak: 26.4 GW), and parietal expansion (peak: 27.5 GW). We successfully adapted functional neuroimaging and image post-processing approaches to correlate macroscopical scale activations in the fetal brain with gestational age. This in vivo study reflects the fact that the mid-fetal period hosts events that cause the architecture of the brain circuitry to mature, which presumably manifests in increasing strength of intra- and interhemispheric functional macro connectivity. PMID:25374531

  3. Fetal functional imaging portrays heterogeneous development of emerging human brain networks.

    PubMed

    Jakab, András; Schwartz, Ernst; Kasprian, Gregor; Gruber, Gerlinde M; Prayer, Daniela; Schöpf, Veronika; Langs, Georg

    2014-01-01

    The functional connectivity architecture of the adult human brain enables complex cognitive processes, and exhibits a remarkably complex structure shared across individuals. We are only beginning to understand its heterogeneous structure, ranging from a strongly hierarchical organization in sensorimotor areas to widely distributed networks in areas such as the parieto-frontal cortex. Our study relied on the functional magnetic resonance imaging (fMRI) data of 32 fetuses with no detectable morphological abnormalities. After adapting functional magnetic resonance acquisition, motion correction, and nuisance signal reduction procedures of resting-state functional data analysis to fetuses, we extracted neural activity information for major cortical and subcortical structures. Resting fMRI networks were observed for increasing regional functional connectivity from 21st to 38th gestational weeks (GWs) with a network-based statistical inference approach. The overall connectivity network, short range, and interhemispheric connections showed sigmoid expansion curve peaking at the 26-29 GW. In contrast, long-range connections exhibited linear increase with no periods of peaking development. Region-specific increase of functional signal synchrony followed a sequence of occipital (peak: 24.8 GW), temporal (peak: 26 GW), frontal (peak: 26.4 GW), and parietal expansion (peak: 27.5 GW). We successfully adapted functional neuroimaging and image post-processing approaches to correlate macroscopical scale activations in the fetal brain with gestational age. This in vivo study reflects the fact that the mid-fetal period hosts events that cause the architecture of the brain circuitry to mature, which presumably manifests in increasing strength of intra- and interhemispheric functional macro connectivity.

  4. Development and Implementation of a Generic Analysis Template for Structural-Thermal-Optical-Performance Modeling

    NASA Technical Reports Server (NTRS)

    Scola, Salvatore; Stavely, Rebecca; Jackson, Trevor; Boyer, Charlie; Osmundsen, Jim; Turczynski, Craig; Stimson, Chad

    2016-01-01

    Performance-related effects of system level temperature changes can be a key consideration in the design of many types of optical instruments. This is especially true for space-based imagers, which may require complex thermal control systems to maintain alignment of the optical components. Structural-Thermal-Optical-Performance (STOP) analysis is a multi-disciplinary process that can be used to assess the performance of these optical systems when subjected to the expected design environment. This type of analysis can be very time consuming, which makes it difficult to use as a trade study tool early in the project life cycle. In many cases, only one or two iterations can be performed over the course of a project. This limits the design space to best practices since it may be too difficult, or take too long, to test new concepts analytically. In order to overcome this challenge, automation and a standard procedure for performing these studies are essential. A methodology was developed within the framework of the Comet software tool that captures the basic inputs, outputs, and processes used in most STOP analyses. This resulted in a generic, reusable analysis template that can be used for design trades for a variety of optical systems. The template captures much of the upfront setup such as meshing, boundary conditions, data transfer, naming conventions, and post-processing, and therefore saves time for each subsequent project. A description of the methodology and the analysis template is presented, and results are described for a simple telescope optical system.
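    The kind of automated trade loop such a template enables can be sketched generically. The toy example below is not the Comet template or its API: the discipline steps are stand-in stub functions with made-up physics, shown only to illustrate chaining meshing, thermal, structural and optical steps inside a parameter sweep.

    ```python
    # Generic STOP-style trade loop with stubbed discipline steps (illustrative only).
    def build_mesh(params):        return {"elements": 10_000, **params}
    def run_thermal(mesh, env):    return 5.0 if env == "hot_case" else -5.0       # delta-T (K)
    def run_structural(mesh, dT):  return 1e-3 * dT / mesh["bench_thickness_mm"]   # toy distortion
    def run_optical(distortion):   return abs(distortion) * 1e3                    # toy wavefront metric

    def run_stop_case(params, env):
        mesh = build_mesh(params)
        dT = run_thermal(mesh, env)
        distortion = run_structural(mesh, dT)
        return run_optical(distortion)

    # Trade study: sweep a design variable across candidate values.
    results = {t: run_stop_case({"bench_thickness_mm": t}, "hot_case") for t in (5, 10, 15)}
    print(results)
    ```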

  5. Development and implementation of a generic analysis template for structural-thermal-optical-performance modeling

    NASA Astrophysics Data System (ADS)

    Scola, Salvatore; Stavely, Rebecca; Jackson, Trevor; Boyer, Charlie; Osmundsen, Jim; Turczynski, Craig; Stimson, Chad

    2016-09-01

    Performance-related effects of system level temperature changes can be a key consideration in the design of many types of optical instruments. This is especially true for space-based imagers, which may require complex thermal control systems to maintain alignment of the optical components. Structural-Thermal-Optical-Performance (STOP) analysis is a multi-disciplinary process that can be used to assess the performance of these optical systems when subjected to the expected design environment. This type of analysis can be very time consuming, which makes it difficult to use as a trade study tool early in the project life cycle. In many cases, only one or two iterations can be performed over the course of a project. This limits the design space to best practices since it may be too difficult, or take too long, to test new concepts analytically. In order to overcome this challenge, automation and a standard procedure for performing these studies are essential. A methodology was developed within the framework of the Comet software tool that captures the basic inputs, outputs, and processes used in most STOP analyses. This resulted in a generic, reusable analysis template that can be used for design trades for a variety of optical systems. The template captures much of the upfront setup such as meshing, boundary conditions, data transfer, naming conventions, and post-processing, and therefore saves time for each subsequent project. A description of the methodology and the analysis template is presented, and results are described for a simple telescope optical system.

  6. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Shen, Liangbo; Todorich, Bozho; Shieh, Christine; Kuo, Anthony; Toth, Cynthia; Izatt, Joseph A.

    2016-03-01

    Ophthalmic surgeons manipulate micron-scale tissues using stereopsis through an operating microscope and instrument shadowing for depth perception. While ophthalmic microsurgery has benefitted from rapid advances in instrumentation and techniques, the basic principles of the stereo operating microscope have not changed since the 1930's. Optical Coherence Tomography (OCT) has revolutionized ophthalmic imaging and is now the gold standard for preoperative and postoperative evaluation of most retinal and many corneal procedures. We and others have developed initial microscope-integrated OCT (MIOCT) systems for concurrent OCT and operating microscope imaging, but these are limited to 2D real-time imaging and require offline post-processing for 3D rendering and visualization. Our previously presented 4D MIOCT system can record and display the 3D surgical field stereoscopically through the microscope oculars using a dual-channel heads-up display (HUD) at up to 10 micron-scale volumes per second. In this work, we show that 4D MIOCT guidance improves the accuracy of depth-based microsurgical maneuvers (with statistical significance) in mock surgery trials in a wet lab environment. Additionally, 4D MIOCT was successfully performed in 38/45 (84%) posterior and 14/14 (100%) anterior eye human surgeries, and revealed previously unrecognized lesions that were invisible through the operating microscope. These lesions, such as residual and potentially damaging retinal deformation during pathologic membrane peeling, were visualized in real-time by the surgeon. Our integrated system provides an enhanced 4D surgical visualization platform that can improve current ophthalmic surgical practice and may help develop and refine future microsurgical techniques.

  7. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
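    The bundle-adjustment core referred to above can be illustrated with a small synthetic example. This is a hedged sketch, not the in-process solver of the paper: the intrinsics are assumed known, there are only two views and twenty targets, and SciPy's generic least-squares routine stands in for the paper's efficient formulation.

    ```python
    # Toy bundle adjustment: refine two camera poses and the 3D target coordinates
    # by minimising the 2D reprojection error on synthetic, noisy observations.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    f = 1000.0                                            # focal length in pixels (assumed known)
    rng = np.random.default_rng(1)
    points3d = rng.uniform(-1, 1, (20, 3)) + [0.0, 0.0, 5.0]
    rvecs = np.array([[0.0, 0.0, 0.0], [0.0, 0.2, 0.0]])  # two camera orientations (rotation vectors)
    tvecs = np.array([[0.0, 0.0, 0.0], [-0.5, 0.0, 0.0]]) # two camera positions

    def project(p3d, rvec, tvec):
        pc = Rotation.from_rotvec(rvec).apply(p3d) + tvec
        return f * pc[:, :2] / pc[:, 2:3]

    obs = np.stack([project(points3d, r, t) for r, t in zip(rvecs, tvecs)])
    obs += rng.normal(scale=0.5, size=obs.shape)          # pixel noise

    def residuals(x):
        rv, tv = x[:6].reshape(2, 3), x[6:12].reshape(2, 3)
        pts = x[12:].reshape(-1, 3)
        return np.concatenate([(project(pts, r, t) - o).ravel()
                               for r, t, o in zip(rv, tv, obs)])

    x0 = np.concatenate([rvecs.ravel(), tvecs.ravel(), (points3d + 0.05).ravel()])
    sol = least_squares(residuals, x0)
    print("RMS reprojection error (pixels):", float(np.sqrt(np.mean(sol.fun ** 2))))
    ```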

  8. Development and validation of a new guidance device for lateral approach stereotactic breast biopsy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, K.; Kornecki, A.; Bax, J.

    2009-06-15

    Stereotactic breast biopsy (SBB) is the gold standard for minimally invasive breast cancer diagnosis. Current systems rely on one of two methods for needle insertion: A vertical approach (perpendicular to the breast compression plate) or a lateral approach (parallel to the compression plate). While the vertical approach is more frequently used, it is not feasible in patients with thin breasts (<3 cm thick after compression) or with superficial lesions. Further, existing SBB guidance hardware provides at most one degree of rotational freedom in the needle trajectory, and as such requires a separate skin incision for each biopsy target. The authors present a new design of lateral guidance device for SBB, which addresses the limitations of the vertical approach and provides improvements over the existing lateral guidance hardware. Specifically, the new device provides (1) an adjustable rigid needle support to minimize needle deflection within the breast and (2) an additional degree of rotational freedom in the needle trajectory, allowing the radiologist to sample multiple targets through a single skin incision. This device was compared to a commercial lateral guidance device in a series of phantom experiments. Needle placement error using each device was measured in agar phantoms for needle insertions at lateral depths of 2 and 5 cm. The biopsy success rate for each device was then estimated by performing biopsy procedures in commercial SBB phantoms. SBB performed with the new lateral guidance device provided reduced needle placement error relative to the commercial lateral guidance device (0.89±0.22 vs 1.75±0.35 mm for targets at 2 cm depth; 1.94±0.20 vs 3.21±0.31 mm for targets at 5 cm depth). The new lateral guidance device also provided improved biopsy accuracy in SBB procedures compared to the commercial lateral guidance device (100% vs 58% success rate). Finally, experiments were performed to demonstrate that the new device can accurately sample lesions within thin breast phantoms and multiple lesions through a single incision point. This device can be incorporated directly into the clinical SBB procedural workflow, with no additional electrical hardware, software, postprocessing, or image analysis.

  9. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.

  10. Stent deployment protocol for optimized real-time visualization during endovascular neurosurgery.

    PubMed

    Silva, Michael A; See, Alfred P; Dasenbrock, Hormuzdiyar H; Ashour, Ramsey; Khandelwal, Priyank; Patel, Nirav J; Frerichs, Kai U; Aziz-Sultan, Mohammad A

    2017-05-01

    Successful application of endovascular neurosurgery depends on high-quality imaging to define the pathology and the devices as they are being deployed. This is especially challenging in the treatment of complex cases, particularly in proximity to the skull base or in patients who have undergone prior endovascular treatment. The authors sought to optimize real-time image guidance using a simple algorithm that can be applied to any existing fluoroscopy system. Exposure management (exposure level, pulse management) and image post-processing parameters (edge enhancement) were modified from traditional fluoroscopy to improve visualization of device position and material density during deployment. Examples include the deployment of coils in small aneurysms, coils in giant aneurysms, the Pipeline embolization device (PED), the Woven EndoBridge (WEB) device, and carotid artery stents. The authors report on the development of the protocol and their experience using representative cases. The stent deployment protocol is an image capture and post-processing algorithm that can be applied to existing fluoroscopy systems to improve real-time visualization of device deployment without hardware modifications. Improved image guidance facilitates aneurysm coil packing and proper positioning and deployment of carotid artery stents, flow diverters, and the WEB device, especially in the context of complex anatomy and an obscured field of view.

  11. Time-resolved fast-neutron radiography of air-water two-phase flows in a rectangular channel by an improved detection system

    NASA Astrophysics Data System (ADS)

    Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Bromberger, Benjamin; Tittelmeier, Kai

    2015-07-01

    In previous work, we demonstrated the feasibility of high-frame-rate, fast-neutron radiography of generic air-water two-phase flows in a 1.5 cm thick, rectangular flow channel. The experiments were carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany, using a multi-frame, time-resolved detector developed for fast neutron resonance radiography. The results were, however, not fully optimal, and we therefore decided to modify the detector and optimize it for the given application, as described in the present work. Furthermore, we improved the image post-processing methodology and the noise suppression. Using the tailored detector and the improved post-processing, a significant increase in image quality and an order of magnitude lower exposure times, down to 3.33 ms, have been achieved with minimized motion artifacts. As in the previous study, different two-phase flow regimes such as bubbly, slug, and churn flows have been examined. The enhanced imaging quality enables an improved prediction of two-phase flow parameters such as the instantaneous volumetric gas fraction, bubble size, and bubble velocities. Instantaneous velocity fields around the gas enclosures can also be predicted more robustly using optical flow methods, as before.

  12. Post-processing open-source software for the CBCT monitoring of periapical lesions healing following endodontic treatment: technical report of two cases.

    PubMed

    Villoria, Eduardo M; Lenzi, Antônio R; Soares, Rodrigo V; Souki, Bernardo Q; Sigurdsson, Asgeir; Marques, Alexandre P; Fidel, Sandra R

    2017-01-01

    The aim is to describe the use of open-source software for the post-processing of CBCT imaging for the assessment of periapical lesion development after endodontic treatment. CBCT scans were retrieved from the endodontic records of two patients. Three-dimensional virtual models, voxel counting, volumetric measurement (mm³) and mean intensity of the periapical lesion were obtained with the ITK-SNAP v. 3.0 software. Three-dimensional models of the lesions were aligned and overlapped in the MeshLab software, which performed an automatic registration of the anatomical structures based on the best fit. Qualitative and quantitative analyses of the changes in lesion size after treatment were performed with the 3DMeshMetric software. ITK-SNAP v. 3.0 showed a smaller voxel count and volume for the lesion segmented in yellow, indicating a reduction in lesion volume after treatment. A higher mean intensity of the image segmented in yellow was also observed, which suggested new bone formation. Colour mapping and the "point value" tool allowed visualization of the reduction of the periapical lesions in several regions. Researchers and clinicians thus have the opportunity to use open-source software in the monitoring of endodontic periapical lesions.
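    The volumetric measurement described above amounts to counting segmented voxels and multiplying by the voxel volume. The sketch below shows this with generic NumPy code on a synthetic mask; it is not the ITK-SNAP/MeshLab workflow itself, and the voxel size is an assumed value.

    ```python
    # Lesion volume from a binary CBCT segmentation mask (generic illustration).
    import numpy as np

    def lesion_volume_mm3(mask, voxel_size_mm=(0.3, 0.3, 0.3)):
        """mask: 3D boolean array; returns volume in cubic millimetres."""
        return int(mask.sum()) * float(np.prod(voxel_size_mm))

    # Example with a synthetic spherical 'lesion' in a 100^3 volume.
    z, y, x = np.ogrid[:100, :100, :100]
    mask = (x - 50) ** 2 + (y - 50) ** 2 + (z - 50) ** 2 < 20 ** 2
    print(lesion_volume_mm3(mask), "mm^3")
    ```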

  13. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  14. Electromagnetic field strength prediction in an urban environment: A useful tool for the planning of LMSS

    NASA Technical Reports Server (NTRS)

    Vandooren, G. A. J.; Herben, M. H. A. J.; Brussaard, G.; Sforza, M.; Poiaresbaptista, J. P. V.

    1993-01-01

    A model for the prediction of the electromagnetic field strength in an urban environment is presented. The ray model, that is based on the Uniform Theory of Diffraction (UTD), includes effects of the non-perfect conductivity of the obstacles and their surface roughness. The urban environment is transformed into a list of standardized obstacles that have various shapes and material properties. The model is capable of accurately predicting the field strength in the urban environment by calculating different types of wave contributions such as reflected, edge and corner diffracted waves, and combinations thereof. Also, antenna weight functions are introduced to simulate the spatial filtering by the mobile antenna. Communication channel parameters such as signal fading, time delay profiles, Doppler shifts and delay-Doppler spectra can be derived from the ray-tracing procedure using post-processing routines. The model has been tested against results from scaled measurements at 50 GHz and proves to be accurate.

  15. Identification of overlapping communities and their hierarchy by locally calculating community-changing resolution levels

    NASA Astrophysics Data System (ADS)

    Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen

    2011-01-01

    We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
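    The greedy, fitness-driven growth of a seed community that underlies this family of methods can be sketched briefly. The example below is an LFM-style illustration with a fixed resolution exponent, not the authors' parameter-free algorithm; networkx and the karate-club test graph are assumptions of the example.

    ```python
    # Greedily grow a community around a seed node using a local fitness function.
    import networkx as nx

    def community_fitness(G, nodes, alpha=1.0):
        k_in = 2 * G.subgraph(nodes).number_of_edges()          # total internal degree
        k_out = sum(1 for n in nodes for nb in G[n] if nb not in nodes)
        return k_in / (k_in + k_out) ** alpha if (k_in + k_out) else 0.0

    def grow_community(G, seed, alpha=1.0):
        community, improved = {seed}, True
        while improved:
            improved = False
            frontier = {nb for n in community for nb in G[n]} - community
            for cand in frontier:
                if community_fitness(G, community | {cand}, alpha) > community_fitness(G, community, alpha):
                    community.add(cand)
                    improved = True
        return community

    G = nx.karate_club_graph()
    print(sorted(grow_community(G, seed=0)))
    ```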

  16. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  17. Automated Reconstruction of Three-Dimensional Fish Motion, Forces, and Torques

    PubMed Central

    Voesenek, Cees J.; Pieters, Remco P. M.; van Leeuwen, Johan L.

    2016-01-01

    Fish can move freely through the water column and make complex three-dimensional motions to explore their environment, escape or feed. Nevertheless, the majority of swimming studies is currently limited to two-dimensional analyses. Accurate experimental quantification of changes in body shape, position and orientation (swimming kinematics) in three dimensions is therefore essential to advance biomechanical research of fish swimming. Here, we present a validated method that automatically tracks a swimming fish in three dimensions from multi-camera high-speed video. We use an optimisation procedure to fit a parameterised, morphology-based fish model to each set of video images. This results in a time sequence of position, orientation and body curvature. We post-process this data to derive additional kinematic parameters (e.g. velocities, accelerations) and propose an inverse-dynamics method to compute the resultant forces and torques during swimming. The presented method for quantifying 3D fish motion paves the way for future analyses of swimming biomechanics. PMID:26752597

  18. Amateur Radio Flash Mob: Citizen Radio Science Response to a Solar Eclipse

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Frissell, N. A.

    2017-12-01

    Over a decade's worth of scientifically useful data from radio amateurs worldwide is publicly available, with momentum building in the scientific exploitation of these data. For the 2017 solar eclipse, a "flash mob" of radio amateurs was organized in the form of a contest. Licensed radio amateurs transmitted on specific frequency bands, with awards given for a new generation of raw data collection that allows sophisticated post-processing of raw ADC data to extract quantities such as the Doppler shift due to ionospheric lifting. We discuss transitioning science priorities to gamified scoring procedures that incentivize the public to submit the highest quality and quantity of archival raw radio science data. The choice of frequency bands to encourage in the face of regulatory limitations is also discussed. An update is given on initial field experiments using wideband experimental modulation, specially licensed yet receivable by radio amateurs, for high spatiotemporal resolution imaging of the ionosphere. The cost of this equipment is less than $500 per node, comparing favorably to legacy oblique ionospheric sounding networks.

  19. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  20. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
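    For reference, the self-consistent WHAM iteration that the regression approach avoids can be written compactly. The sketch below is a generic one-dimensional implementation with harmonic biases and synthetic histograms (kT set to 1); it is not the authors' code, and all inputs are invented for illustration.

    ```python
    # Standard 1D WHAM self-consistent iteration (kT = 1).
    import numpy as np

    def wham(hists, biases, n_samples, n_iter=5000, tol=1e-10):
        """hists, biases: (windows, bins); n_samples: samples per window."""
        f = np.zeros(len(hists))                          # window free energies
        for _ in range(n_iter):
            denom = (n_samples[:, None] * np.exp(f[:, None] - biases)).sum(axis=0)
            p = hists.sum(axis=0) / denom                 # unbiased probability per bin
            f_new = -np.log((p[None, :] * np.exp(-biases)).sum(axis=1))
            f_new -= f_new[0]                             # fix the arbitrary offset
            if np.max(np.abs(f_new - f)) < tol:
                break
            f = f_new
        return p / p.sum(), f

    # Tiny synthetic example: 3 harmonic umbrella windows, 50 bins, flat underlying PMF.
    K = 50
    x = np.linspace(0.0, 1.0, K)
    centers, kspring = np.array([0.25, 0.5, 0.75]), 200.0
    biases = 0.5 * kspring * (x[None, :] - centers[:, None]) ** 2
    rng = np.random.default_rng(0)
    probs = np.exp(-biases) / np.exp(-biases).sum(axis=1, keepdims=True)
    hists = rng.poisson(1000 * probs).astype(float)
    p, f = wham(hists, biases, hists.sum(axis=1))
    pmf = -np.log(np.clip(p, 1e-12, None))                # potential of mean force (up to a constant)
    print("window free energies:", np.round(f, 2))
    ```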

  1. Real-time monitoring and massive inversion of source parameters of very long period seismic signals: An application to Stromboli Volcano, Italy

    USGS Publications Warehouse

    Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.

    2006-01-01

    We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
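    As a schematic illustration of the grid-search inversion described above (entirely synthetic, not the authors' implementation), the sketch below solves, for each candidate point source, a per-frequency least-squares problem for a single scalar source term against precomputed Green's functions and retains the source with the smallest waveform misfit; all dimensions and data are invented.

    ```python
    # Grid search over candidate sources with per-frequency least-squares fitting.
    import numpy as np

    rng = np.random.default_rng(3)
    n_src, n_rec, n_freq = 50, 6, 128
    G = rng.normal(size=(n_src, n_rec, n_freq)) + 1j * rng.normal(size=(n_src, n_rec, n_freq))

    true_src = 17
    s_true = np.exp(-np.linspace(0, 4, n_freq)) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_freq))
    noise = 0.05 * (rng.normal(size=(n_rec, n_freq)) + 1j * rng.normal(size=(n_rec, n_freq)))
    data = G[true_src] * s_true + noise

    def misfit(g, d):
        # Least-squares source term per frequency: s(w) = g(w)^H d(w) / |g(w)|^2
        s = (np.conj(g) * d).sum(axis=0) / (np.abs(g) ** 2).sum(axis=0)
        return float(np.linalg.norm(d - g * s) ** 2)

    errors = [misfit(G[i], data) for i in range(n_src)]
    print("best-fitting source index:", int(np.argmin(errors)), "(true:", true_src, ")")
    ```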

  2. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  3. A locally conservative stabilized continuous Galerkin finite element method for two-phase flow in poroelastic subsurfaces

    NASA Astrophysics Data System (ADS)

    Deng, Q.; Ginting, V.; McCaskill, B.; Torsu, P.

    2017-10-01

    We study the application of a stabilized continuous Galerkin finite element method (CGFEM) in the simulation of multiphase flow in poroelastic subsurfaces. The system involves a nonlinear coupling between the fluid pressure, subsurface's deformation, and the fluid phase saturation, and as such, we represent this coupling through an iterative procedure. Spatial discretization of the poroelastic system employs the standard linear finite element in combination with a numerical diffusion term to maintain stability of the algebraic system. Furthermore, direct calculation of the normal velocities from pressure and deformation does not entail a locally conservative field. To alleviate this drawback, we propose an element based post-processing technique through which local conservation can be established. The performance of the method is validated through several examples illustrating the convergence of the method, the effectivity of the stabilization term, and the ability to achieve locally conservative normal velocities. Finally, the efficacy of the method is demonstrated through simulations of realistic multiphase flow in poroelastic subsurfaces.

  4. Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.

    PubMed

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-08-25

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.

  5. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.

    PubMed

    Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas

    2016-04-01

    In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor-search, multi-scale patches, invariance to cube isometries, stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve highly competitive running time despite minimal processing power and resources. This method was evaluated on Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved, with a limited learning stage thus restricting the risk of overfit. Moreover, segmentation smoothness does not involve any post-processing.

  6. Security of quantum key distribution with multiphoton components

    PubMed Central

    Yin, Hua-Lei; Fu, Yao; Mao, Yingqiu; Chen, Zeng-Bing

    2016-01-01

    Most qubit-based quantum key distribution (QKD) protocols extract the secure key merely from single-photon component of the attenuated lasers. However, with the Scarani-Acin-Ribordy-Gisin 2004 (SARG04) QKD protocol, the unconditionally secure key can be extracted from the two-photon component by modifying the classical post-processing procedure in the BB84 protocol. Employing the merits of SARG04 QKD protocol and six-state preparation, one can extract secure key from the components of single photon up to four photons. In this paper, we provide the exact relations between the secure key rate and the bit error rate in a six-state SARG04 protocol with single-photon, two-photon, three-photon, and four-photon sources. By restricting the mutual information between the phase error and bit error, we obtain a higher secure bit error rate threshold of the multiphoton components than previous works. Besides, we compare the performances of the six-state SARG04 with other prepare-and-measure QKD protocols using decoy states. PMID:27383014

  7. LARC: computer codes for Lagrangian analysis of stress-gauge data to obtain decomposition rates through correlation to thermodynamic variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, A.B.; Wackerle, J.

    1983-07-01

    This report describes a package of five computer codes for analyzing stress-gauge data from shock-wave experiments on reactive materials. The aim of the analysis is to obtain rate laws from experiment. A Lagrangian analysis of the stress records, performed by program LANAL, provides flow histories of particle velocity, density, and energy. Three postprocessing programs, LOOKIT, LOOK1, and LOOK2, are included in the package of codes for producing graphical output of the results of LANAL. Program RATE uses the flow histories in conjunction with an equation of state to calculate reaction-rate histories. RATE can be programmed to examine correlations between the rate histories and thermodynamic variables. Observed correlations can be incorporated into an appropriately parameterized rate law. Program RATE determines the values of these parameters that best reproduce the observed rate histories. The procedure is illustrated with a sample problem.

  8. Acoustic streaming: an arbitrary Lagrangian–Eulerian perspective

    PubMed Central

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-01-01

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid–structure interaction problems in microacoustofluidic devices. After the formulation’s exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches. PMID:29051631

  9. Coupling Between CTH and LS-DYNA for Thermal Postprocessing: Application to Propellant Cookoff From a Residual Penetrator

    DTIC Science & Technology

    2006-09-01

    ∂θ1/∂t = α1 ∇²θ1;  x ∈ Ω1, t > 0  (7)  and  ∂θ2/∂t = α2 ∇²θ2;  x ∈ Ω2, t > 0,  (8)  in which α1 and α2 are the thermal diffusivities of steel and ... M30A1, respectively. These are defined by α1 = κ1/(ρ1 cp1)  (9)  and  α2 = κ2/(ρ2 cp2).  (10)  Here, κ1 and κ2 are the thermal conductivities ... Coupling Between CTH and LS-DYNA for Thermal Postprocessing: Application to Propellant Cookoff From a Residual Penetrator by Martin N

  10. Superconducting MgB2 films via precursor postprocessing approach

    NASA Astrophysics Data System (ADS)

    Paranthaman, M.; Cantoni, C.; Zhai, H. Y.; Christen, H. M.; Aytug, T.; Sathyamurthy, S.; Specht, E. D.; Thompson, J. R.; Lowndes, D. H.; Kerchner, H. R.; Christen, D. K.

    2001-06-01

    Superconducting MgB2 films with Tc=38.6 K were prepared using a precursor-deposition, ex situ postprocessing approach. Precursor films of boron, ˜0.5 μm thick, were deposited onto Al2O3 (102) substrates by electron-beam evaporation; a postanneal at 890 °C in the presence of bulk MgB2 and Mg metal produced highly crystalline MgB2 films. X-ray diffraction indicated that the films exhibit some degree of c-axis alignment, but are randomly oriented in plane. Transport current measurements of the superconducting properties show high values of the critical current density and yield an irreversibility line that exceeds that determined by magnetic measurements on bulk polycrystalline materials.

  11. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera UCXp was introduced into the Chinese market in 2008; its images consist of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages compared with same-generation cameras, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  12. A review of materials engineering in silicon-based optical fibres

    NASA Astrophysics Data System (ADS)

    Healy, Noel; Gibson, Ursula; Peacock, Anna C.

    2018-02-01

    Semiconductor optical fibre technologies have grown rapidly in the last decade and there are now a range of production and post-processing techniques that allow for a vast degree of control over the core material's optoelectronic properties. These methodologies and the unique optical fibre geometry provide an exciting platform for materials engineering and fibres can now be produced with single crystal cores, low optical losses, tunable strain, and inscribable phase composition. This review discusses the state-of-the-art regarding the production of silicon optical fibres in amorphous and crystalline form and then looks at the post-processing techniques and the improved material quality and new functionality that they afford.

  13. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  14. Operationalization of Prediction, Hindcast, and Evaluation Systems using the Freie Univ Evaluation System Framework (Freva) incl. a Showcase in Decadal Climate Prediction

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Schartner, Thomas; Ulbrich, Uwe; Cubasch, Ulrich

    2017-04-01

    Operationalization processes are important for Weather and Climate Services. Complex data and work flows need to be combined quickly to fulfill the needs of service centers. Standards in data and software formats help in automatic solutions. In this study we show a software solution that connects hindcasts, forecasts, and validation for operationalization. Freva (see below) structures data and evaluation procedures and can easily be monitored. Especially in the development process of operationalized services, Freva supports scientists and project partners. The showcase of the decadal climate prediction project MiKlip (fona-miklip.de) shows such a complex development process. Different predictions, scientists' input, tasks, and time-evolving adjustments need to be combined to host precise climate information in a web environment without losing track of its evolution. The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the meta data information of the self-describing model, reanalysis and observational data sets in a database. This implemented meta data system with its advanced but easy-to-handle search tool supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitation of the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated webshell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their e.g. post-processed results into the database of the user. This allows e.g. post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Therefore, plugged-in tools benefit from transparency and reproducibility. Furthermore, if configurations match while starting an evaluation plugin, the system suggests using results already produced by other users - saving CPU/h, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.

  15. The Galics Project: Virtual Galaxy: from Cosmological N-body Simulations

    NASA Astrophysics Data System (ADS)

    Guiderdoni, B.

    The GalICS project develops extensive semi-analytic post-processing of large cosmological simulations to describe hierarchical galaxy formation. The multiwavelength statistical properties of high-redshift and local galaxies are predicted within the large-scale structures. The fake catalogs and mock images that are generated from the outputs are used for the analysis and preparation of deep surveys. The whole set of results is now available in an on-line database that can be easily queried. The GalICS project represents a first step towards a 'Virtual Observatory of virtual galaxies'.

  16. Quality assessment and control of finite element solutions

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Babuska, Ivo

    1987-01-01

    Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive improvement practical are identified.

  17. Development of a short-term irradiance prediction system using post-processing tools on WRF-ARW meteorological forecasts in Spain

    NASA Astrophysics Data System (ADS)

    Rincón, A.; Jorba, O.; Baldasano, J. M.

    2010-09-01

    The increased contribution of solar energy among power generation sources requires an accurate estimation of surface solar irradiance, which is determined by geographical, temporal, and meteorological factors. Knowledge of the variability of these factors is essential to estimate the expected energy production and therefore helps stabilize the electricity grid and increase the reliability of available solar energy. The use of numerical meteorological models in combination with statistical post-processing tools has the potential to satisfy the requirements for short-term forecasting of solar irradiance up to several days ahead and for its application to solar devices. In this contribution, we present an assessment of a short-term irradiance prediction system based on the WRF-ARW mesoscale meteorological model (Skamarock et al., 2005) and several post-processing tools in order to improve the overall skill of the system in an annual simulation of the year 2004 in Spain. The WRF-ARW model is applied with 4 km x 4 km horizontal resolution and 38 vertical layers over the Iberian Peninsula. The hourly model irradiance is evaluated against more than 90 surface stations. The stations are used to assess the temporal and spatial fluctuations and trends of the system, evaluating three different post-processing methods: the Model Output Statistics technique (MOS; Glahn and Lowry, 1972), a recursive statistical method (REC; Boi, 2004), and a Kalman Filter Predictor (KFP; Bozic, 1994; Roeger et al., 2003). A first evaluation of the system without post-processing tools shows an overestimation of the surface irradiance, because attenuation by atmospheric absorbers other than clouds is not included in the meteorological model. This produces an annual BIAS of 16 W m-2 h-1, an annual RMSE of 106 W m-2 h-1, and an annual NMAE of 42%. The largest errors are observed in spring and summer, reaching an RMSE of 350 W m-2 h-1. Results using the Kalman Filter Predictor show a reduction of 8% in RMSE and 83% in BIAS, while the NMAE decreases to 32%. The REC method shows a reduction of 6% in RMSE and 79% in BIAS, while the NMAE decreases to 28%. When comparing stations at different altitudes, the overestimation is enhanced at coastal stations (below 200 m), reaching up to 900 W m-2 h-1. The results allow us to analyze strengths and drawbacks of the irradiance prediction system and its application to the estimation of energy production from photovoltaic cells. References: Boi, P.: A statistical method for forecasting extreme daily temperatures using ECMWF 2-m temperatures and ground station measurements, Meteorol. Appl., 11, 245-251, 2004. Bozic, S.: Digital and Kalman filtering, John Wiley, Hoboken, New Jersey, 2nd edn., 1994. Glahn, H. and Lowry, D.: The use of Model Output Statistics (MOS) in Objective Weather Forecasting, Applied Meteorology, 11, 1203-1211, 1972. Roeger, C., Stull, R., McClung, D., Hacker, J., Deng, X., and Modzelewski, H.: Verification of Mesoscale Numerical Weather Forecasts in Mountainous Terrain for Application to Avalanche Prediction, Weather and Forecasting, 18, 1140-1160, 2003. Skamarock, W., Klemp, J., Dudhia, J., Gill, D., Barker, D. M., Wang, W., and Powers, J. G.: A Description of the Advanced Research WRF Version 2, Tech. Rep. NCAR/TN-468+STR, NCAR Technical Note, 2005.
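
    The Kalman Filter Predictor mentioned above recursively estimates the systematic forecast error and removes it from the next forecast. The sketch below is a minimal scalar version of that idea; the process and observation variances (q, r) and the synthetic data are illustrative assumptions, not values from the study.

```python
import numpy as np


def kalman_bias_correction(forecasts, observations, q=1.0, r=10.0):
    """Scalar Kalman-filter bias predictor (illustrative; q and r are assumed
    process/observation error variances, not values from the study).

    The systematic forecast error is modelled as a random walk; each step the
    filter updates its bias estimate from the latest observed error, and that
    estimate is subtracted from the next raw forecast."""
    bias, p = 0.0, 1.0                       # initial bias estimate and its variance
    corrected = np.empty_like(forecasts, dtype=float)
    for t, (f, o) in enumerate(zip(forecasts, observations)):
        corrected[t] = f - bias              # correct today's forecast with the current bias estimate
        p += q                               # predict step: bias follows a random walk
        k = p / (p + r)                      # Kalman gain
        bias += k * ((f - o) - bias)         # update with the newly observed forecast error
        p *= (1.0 - k)
    return corrected


# toy usage with synthetic data
rng = np.random.default_rng(0)
obs = 500 + 50 * rng.standard_normal(100)
fcst = obs + 16 + 20 * rng.standard_normal(100)   # raw forecast with a constant positive bias
print(np.mean(fcst - obs), np.mean(kalman_bias_correction(fcst, obs) - obs))
```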

  18. The Development and Hover Test Application of a Projection Moire Interferometry Blade Displacement Measurement System

    NASA Technical Reports Server (NTRS)

    Sekula, Martin K.

    2012-01-01

    Projection moiré interferometry (PMI) was employed to measure blade deflections during a hover test of a generic model-scale rotor in the NASA Langley 14x22 subsonic wind tunnel's hover facility. PMI was one of several optical measurement techniques tasked to acquire deflection and flow visualization data for a rotor at several distinct heights above a ground plane. Two of the main objectives of this test were to demonstrate that multiple optical measurement techniques can be used simultaneously to acquire data and to identify and address deficiencies in the techniques. Several PMI-specific technical challenges needed to be addressed during the test and in post-processing of the data. These challenges included developing an efficient and accurate calibration method for an extremely large (65 inch) height range; automating the analysis of the large amount of data acquired during the test; and developing a method to determine the absolute displacement of rotor blades without a required anchor point measurement. The results indicate that the use of a single-camera/single-projector approach for the large height range reduced the accuracy of the PMI system compared to PMI systems designed for smaller height ranges. The lack of the anchor point measurement (due to a technical issue with one of the other measurement techniques) limited the ability of the PMI system to correctly measure blade displacements to only one of the three rotor heights tested. The new calibration technique reduced the data required by 80 percent, while new post-processing algorithms successfully automated the process of locating rotor blades in images, determining the blade quarter chord location, and calculating the blade root and blade tip heights above the ground plane.

  19. Enhancing the Use of Argos Satellite Data for Home Range and Long Distance Migration Studies of Marine Animals

    PubMed Central

    Hoenner, Xavier; Whiting, Scott D.; Hindell, Mark A.; McMahon, Clive R.

    2012-01-01

    Accurately quantifying animals’ spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals; however, location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM-predicted locations and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. Argos 68th percentile error was <1 km for LC 1, 2, and 3 while markedly less accurate (>4 km) for LC ≤0. Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student’s t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2±2.4 km; mean home range overlap and error ratio = 92.2% and 285.6, respectively). This parsimonious and objective statistical procedure, however, still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data, and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes. PMID:22808241
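
    The post-processing step described above (re-adding high-quality Argos fixes of location classes 1-3 and filtering terrestrial points from the state-space-model track) can be sketched as follows; the column names and the land-mask predicate are hypothetical placeholders, not the authors' actual code.

```python
import pandas as pd


def postprocess_ssm_track(ssm_df, argos_df, is_on_land):
    """Sketch of the post-processing described in the abstract: merge state-space-model
    locations with the high-quality raw Argos fixes (LC 1, 2, 3) and drop points that
    fall on land. Column names ('time', 'lon', 'lat', 'lc') and the is_on_land(lon, lat)
    predicate are hypothetical placeholders."""
    good_argos = argos_df[argos_df["lc"].isin([1, 2, 3])]
    track = (pd.concat([ssm_df, good_argos[["time", "lon", "lat"]]])
               .sort_values("time")
               .drop_duplicates(subset="time"))          # keep one location per timestamp
    keep = ~track.apply(lambda row: is_on_land(row["lon"], row["lat"]), axis=1)
    return track[keep].reset_index(drop=True)
```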

  20. FEM modeling and histological analyses on thermal damage induced in facial skin resurfacing procedure with different CO2 laser pulse duration

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Zingoni, Tiziano; Di Cicco, Emiliano; Manetti, Leonardo; Pini, Roberto; Fortuna, Damiano

    2011-07-01

    Laser light is nowadays routinely used in aesthetic treatments of facial skin, such as laser rejuvenation, scar removal, etc. The induced thermal damage may be varied by setting different laser parameters in order to obtain a particular aesthetic result. In this work, a theoretical study of the thermal damage induced in the deep tissue for different laser pulse durations is proposed. The study is based on the Finite Element Method (FEM): a two-dimensional model of the facial skin is built in axial symmetry, considering the different skin structures and their different optical and thermal parameters; the conversion of laser light into thermal energy is modeled by the bio-heat equation. The light source is a CO2 laser with different pulse durations. The model enabled the study of the thermal damage induced in the skin by calculating the Arrhenius integral. The post-processing results enabled the study, in space and time, of the temperature dynamics induced in the facial skin, of the possible cumulative effects of subsequent laser pulses, and of the optimization of the procedure for applications in dermatological surgery. The calculated data were then validated in an experimental measurement session performed in a sheep animal model. Histological analyses were performed on the treated tissues, showing the spatial distribution and the extent of the thermal damage in the collagenous tissue. Modeling and experimental results were in good agreement, and they were used to design a new optimized laser-based skin resurfacing procedure.
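
    The thermal damage criterion used in the study is the Arrhenius integral, Omega(t) = integral of A*exp(-Ea/(R*T(tau))) dtau, evaluated along the simulated temperature history. The sketch below computes this integral numerically; the kinetic coefficients are commonly quoted illustrative values for skin, not the parameters of the cited FEM model.

```python
import numpy as np


def arrhenius_damage(time_s, temp_k, A=3.1e98, Ea=6.28e5, R=8.314):
    """Cumulative Arrhenius damage integral Omega(t) = integral of A*exp(-Ea/(R*T)) dt.
    A (1/s) and Ea (J/mol) are commonly quoted illustrative values for skin,
    not the parameters used in the cited FEM study."""
    rate = A * np.exp(-Ea / (R * np.asarray(temp_k, dtype=float)))
    # trapezoidal cumulative integration over the temperature history
    increments = 0.5 * (rate[1:] + rate[:-1]) * np.diff(time_s)
    return np.concatenate(([0.0], np.cumsum(increments)))


# toy temperature history: a 0.1 s excursion to 80 degC on a 37 degC baseline
t = np.linspace(0.0, 0.5, 5001)
T = 310.15 + 43.0 * ((t > 0.1) & (t < 0.2))
omega = arrhenius_damage(t, T)
print("final damage integral:", omega[-1], "(Omega >= 1 is usually taken as irreversible damage)")
```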

  1. Considerations for point-of-care diagnostics: evaluation of acridine orange staining and postprocessing methods for a three-part leukocyte differential test

    NASA Astrophysics Data System (ADS)

    Powless, Amy J.; Conley, Roxanna J.; Freeman, Karan A.; Muldoon, Timothy J.

    2017-03-01

    A broad range of techniques can be used to classify and count white blood cells in a point-of-care (POC) three-part leukocyte differential test. Improvements in lenses, light sources, and cameras for image-based POC systems have renewed interest in acridine orange (AO) as a contrast agent, whereby subpopulations of leukocytes can be differentiated by colorimetric analysis of AO fluorescence emission. We evaluated the effect of different AO staining and postprocessing methods on test accuracy in the context of an image-based POC colorimetric cell classification scheme. Thirty blood specimens were measured for percent cell counts using our POC system and a conventional hematology analyzer for comparison. Controlling the AO concentration used during whole-blood staining, the incubation time with AO, and the colorimetric ratios among the three populations of leukocytes yielded percent deviations of 0.706%, -1.534%, and -0.645% for the lymphocytes, monocytes, and granulocytes, respectively. Overall, we demonstrated that a redshift in AO fluorescence was observed at elevated AO concentrations, which led to reproducible inaccuracies in cell counts. This study demonstrates the need for strict control of the AO staining and postprocessing methods to improve test accuracy in these POC systems.

  2. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    NASA Astrophysics Data System (ADS)

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address such weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a free hand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to study a wide range of tissue systems including fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system.
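
    For orientation, the sketch below shows the generic laser speckle contrast computation (K = sigma/mu in a sliding spatial window) together with a simple moving-window temporal average of 1/K^2 as a flow proxy. It illustrates the kind of post-processing discussed here, not the authors' exact MATLAB implementation; the inverse-variance weighting and gating steps are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def speckle_contrast(frame, win=7):
    """Spatial speckle contrast K = sigma/mu over a sliding win x win window.
    Low K corresponds to high flow (stronger blurring of the speckle pattern)."""
    frame = frame.astype(float)
    mean = uniform_filter(frame, win)
    mean_sq = uniform_filter(frame ** 2, win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)


def flow_map(stack, win=7, n_avg=15):
    """Average 1/K^2 (a common flow proxy) over a moving window of n_avg frames to
    raise the signal-to-noise ratio. Generic LSCI processing, not the paper's pipeline."""
    inv_k2 = np.stack([1.0 / np.maximum(speckle_contrast(f, win), 1e-6) ** 2 for f in stack])
    kernel = np.ones(n_avg) / n_avg
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, inv_k2)
```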

  3. Skill of Global Raw and Postprocessed Ensemble Predictions of Rainfall over Northern Tropical Africa

    NASA Astrophysics Data System (ADS)

    Vogel, Peter; Knippertz, Peter; Fink, Andreas H.; Schlueter, Andreas; Gneiting, Tilmann

    2018-04-01

    Accumulated precipitation forecasts are of high socioeconomic importance for agriculturally dominated societies in northern tropical Africa. In this study, we analyze the performance of nine operational global ensemble prediction systems (EPSs) relative to climatology-based forecasts for 1- to 5-day accumulated precipitation based on the monsoon seasons 2007-2014 for three regions within northern tropical Africa. To assess the full potential of raw ensemble forecasts across spatial scales, we apply state-of-the-art statistical postprocessing methods in the form of Bayesian Model Averaging (BMA) and Ensemble Model Output Statistics (EMOS), and verify against station and spatially aggregated, satellite-based gridded observations. Raw ensemble forecasts are uncalibrated, unreliable, and underperform relative to climatology, independently of region, accumulation time, monsoon season, and ensemble. Differences between raw ensemble and climatological forecasts are large, and partly stem from poor prediction of low precipitation amounts. BMA and EMOS postprocessed forecasts are calibrated, reliable, and strongly improve on the raw ensembles, but, somewhat disappointingly, typically do not outperform climatology. Most EPSs exhibit slight improvements over the period 2007-2014, but overall have little added value compared to climatology. We suspect that the parametrization of convection is a potential cause for the sobering lack of ensemble forecast skill in a region dominated by mesoscale convective systems.
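
    As an illustration of the EMOS idea referred to above, the sketch below fits a minimal Gaussian EMOS model by minimizing the closed-form CRPS of a normal predictive distribution over a training set. For precipitation one would in practice use a censored or gamma-based variant; this simplified Gaussian version only demonstrates the mechanics.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm


def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a normal predictive distribution N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))


def fit_emos(ens_mean, ens_var, obs):
    """Minimal Gaussian EMOS: mu = a + b*ens_mean, sigma^2 = c + d*ens_var, with the
    parameters chosen by minimizing the mean CRPS over the training data. A censored
    or gamma-based variant would be used for precipitation in practice."""
    def objective(params):
        a, b, c, d = params
        mu = a + b * ens_mean
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
        return np.mean(crps_normal(mu, sigma, obs))

    result = minimize(objective, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
    return result.x
```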

  4. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    PubMed Central

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address such weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a free hand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to study a wide range of tissue systems including fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system. PMID:28106129

  5. Post-processing optimization of electrospun submicron poly(3-hydroxybutyrate) fibers to obtain continuous films of interest in food packaging applications.

    PubMed

    Cherpinski, Adriane; Torres-Giner, Sergio; Cabedo, Luis; Lagaron, Jose M

    2017-10-01

    Polyhydroxyalkanoates (PHAs) are one of the most researched families of biodegradable polymers based on renewable materials, owing to their thermoplastic nature and moisture resistance. The present study investigated the preparation and characterization of poly(3-hydroxybutyrate) (PHB) films obtained through the electrospinning technique. To convert them into continuous films and thereby increase their interest for packaging applications, the electrospun fiber mats were subsequently post-processed by different physical treatments. Thus, the effect of annealing time and cooling method on the morphology, molecular order, thermal, optical, mechanical, and barrier properties of the electrospun submicron PHB fibers was studied. Annealing at 160°C, well below the homopolyester melting point, was found to be the minimum temperature at which homogeneous transparent films were produced. The film samples that were cooled slowly after annealing showed the lowest permeability to oxygen, water vapor, and limonene. The optimally post-processed electrospun PHB fibers exhibited similar rigidity to conventional compression-molded PHA films, but with enhanced elongation at break and toughness. Films made by this electrospinning technique have many potential applications, such as in the design of barrier layers, adhesive interlayers, and coatings for fiber- and plastic-based food packaging materials.

  6. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates

    PubMed Central

    Zhang, Hao; Li, Xianqi; Park, Jewook; Li, An-Ping

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan-line noise at fast scan rates while preserving features, allowing an order-of-magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply an image registration method to each line to align the forward and backward scans of that line. In the second step we apply a “rubber band” model, which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic-scale images, an additional first step that removes strong line-by-line background fluctuations and a fourth step that replaces the postprocessed image by its ranking map as the final atomic-resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data. PMID:29362664
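
    The first step described above, aligning the forward and backward scans of each line, can be illustrated with a simple per-line cross-correlation shift as in the sketch below. This is only a stand-in for the paper's registration step; the rubber-band model and the CIAFA filtering are not reproduced.

```python
import numpy as np


def align_forward_backward(forward, backward):
    """For each scan line, estimate the lateral shift between the forward scan and the
    (flipped) backward scan by cross-correlation and average the aligned pair. This is
    only the first, registration-like step; the paper's rubber-band model and CIAFA
    filtering are not reproduced here."""
    aligned = np.empty_like(forward, dtype=float)
    for i, (f, b) in enumerate(zip(forward.astype(float), backward.astype(float))):
        b = b[::-1]                                    # backward scan runs right-to-left
        corr = np.correlate(f - f.mean(), b - b.mean(), mode="full")
        shift = corr.argmax() - (len(f) - 1)           # lag that best aligns the two lines
        aligned[i] = 0.5 * (f + np.roll(b, shift))     # average the registered pair
    return aligned
```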

  7. Displaying CFD Solution Parameters on Arbitrary Cut Planes

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul

    2008-01-01

    USMC6 is a Fortran 90 computer program for post-processing in support of visualization of flows simulated by computational fluid dynamics (CFD). The name "USMC6" is partly an abbreviation of "TetrUSS - USM3D Solution Cutter," reflecting its origin as a post-processor for use with USM3D - a CFD program that is a component of the Tetrahedral Unstructured Software System and that solves the Navier-Stokes equations on tetrahedral unstructured grids. "Cutter" here refers to a capability to acquire and process solution data on (1) arbitrary planes that cut through grid volumes, or (2) user-selected spheroidal, conical, cylindrical, and/or prismatic domains cut from within grids. Cutting saves time by enabling concentration of post-processing and visualization efforts on smaller solution domains of interest. The user can select from among more than 40 flow functions. The cut planes can be trimmed to circular or rectangular shape. The user specifies cuts and functions in a free-format input file using simple and easy-to-remember keywords. The USMC6 command line is simple enough that the slicing process can readily be embedded in a shell script for assembly-line post-processing. The output of USMC6 is a data file ready for plotting.

  8. In-process and post-process measurements of drill wear for control of the drilling process

    NASA Astrophysics Data System (ADS)

    Liu, Tien-I.; Liu, George; Gao, Zhiyu

    2011-12-01

    Optical inspection was used in this research for the post-process measurement of drill wear, using a precision toolmakers' microscope. An indirect index, the cutting force, is used for in-process drill wear measurement. Using in-process measurements to estimate drill wear for control purposes can decrease the operation cost and enhance product quality and safety. The challenge is to correlate the in-process cutting force measurements with the post-process optical inspection of drill wear. To find the most important feature, the energy principle was used in this research. It is necessary to select only the cutting-force feature that shows the highest sensitivity to drill wear. The best feature selected is the peak torque in the drilling process. Neuro-fuzzy systems were used for correlation purposes. The Adaptive-Network-Based Fuzzy Inference System (ANFIS) can construct fuzzy rules with membership functions to generate an input-output pair. A 1x6 ANFIS architecture with product-of-sigmoid membership functions can measure drill wear in-process with an error as low as 0.15%. This is extremely important for control of the drilling process. Furthermore, the measurement of drill wear was performed under different drilling conditions. This shows that ANFIS has the capability of generalization.

  9. Nucleosynthesis in Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Stevenson, Taylor Shannon; Viktoria Ohstrom, Eva; Harris, James Austin; Hix, William R.

    2018-01-01

    The nucleosynthesis which occurs in core-collapse supernovae (CCSN) is one of the most important sources of elements in the universe. Elements from Oxygen through Iron come predominantly from supernovae, and contributions of heavier elements are also possible through processes like the weak r-process, the gamma process and the light element primary process. The composition of the ejecta depends on the mechanism of the explosion, thus simulations of high physical fidelity are needed to explore what elements and isotopes CCSN can contribute to Galactic Chemical Evolution. We will analyze the nucleosynthesis results from self-consistent CCSN simulations performed with CHIMERA, a multi-dimensional neutrino radiation-hydrodynamics code. Much of our understanding of CCSN nucleosynthesis comes from parameterized models, but unlike CHIMERA these fail to address essential physics, including turbulent flow/instability and neutrino-matter interaction. We will present nucleosynthesis predictions for the explosion of a 9.6 solar mass first generation star, relying both on results of the 160 species nuclear reaction network used in CHIMERA within this model and on post-processing with a more extensive network. The lowest mass iron core-collapse supernovae, like this model, are distinct from their more massive brethren, with their explosion mechanism and nucleosynthesis being more like electron capture supernovae resulting from Oxygen-Neon white dwarves. We will highlight the differences between the nucleosynthesis in this model and more massive supernovae. The inline 160 species network is a feature unique to CHIMERA, making this the most sophisticated model to date for a star of this type. We will discuss the need and mechanism to extrapolate the post-processing to times post-simulation and analyze the uncertainties this introduces for supernova nucleosynthesis. We will also compare the results from the inline 160 species network to the post-processing results to study further uncertainties introduced by post-processing. This work is supported by the U.S. Department of Energy, Office of Nuclear Physics, and the National Science Foundation Nuclear Theory Program (PHY-1516197).

  10. Computer-aided liver volumetry: performance of a fully-automated, prototype post-processing solution for whole-organ and lobar segmentation based on MDCT imaging.

    PubMed

    Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T

    2015-06-01

    To evaluate the performance of a prototype, fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of the post-processing applications by comparing phantom volumes determined via Archimedes' principle with MDCT-segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%, while automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Fully-automated whole-liver segmentation was non-inferior to manual approaches, with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed a slight tendency to underestimate the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.

  11. Selective laser sintering of calcium phosphate materials for orthopedic implants

    NASA Astrophysics Data System (ADS)

    Lee, Goonhee

    Two technologies, Solid Freeform Fabrication (SFF) and bioceramics, are combined in this work to prepare bone replacement implants with complex geometry. SFF has emerged as a crucial technique for rapid prototyping in the last decade. Selective Laser Sintering (SLS) is one of the established SFF manufacturing processes that can build three-dimensional objects directly from computer models without part-specific tooling or human intervention. Meanwhile, there have been great efforts to develop implantable materials that can assist in the regeneration of bone defects and injuries. However, little attention has been focused on shaping bones from these materials. The main thrust of this research was to develop a process that can combine those two separate efforts. The specific objective of this research is to develop a process that can construct bone replacement material of complex geometry from synthetic calcium phosphate materials by using the SLS process. The achievement of this goal can have a significant impact on the quality of health care in the sense that complete custom-fit bone and tooth structures suitable for implantation can be prepared within 24-48 hours of receipt of geometric information obtained either from patient Computed Tomographic (CT) data, from Computer Aided Design (CAD) software, or from other imaging systems such as Magnetic Resonance Imaging (MRI) and Holographic Laser Range Imaging (HLRI). In this research, two different processes have been developed. The first is the SLS fabrication of porous bone implants. In this effort, systematic procedures have been established and calcium phosphate implants were successfully fabricated from various sources of geometric information. These efforts include material selection and preparation, SLS process parameter optimization, and development of post-processing techniques within the 48-hour time frame. Post-processing allows accurate control of geometry and of the chemistry of calcium phosphate, as well as control of micro- and macro-pore structure, to maximize bone healing and provide sufficient mechanical strength. It also permits the complete removal of the polymeric binders that remain from the SLS process. In collaboration with the University of Texas Health Science Center at San Antonio and BioMedical Enterprises, Inc., porous implants based on anatomical geometry have been successfully implanted in rabbits and dogs. These histologic animal studies reveal excellent biocompatibility and show great potential for commercial custom-fit implant manufacture. The second research effort involves fabrication of fully dense bone for application in dental restoration and load-bearing orthopedic functions. Calcium phosphate glass melts, proven to be biocompatible in the first effort, were cast into carbon molds. Processes were developed for preparing the molds. These carbon molds of anatomic shape can be prepared from either Computer Numerical Control (CNC) milling of slab stock or SLS processing of thermoset-coated graphite powder. The CNC milling method provides accurate mold dimensions in a short period of time; however, the achievable geometries are limited, and generally two mold pieces are required for complex shapes. The SLS method provides green molds of very complex shape. However, these molds need to go through pyrolysis of the thermoset binder to withstand the high temperatures reached at calcium phosphate melt temperatures (1100°C), and noticeable shrinkage was observed during pyrolysis.
The cast glass was annealed to develop polycrystalline calcium phosphate. This process also exhibits great potential.

  12. LabData database sub-systems for post-processing and quality control of stable isotope and gas chromatography measurements

    NASA Astrophysics Data System (ADS)

    Suckow, A. O.

    2013-12-01

    Measurements need post-processing to obtain results that are comparable between laboratories. Raw data may need to be corrected for blank, memory, drift (change of reference values with time), and linearity (dependence of reference on signal height), and normalized to international reference materials. Post-processing parameters need to be stored for traceability of results. State-of-the-art stable isotope correction schemes are available based on MS Excel (Geldern and Barth, 2012; Gröning, 2011) or MS Access (Coplen, 1998). These are specialized to stable isotope measurements only, often only to the post-processing of a specific run. Embedding of the algorithms into a multipurpose database system was missing. This is necessary to combine results of different tracers (3H, 3He, 2H, 18O, CFCs, SF6...) or geochronological tools (sediment dating, e.g. with 210Pb or 137Cs), to relate to attribute data (submitter, batch, project, geographical origin, depth in core, well information, etc.), and for further interpretation tools (e.g. lumped parameter modelling). Database sub-systems to the LabData laboratory management system (Suckow and Dumke, 2001) are presented for stable isotopes and for gas chromatographic CFC and SF6 measurements. The sub-system for stable isotopes allows the following post-processing: 1. automated import from measurement software (Isodat, Picarro, LGR); 2. correction for sample-to-sample memory, linearity, and drift, and renormalization of the raw data. The sub-system for gas chromatography covers: 1. storage of all raw data; 2. storage of peak integration parameters; 3. correction for blank, efficiency, and linearity. The user interface allows interactive and graphical control of the post-processing and of all corrections via export to and plotting in MS Excel, and is a valuable tool for quality control. The sub-databases are integrated into LabData, a multi-user client-server architecture using MS SQL Server as back-end and an MS Access front-end, installed in four laboratories to date. Attribute data storage (unique ID for each subsample, origin, project context, etc.) and laboratory management features are included. Export routines to Excel (depth profiles, time series, all possible tracer-versus-tracer plots...) and modelling capabilities are add-ons. The source code is public domain and available under the GNU general public licence agreement (GNU-GPL). References: Coplen, T.B., 1998. A manual for a laboratory information management system (LIMS) for light stable isotopes. Version 7.0. USGS open file report 98-284. Geldern, R.v., Barth, J.A.C., 2012. Optimization of instrument setup and post-run corrections for oxygen and hydrogen stable isotope measurements of water by isotope ratio infrared spectroscopy (IRIS). Limnology and Oceanography: Methods 10, 1024-1036. Gröning, M., 2011. Improved water δ2H and δ18O calibration and calculation of measurement uncertainty using a simple software tool. Rapid Communications in Mass Spectrometry 25, 2711-2720. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.
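
    Two of the corrections listed above, drift correction against a repeatedly measured standard and two-point normalization to international reference materials, are standard procedures and can be sketched as follows. This is a generic illustration, not LabData's actual implementation.

```python
import numpy as np


def drift_correct(delta_raw, run_index, std_mask, std_true):
    """Remove a linear instrumental drift estimated from repeated measurements of one
    standard across the run (std_mask marks the positions of that standard)."""
    slope, intercept = np.polyfit(run_index[std_mask], delta_raw[std_mask] - std_true, 1)
    return delta_raw - (slope * run_index + intercept)


def normalize_two_point(delta_measured, ref1_measured, ref1_true, ref2_measured, ref2_true):
    """Two-point normalization to the international scale (e.g. VSMOW/SLAP for water
    isotopes): stretch and shift the measured deltas so that the two reference
    materials hit their accepted values."""
    scale = (ref2_true - ref1_true) / (ref2_measured - ref1_measured)
    return ref1_true + (delta_measured - ref1_measured) * scale
```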

  13. Custom 3D Printable Silicones with Tunable Stiffness.

    PubMed

    Durban, Matthew M; Lenhardt, Jeremy M; Wu, Amanda S; Small, Ward; Bryson, Taylor M; Perez-Perez, Lemuel; Nguyen, Du T; Gammon, Stuart; Smay, James E; Duoss, Eric B; Lewicki, James P; Wilson, Thomas S

    2018-02-01

    Silicone elastomers have broad versatility within a variety of potential advanced materials applications, such as soft robotics, biomedical devices, and metamaterials. A series of custom 3D printable silicone inks with tunable stiffness is developed, formulated, and characterized. The silicone inks exhibit excellent rheological behavior for 3D printing, as observed from the printing of porous structures with controlled architectures. Herein, the capability to tune the stiffness of printable silicone materials via careful control over the chemistry, network formation, and crosslink density of the ink formulations in order to overcome the challenging interplay between ink development, post-processing, material properties, and performance is demonstrated. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. The application of NASCAD as a NASTRAN pre- and post-processor

    NASA Technical Reports Server (NTRS)

    Peltzman, Alan N.

    1987-01-01

    The NASA Computer Aided Design (NASCAD) graphics package provides an effective way to interactively create, view, and refine analytic data models. NASCAD's macro language, combined with its powerful 3-D geometric data base, allows the user considerable flexibility and speed in constructing a model. This flexibility has the added benefit of enabling the user to keep pace with any new NASTRAN developments. NASCAD allows models to be conveniently viewed and plotted to best advantage in both the pre- and post-process phases of development, providing useful visual feedback to the analysis process. NASCAD, used as a graphics complement to NASTRAN, can play a valuable role in the process of finite element modeling.

  15. Technology development towards WFIRST-AFTA coronagraph

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Ilya; Zhao, Feng; An, Xin; Balasubramanian, Kunjithapatham; Belikov, Ruslan; Cady, Eric; Demers, Richard; Diaz, Rosemary; Gong, Qian; Gordon, Brian; Goullioud, Renaud; Greer, Frank; Guyon, Olivier; Hoenk, Michael; Kasdin, N. Jeremy; Kern, Brian; Krist, John; Kuhnert, Andreas; McElwain, Michael; Mennesson, Bertrand; Moody, Dwight; Muller, Richard; Nemati, Bijan; Patterson, Keith; Riggs, A. J.; Ryan, Daniel; Seo, Byoung-Joon; Shaklan, Stuart; Sidick, Erkin; Shi, Fang; Siegler, Nicholas; Soummer, Rémi; Tang, Hong; Trauger, John; Wallace, J. Kent; Wang, Xu; White, Victor; Wilson, Daniel; Yee, Karl; Zhou, Hanying; Zimmerman, Neil

    2014-08-01

    NASA's WFIRST-AFTA mission concept includes the first high-contrast stellar coronagraph in space. This coronagraph will be capable of directly imaging and spectrally characterizing giant exoplanets similar to Neptune and Jupiter, and possibly even super-Earths, around nearby stars. In this paper we present the plan for maturing coronagraph technology to TRL5 in 2014-2016, and the results achieved in the first 6 months of the technology development work. The specific areas that are discussed include coronagraph testbed demonstrations in static and simulated dynamic environment, design and fabrication of occulting masks and apodizers used for starlight suppression, low-order wavefront sensing and control subsystem, deformable mirrors, ultra-low-noise spectrograph detector, and data post-processing.

  16. A web platform for integrated surface water - groundwater modeling and data management

    NASA Astrophysics Data System (ADS)

    Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf

    2016-04-01

    Model-based decision support systems are considered to be reliable and time-efficient tools for resource management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, and post-processing, visualizing, and publishing the simulation results often require significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments, and modeling tools in a single platform, which can potentially reduce the time required for performing simulations and hence for decision making. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration, and groundwater flow models with data storage, processing, and visualization tools. The concept is implemented in the form of a web-GIS application and is built from free and open-source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, Mapserver for visualizing and publishing the data, Openlayers for building the user interface, and others. The configuration of the system allows data input, storage, pre- and post-processing, and visualization to be performed in a single uninterrupted workflow. In addition, the realization of the decision support system as a web service makes it easy to retrieve and share data sets as well as simulation results over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.

  17. On the difficulty to delimit disease risk hot spots

    NASA Astrophysics Data System (ADS)

    Charras-Garrido, M.; Azizi, L.; Forbes, F.; Doyle, S.; Peyrard, N.; Abrial, D.

    2013-06-01

    Representing the health state of a region is a helpful way to highlight spatial heterogeneity and localize high-risk areas. For ease of interpretation and to determine where to apply control procedures, we need to clearly identify and delineate homogeneous regions in terms of disease risk, and in particular disease risk hot spots. However, even if practical purposes require the delineation of different risk classes, such a classification does not correspond to a well-defined reality and is thus difficult to estimate. Working with grouped data, a first natural choice is to apply disease mapping models. We apply a standard disease mapping model, producing continuous estimates of the risks, which requires a post-processing classification step to obtain clearly delimited risk zones. We also apply a risk partition model that builds a classification of the risk levels in a one-step procedure. Working with point data, we focus on the scan statistic clustering method. We illustrate our article with a real example concerning bovine spongiform encephalopathy (BSE), an animal disease whose at-risk zones are well known to epidemiologists. We show that in this difficult case of a rare disease and a very heterogeneous population, the different methods provide risk zones that are globally coherent. However, reflecting the dichotomy between the need and the reality, the exact delimitations of the risk zones, as well as the corresponding estimated risks, differ considerably.

  18. Post-processing of fused silica and its effects on damage resistance to nanosecond pulsed UV lasers.

    PubMed

    Ye, Hui; Li, Yaguo; Zhang, Qinghua; Wang, Wei; Yuan, Zhigang; Wang, Jian; Xu, Qiao

    2016-04-10

    HF-based (hydrofluoric acid) chemical etching is a widely accepted technique to improve the laser damage performance of fused silica optics and to ensure that high-power UV laser systems operate at their design fluence. Etching parameters such as acid concentration, composition, material removal amount, and etching state (etching with or without additional acoustic power) may have a great impact on the laser-induced damage threshold (LIDT) of treated sample surfaces. In order to determine the effects of these factors, we utilized the Taguchi method to identify the etching conditions that help raise the LIDT. Our results show that, from the viewpoint of the damage performance of fused silica optics, the most influential factors are the concentration of the etchants and the amount of material etched away. In addition, the additional acoustic power (∼0.6 W·cm-2) may not benefit the etching rate or the damage performance of fused silica. Moreover, the post-cleaning procedure applied to etched samples is also important for the damage performance of fused silica optics. Different post-cleaning procedures were therefore tested on samples treated under the same etching conditions. It is found that the "spraying + rinsing + spraying" cleaning process is favorable for the removal of etching-induced deposits. Residuals on the etched surface are harmful to surface roughness and optical transmission as well as laser damage performance.

  19. A COMPARISON OF TRANSIENT INFINITE ELEMENTS AND TRANSIENT KIRCHHOFF INTEGRAL METHODS FOR FAR FIELD ACOUSTIC ANALYSIS

    DOE PAGES

    WALSH, TIMOTHY F.; JONES, ANDREA; BHARDWAJ, MANOJ; ...

    2013-04-01

    Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating parametric coordinates within the host infinite element of far-field points, the parallelization of the overall process, linear solver requirements, and system stability considerations.

  20. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.

  1. fMRI paradigm designing and post-processing tools

    PubMed Central

    James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan

    2014-01-01

    In this article, we first review some aspects of functional magnetic resonance imaging (fMRI) paradigm design for major cognitive functions using stimulus delivery systems like Cogent, E-Prime, Presentation, etc., along with their technical aspects. We also review the stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part mainly focuses on various fMRI data post-processing tools such as Statistical Parametric Mapping (SPM) and Brain Voyager, and discusses the particulars of the various preprocessing steps involved (realignment, co-registration, normalization, smoothing) in these software packages, as well as the statistical analysis principles of General Linear Modeling for the final interpretation of a functional activation result. PMID:24851001

  2. Quantum cryptography with finite resources: unconditional security bound for discrete-variable protocols with one-way postprocessing.

    PubMed

    Scarani, Valerio; Renner, Renato

    2008-05-23

    We derive a bound for the security of quantum key distribution with finite resources under one-way postprocessing, based on a definition of security that is composable and has an operational meaning. While our proof relies on the assumption of collective attacks, unconditional security follows immediately for standard protocols such as Bennett-Brassard 1984 and the six-state protocol. For single-qubit implementations of such protocols, we find that the secret key rate becomes positive when at least N ≈ 10^5 signals are exchanged and processed. For any other discrete-variable protocol, unconditional security can be obtained using the exponential de Finetti theorem, but the additional overhead leads to very pessimistic estimates.

  3. Towards process-informed bias correction of climate change simulations

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.

    2017-11-01

    Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
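
    One widely used bias correction technique of the kind discussed here is empirical quantile mapping, sketched below in a generic form; it is not any specific operational implementation and, as the abstract stresses, it cannot remove fundamental model errors.

```python
import numpy as np


def quantile_mapping(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each future model value to the observed value at
    the same empirical quantile of the historical period. Generic illustration of the
    kind of post-processing discussed, not a specific operational scheme."""
    quantiles = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, quantiles)   # model climatology quantiles
    obs_q = np.quantile(obs_hist, quantiles)       # observed climatology quantiles
    return np.interp(model_future, model_q, obs_q)
```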

  4. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive, and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Due to the need of fitting only a single regression model for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km-horizontal resolution and 1 h-temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The performed SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
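
    The core SAMOS transformation, converting fields to standardized anomalies with a site-specific climatology and back again after calibration, can be sketched as follows; the climatological mean and standard deviation arrays are placeholders that would come from the INCA analyses.

```python
import numpy as np


def to_standardized_anomaly(field, clim_mean, clim_sd, eps=1e-6):
    """Transform a forecast or analysis field into standardized anomalies by removing the
    site-specific climatological mean and dividing by the climatological standard
    deviation, as in the SAMOS approach (climatologies here are placeholder arrays)."""
    return (field - clim_mean) / np.maximum(clim_sd, eps)


def from_standardized_anomaly(anom, clim_mean, clim_sd):
    """Back-transform calibrated anomalies to physical units at any target location."""
    return anom * clim_sd + clim_mean
```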

  5. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
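
    The censored forecast distribution described above, a point mass at the censoring threshold plus a normal upper tail, can be written down directly as in the sketch below. In the actual method the location and scale parameters are obtained by CRPS minimization on Box-Cox transformed training data; here they are free inputs.

```python
import numpy as np
from scipy.stats import norm


def censored_normal_cdf(y, mu, sigma, c):
    """CDF of a normal distribution left-censored at threshold c: all probability mass
    below c is collected as a point mass at c, while the upper tail is unchanged. In
    the described method mu and sigma would come from CRPS minimization on (Box-Cox
    transformed) training data; here they are free inputs."""
    y = np.asarray(y, dtype=float)
    return np.where(y < c, 0.0, norm.cdf((y - mu) / sigma))


def sample_censored_normal(mu, sigma, c, size, seed=None):
    """Draw forecast scenarios: sample N(mu, sigma^2) and clip to the censoring threshold."""
    rng = np.random.default_rng(seed)
    return np.maximum(rng.normal(mu, sigma, size), c)
```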

  6. New techniques for fluorescence background rejection in microscopy and endoscopy

    NASA Astrophysics Data System (ADS)

    Ventalon, Cathie

    2009-03-01

    Confocal microscopy is a popular technique in the bioimaging community, mainly because it provides optical sectioning. However, its standard implementation requires 3-dimensional scanning of focused illumination throughout the sample. Efficient non-scanning alternatives have been implemented, among which is the simple and well-established incoherent structured illumination microscopy (SIM) [1]. We recently proposed a similar technique, called Dynamic Speckle Illumination (DSI) microscopy, wherein the incoherent grid illumination pattern is replaced with a coherent speckle illumination pattern from a laser, taking advantage of the fact that speckle contrast is well maintained in scattering media, making the technique well adapted to tissue imaging [2]. DSI microscopy relies on the illumination of a sample with a sequence of dynamic speckle patterns and an image processing algorithm based only on a priori knowledge of speckle statistics. The choice of this post-processing algorithm is crucial to obtain good sectioning strength: in particular, we developed a novel post-processing algorithm based on wavelet pre-filtering of the raw images and obtained near-confocal fluorescence sectioning in a mouse brain labeled with GFP, with good image quality maintained throughout a depth of ˜100 μm [3]. For the purpose of imaging fluorescent tissue at greater depth, we recently applied structured illumination to endoscopy. We used a similar set-up wherein the illumination pattern (a one-dimensional grid) is transported to the sample with an imaging fiber bundle with a miniaturized objective, and the fluorescence image is collected through the same bundle. Using a post-processing algorithm similar to the one previously described [3], we obtained high-quality images of a fluorescein-labeled rat colonic mucosa [4], establishing the potential of our endomicroscope for bioimaging applications. References: [1] M. A. A. Neil et al., Opt. Lett. 22, 1905 (1997); [2] C. Ventalon et al., Opt. Lett. 30, 3350 (2005); [3] C. Ventalon et al., Opt. Lett. 32, 1417 (2007); [4] N. Bozinovic et al., Opt. Express 16, 8016 (2008).

  7. Quantitative assessment of angiographic perfusion reduction using color-coded digital subtraction angiography during transarterial chemoembolization.

    PubMed

    Wang, Ji; Cheng, Jie-Jun; Huang, Kai-Yi; Zhuang, Zhi-Guo; Zhang, Xue-Bin; Chi, Jia-Chang; Hua, Xiao-Lan; Xu, Jian-Rong

    2016-03-01

    The aim of this study was to develop a quantitative measurement of perfusion reduction using color-coded digital subtraction angiography (ccDSA) to monitor intra-procedural arterial stasis during TACE. A total of 35 patients with hepatocellular carcinoma who had undergone TACE were enrolled into the study. Pre- and post-procedural two-dimensional digital subtraction angiography scans were conducted with the same protocol and post-processed with ccDSA prototype software. A time-contrast-intensity (CI[t]) curve was obtained by region-of-interest (ROI) measurement on the generated ccDSA image. The quantitative 2D perfusion parameters time to peak, area under the curve (AUC), maximum upslope, and contrast intensity peak (CI-Peak), derived from the ROI-based CI[t] curves before and after TACE, were evaluated to assess the reduction of antegrade blood flow and tumor blush. Relationships between 2D perfusion parameters, the subjective angiographic chemoembolization endpoint (SACE) scale, and clinical outcomes were analyzed. Area-normalized AUC and CI-Peak revealed significant reduction after TACE (P < 0.0001). AUCnorm decreased from a pre-procedure value of 0.867 ± 0.242 to 0.421 ± 0.171 (P < 0.001) after completion of TACE. CI-Peaknorm was 0.739 ± 0.221 before TACE and 0.421 ± 0.174 (P < 0.001) after TACE. Tumor blood supply slowed markedly after embolization. A perfusion reduction in either AUCnorm or CI-Peaknorm of 30% to 40% was associated with SACE level III, and a reduction of 60% to 70% was equivalent to SACE level IV. Intermediate reduction (SACE level III) was associated with better tumor response after TACE than higher reduction (SACE level IV). The ccDSA application provides an objective approach to quantify the perfusion reduction and to evaluate the arterial stasis of antegrade blood flow and tumor blush caused by TACE.
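
    A minimal sketch of how such ROI-based 2D perfusion parameters could be derived from a sampled CI[t] curve is given below; the function name, units, and normalization convention are assumptions and do not reproduce the ccDSA prototype software.

    ```python
    import numpy as np

    def perfusion_parameters(t, ci):
        # t: sample times (s); ci: ROI-averaged contrast intensity at those times.
        ttp = t[np.argmax(ci)]                      # time to peak
        auc = np.trapz(ci, t)                       # area under the curve
        upslope = np.max(np.diff(ci) / np.diff(t))  # maximum upslope
        ci_peak = ci.max()                          # contrast intensity peak
        return {"TTP": ttp, "AUC": auc, "max_upslope": upslope, "CI_peak": ci_peak}

    # Normalized reduction between pre- and post-TACE runs, e.g. for AUC:
    # reduction = 1.0 - post["AUC"] / pre["AUC"]
    ```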

  8. Development of a Dmt Monitor for Statistical Tracking of Gravitational-Wave Burst Triggers Generated from the Omega Pipeline

    NASA Astrophysics Data System (ADS)

    Li, Jun-Wei; Cao, Jun-Wei

    2010-04-01

    One challenge in large-scale scientific data analysis is to monitor data in real-time in a distributed environment. For the LIGO (Laser Interferometer Gravitational-wave Observatory) project, a dedicated suite of data monitoring tools (DMT) has been developed, yielding good extensibility to new data types and high flexibility in a distributed environment. Several services are provided, including visualization of data information in various forms and file output of monitoring results. In this work, a DMT monitor, OmegaMon, is developed for tracking statistics of gravitational-wave (GW) burst triggers that are generated from a specific GW burst data analysis pipeline, the Omega Pipeline. Such results can provide diagnostic information as a reference for trigger post-processing and interferometer maintenance.

  9. The metallurgy and processing science of metal additive manufacturing

    DOE PAGES

    Sames, William J.; List, III, Frederick Alyious; Pannala, Sreekanth; ...

    2016-03-07

    Here, additive manufacturing (AM), widely known as 3D printing, is a method of manufacturing that forms parts from powder, wire, or sheets in a process that proceeds layer by layer. Many techniques (using many different names) have been developed to accomplish this via melting or solid-state joining. In this review, these techniques for producing metal parts are explored, with a focus on the science of metal AM: processing defects, heat transfer, solidification, solid-state precipitation, mechanical properties, and post-processing metallurgy. The various metal AM techniques are compared, with analysis of the strengths and limitations of each. Few alloys have been developed for commercial production, but recent development efforts are presented as a path for the ongoing development of new materials for AM processes.

  10. Post-processing of 3D-printed parts using femtosecond and picosecond laser radiation

    NASA Astrophysics Data System (ADS)

    Mingareev, Ilya; Gehlich, Nils; Bonhoff, Tobias; Meiners, Wilhelm; Kelbassa, Ingomar; Biermann, Tim; Richardson, Martin C.

    2014-03-01

    Additive manufacturing, also known as 3D-printing, is a near-net shape manufacturing approach, delivering part geometry that can be considerably affected by various process conditions, heat-induced distortions, solidified melt droplets, partially fused powders, and surface modifications induced by the manufacturing tool motion and processing strategy. High-repetition rate femtosecond and picosecond laser radiation was utilized to improve surface quality of metal parts manufactured by laser additive techniques. Different laser scanning approaches were utilized to increase the ablation efficiency and to reduce the surface roughness while preserving the initial part geometry. We studied post-processing of 3D-shaped parts made of Nickel- and Titanium-base alloys by utilizing Selective Laser Melting (SLM) and Laser Metal Deposition (LMD) as additive manufacturing techniques. Process parameters such as the pulse energy, the number of layers and their spatial separation were varied. Surface processing in several layers was necessary to remove the excess material, such as individual powder particles, and to reduce the average surface roughness from as-deposited values of 22-45 μm to a few microns. Due to the ultrafast laser-processing regime and the small heat-affected zone induced in materials, this novel integrated manufacturing approach can be used to post-process parts made of thermally and mechanically sensitive materials, and to attain complex designed shapes with micrometer precision.

  11. Pre- and postprocessing for reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, W.L.; Ingalls, L.J.; Prasad, S.J.

    1991-05-01

    This paper describes the functionality and underlying programming paradigms of Shell's simulator-related reservoir-engineering graphics system. This system includes the simulation postprocessing programs Reservoir Display System (RDS) and Fast Reservoir Engineering Displays (FRED), a hypertext-like on-line documentation system (DOC), and a simulator input preprocessor (SIMPLSIM). RDS creates displays of reservoir simulation results. These displays represent the areal or cross-section distribution of computed reservoir parameters, such as pressure, phase saturation, or temperature. Generation of these images at real-time animation rates is discussed. FRED facilitates the creation of plot files from reservoir simulation output. The use of dynamic memory allocation, asynchronous I/O, a table-driven screen manager, and mixed-language (FORTRAN and C) programming are detailed. DOC is used to create and access on-line documentation for the pre- and post-processing programs and the reservoir simulators. DOC can be run by itself or can be accessed from within any other graphics or nongraphics application program. DOC includes a text editor, which is the basis for a reservoir simulation tutorial and greatly simplifies the preparation of simulator input. The use of sharable images, graphics, and the documentation file network are described. Finally, SIMPLSIM is a suite of programs that uses interactive graphics in the preparation of reservoir description data for input into reservoir simulators. The SIMPLSIM user-interface manager (UIM) and its graphic interface for reservoir description are discussed.

  12. Modification of fast-growing Chinese Fir wood with unsaturated polyester resin: Impregnation technology and efficiency

    NASA Astrophysics Data System (ADS)

    Ma, Qing; Zhao, Zijian; Yi, Songlin; Wang, Tianlong

    In this study, Chinese Fir was impregnated with unsaturated polyester resin to enhance its properties. Samples 20 mm × 20 mm × 20 mm in size were split into different sections with epoxy resin and tinfoil and subjected to an impregnation experiment under various parameters. The vacuum degree was -0.04 MPa, -0.06 MPa, or -0.08 MPa, and the vacuum duration was 15 min, 30 min, or 45 min. The results indicated that impregnation weight percent gain is linearly dependent on curing weight percent gain. Vacuum duration appears to have less influence on the curing weight percent gain than vacuum degree, and impregnation was most effective at the transverse section compared to the other sections. The optimal impregnation parameters were 30 min modification under -0.08 MPa vacuum followed by 120 min at atmospheric pressure for samples 200 mm × 100 mm × 20 mm in size. Uneven distribution of weight percent gain and cracking during the curing process suggested that 30 min of post-processing at -0.09 MPa vacuum was the most effective way to complete the impregnation process. The samples' bending strength and modulus of elasticity increased after impregnation treatment. The improvement in bending strength reached 112.85% after impregnation without post-processing and 71.65% with vacuum post-processing; modulus of elasticity improved 67.13% and 58.28% without and with post-processing, respectively.

  13. Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.

    PubMed

    Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine

    2014-01-01

    To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  14. On the construction of a ground truth framework for evaluating voxel-based diffusion tensor MRI analysis methods.

    PubMed

    Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander

    2009-07-01

    Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).

  15. PyCDT: A Python toolkit for modeling point defects in semiconductors and insulators

    DOE PAGES

    Broberg, Danny; Medasani, Bharat; Zimmermann, Nils E. R.; ...

    2018-02-13

    Point defects have a strong impact on the performance of semiconductor and insulator materials used in technological applications, spanning microelectronics to energy conversion and storage. The nature of the dominant defect types, how they vary with processing conditions, and their impact on materials properties are central aspects that determine the performance of a material in a certain application. This information is, however, difficult to access directly from experimental measurements. Consequently, computational methods, based on electronic density functional theory (DFT), have found widespread use in the calculation of point-defect properties. Here we have developed the Python Charged Defect Toolkit (PyCDT) to expedite the setup and post-processing of defect calculations with widely used DFT software. PyCDT has a user-friendly command-line interface and provides a direct interface with the Materials Project database. This allows for setting up many charged defect calculations for any material of interest, as well as post-processing and applying state-of-the-art electrostatic correction terms. Our paper serves as a documentation for PyCDT, and demonstrates its use in an application to the well-studied GaAs compound semiconductor. As a result, we anticipate that the PyCDT code will be useful as a framework for undertaking readily reproducible calculations of charged point-defect properties, and that it will provide a foundation for automated, high-throughput calculations.

  16. Association rule mining in the US Vaccine Adverse Event Reporting System (VAERS).

    PubMed

    Wei, Lai; Scott, John

    2015-09-01

    Spontaneous adverse event reporting systems are critical tools for monitoring the safety of licensed medical products. Commonly used signal detection algorithms identify disproportionate product-adverse event pairs and may not be sensitive to more complex potential signals. We sought to develop a computationally tractable multivariate data-mining approach to identify product-multiple adverse event associations. We describe an application of stepwise association rule mining (Step-ARM) to detect potential vaccine-symptom group associations in the US Vaccine Adverse Event Reporting System. Step-ARM identifies strong associations between one vaccine and one or more adverse events. To reduce the number of redundant association rules found by Step-ARM, we also propose a clustering method for the post-processing of association rules. In sample applications to a trivalent intradermal inactivated influenza virus vaccine and to measles, mumps, rubella, and varicella (MMRV) vaccine and in simulation studies, we find that Step-ARM can detect a variety of medically coherent potential vaccine-symptom group signals efficiently. In the MMRV example, Step-ARM appears to outperform univariate methods in detecting a known safety signal. Our approach is sensitive to potentially complex signals, which may be particularly important when monitoring novel medical countermeasure products such as pandemic influenza vaccines. The post-processing clustering algorithm improves the applicability of the approach as a screening method to identify patterns that may merit further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
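
    The sketch below illustrates only the generic association-rule ingredients (support and confidence for one-vaccine -> symptom-set rules) on VAERS-like records; the stepwise search strategy and the rule-clustering post-processing of Step-ARM are not reproduced, and the data layout, thresholds, and function name are assumptions.

    ```python
    from collections import Counter
    from itertools import combinations

    def vaccine_symptom_rules(reports, min_support=0.001, min_confidence=0.2, max_len=2):
        # Each report is assumed to be {"vaccines": set_of_names, "symptoms": set_of_terms}.
        n = len(reports)
        vaccine_counts = Counter(v for r in reports for v in r["vaccines"])
        joint_counts = Counter()
        for r in reports:
            for v in r["vaccines"]:
                for k in range(1, max_len + 1):
                    for symptoms in combinations(sorted(r["symptoms"]), k):
                        joint_counts[(v, symptoms)] += 1

        rules = []
        for (v, symptoms), c in joint_counts.items():
            support = c / n                      # fraction of reports containing the pair
            confidence = c / vaccine_counts[v]   # P(symptom set | vaccine)
            if support >= min_support and confidence >= min_confidence:
                rules.append((v, symptoms, support, confidence))
        return sorted(rules, key=lambda r: -r[3])
    ```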

  17. Echocardiographic strain and strain-rate imaging: a new tool to study regional myocardial function.

    PubMed

    D'hooge, Jan; Bijnens, Bart; Thoen, Jan; Van de Werf, Frans; Sutherland, George R; Suetens, Paul

    2002-09-01

    Ultrasonic imaging is the noninvasive clinical imaging modality of choice for diagnosing heart disease. At present, two-dimensional ultrasonic grayscale images provide a relatively cheap, fast, bedside method to study the morphology of the heart. Several methods have been proposed to assess myocardial function. These have been based on either grayscale or motion (velocity) information measured in real-time. However, the quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. To do this, ultrasonic strain and strain-rate imaging have been introduced. In the clinical setting, these techniques currently only allow one component of the true three-dimensional deformation to be measured. Clinical, multidimensional strain (rate) information can currently thus only be obtained by combining data acquired using different transducer positions. Nevertheless, given the appropriate postprocessing, the clinical value of these techniques has already been shown. Moreover, multidimensional strain and strain-rate estimation of the heart in vivo by means of a single ultrasound acquisition has been shown to be feasible. In this paper, the new techniques of ultrasonic strain rate and strain imaging of the heart are reviewed in terms of definitions, data acquisition, strain-rate estimation, postprocessing, and parameter extraction. Their clinical validation and relevance will be discussed using clinical examples on relevant cardiac pathology. Based on these examples, suggestions are made for future developments of these techniques.

  18. Design of a Data Catalogue for Perdigão-2017 Field Experiment: Establishing the Relevant Parameters, Post-Processing Techniques and Users Access

    NASA Astrophysics Data System (ADS)

    Palma, J. L.; Belo-Pereira, M.; Leo, L. S.; Fernando, J.; Wildmann, N.; Gerz, T.; Rodrigues, C. V.; Lopes, A. S.; Lopes, J. C.

    2017-12-01

    Perdigão is the largest of a series of wind-mapping studies embedded in the on-going NEWA (New European Wind Atlas) Project. The intensive observational period of the Perdigão field experiment resulted in an unprecedented volume of data, covering several wind conditions through 46 consecutive days between May and June 2017. For researchers looking into specific events, it is time consuming to scrutinise the datasets looking for appropriate conditions. Such a task becomes harder if the parameters of interest were not measured directly, instead requiring their computation from the raw datasets. This work will present the e-Science platform developed by the University of Porto for the Perdigão dataset. The platform will assist scientists of Perdigão and the larger scientific community in extracting the datasets associated with specific flow regimes of interest, as well as automatically performing post-processing/filtering operations internally in the platform. We will illustrate the flow regime categories identified in Perdigão based on several parameters such as weather type classification, cloud characteristics, as well as stability regime indicators (Brunt-Väisälä frequency, Scorer parameter, potential temperature inversion heights, dimensionless Richardson and Froude numbers) and wind regime indicators. Examples of some of the post-processing techniques available in the e-Science platform, such as the Savitzky-Golay low-pass filtering technique, will also be presented.
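
    As an illustration of one of the named post-processing options, the snippet below applies a Savitzky-Golay low-pass filter to a placeholder wind-speed series; the sampling rate, window length, and polynomial order are arbitrary choices, not Perdigão settings.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Hypothetical 1 Hz wind-speed record from one tower level (placeholder data).
    t = np.arange(0, 3600)                  # one hour of samples
    wind = 8.0 + np.random.randn(t.size)    # synthetic noisy signal

    # Savitzky-Golay low-pass filtering; window length and order are illustrative.
    smoothed = savgol_filter(wind, window_length=121, polyorder=3)
    ```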

  19. PyCDT: A Python toolkit for modeling point defects in semiconductors and insulators

    NASA Astrophysics Data System (ADS)

    Broberg, Danny; Medasani, Bharat; Zimmermann, Nils E. R.; Yu, Guodong; Canning, Andrew; Haranczyk, Maciej; Asta, Mark; Hautier, Geoffroy

    2018-05-01

    Point defects have a strong impact on the performance of semiconductor and insulator materials used in technological applications, spanning microelectronics to energy conversion and storage. The nature of the dominant defect types, how they vary with processing conditions, and their impact on materials properties are central aspects that determine the performance of a material in a certain application. This information is, however, difficult to access directly from experimental measurements. Consequently, computational methods, based on electronic density functional theory (DFT), have found widespread use in the calculation of point-defect properties. Here we have developed the Python Charged Defect Toolkit (PyCDT) to expedite the setup and post-processing of defect calculations with widely used DFT software. PyCDT has a user-friendly command-line interface and provides a direct interface with the Materials Project database. This allows for setting up many charged defect calculations for any material of interest, as well as post-processing and applying state-of-the-art electrostatic correction terms. Our paper serves as a documentation for PyCDT, and demonstrates its use in an application to the well-studied GaAs compound semiconductor. We anticipate that the PyCDT code will be useful as a framework for undertaking readily reproducible calculations of charged point-defect properties, and that it will provide a foundation for automated, high-throughput calculations.

  20. PyCDT: A Python toolkit for modeling point defects in semiconductors and insulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broberg, Danny; Medasani, Bharat; Zimmermann, Nils E. R.

    Point defects have a strong impact on the performance of semiconductor and insulator materials used in technological applications, spanning microelectronics to energy conversion and storage. The nature of the dominant defect types, how they vary with processing conditions, and their impact on materials properties are central aspects that determine the performance of a material in a certain application. This information is, however, difficult to access directly from experimental measurements. Consequently, computational methods, based on electronic density functional theory (DFT), have found widespread use in the calculation of point defect properties. Here we have developed the Python Charged Defect Toolkit (PyCDT) to expedite the setup and post-processing of defect calculations with widely used DFT software. PyCDT has a user-friendly command-line interface and provides a direct interface with the Materials Project database. This allows for setting up many charged defect calculations for any material of interest, as well as post-processing and applying state-of-the-art electrostatic correction terms. Our paper serves as a documentation for PyCDT, and demonstrates its use in an application to the well-studied GaAs compound semiconductor. We anticipate that the PyCDT code will be useful as a framework for undertaking readily reproducible calculations of charged point-defect properties, and that it will provide a foundation for automated, high-throughput calculations.

  1. PyCDT: A Python toolkit for modeling point defects in semiconductors and insulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broberg, Danny; Medasani, Bharat; Zimmermann, Nils E. R.

    Point defects have a strong impact on the performance of semiconductor and insulator materials used in technological applications, spanning microelectronics to energy conversion and storage. The nature of the dominant defect types, how they vary with processing conditions, and their impact on materials properties are central aspects that determine the performance of a material in a certain application. This information is, however, difficult to access directly from experimental measurements. Consequently, computational methods, based on electronic density functional theory (DFT), have found widespread use in the calculation of point-defect properties. Here we have developed the Python Charged Defect Toolkit (PyCDT) to expedite the setup and post-processing of defect calculations with widely used DFT software. PyCDT has a user-friendly command-line interface and provides a direct interface with the Materials Project database. This allows for setting up many charged defect calculations for any material of interest, as well as post-processing and applying state-of-the-art electrostatic correction terms. Our paper serves as a documentation for PyCDT, and demonstrates its use in an application to the well-studied GaAs compound semiconductor. As a result, we anticipate that the PyCDT code will be useful as a framework for undertaking readily reproducible calculations of charged point-defect properties, and that it will provide a foundation for automated, high-throughput calculations.

  2. Technical innovation changes standard radiographic protocols in veterinary medicine: is it necessary to obtain two dorsoproximal-palmarodistal oblique views of the equine foot when using computerised radiography systems?

    PubMed

    Whitlock, J; Dixon, J; Sherlock, C; Tucker, R; Bolt, D M; Weller, R

    2016-05-21

    Since the 1950s, veterinary practitioners have included two separate dorsoproximal-palmarodistal oblique (DPr-PaDiO) radiographs as part of a standard series of the equine foot. One image is obtained to visualise the distal phalanx and the other to visualise the navicular bone. However, rapid development of computed radiography and digital radiography and their post-processing capabilities could mean that this practice is no longer required. The aim of this study was to determine differences in perceived image quality between DPr-PaDiO radiographs that were acquired with a computerised radiography system with exposures, centring and collimation recommended for the navicular bone versus images acquired for the distal phalanx but were subsequently manipulated post-acquisition to highlight the navicular bone. Thirty images were presented to four clinicians for quality assessment and graded using a 1-3 scale (1=textbook quality, 2=diagnostic quality, 3=non-diagnostic image). No significant difference in diagnostic quality was found between the original navicular bone images and the manipulated distal phalanx images. This finding suggests that a single DPr-PaDiO image of the distal phalanx is sufficient for an equine foot radiographic series, with appropriate post-processing and manipulation. This change in protocol will result in reduced radiographic study time and decreased patient/personnel radiation exposure. British Veterinary Association.

  3. Total Face, Eyelids, Ears, Scalp, and Skeletal Subunit Transplant Research Procurement: A Translational Simulation Model.

    PubMed

    Sosin, Michael; Ceradini, Daniel J; Hazen, Alexes; Sweeney, Nicole G; Brecht, Lawrence E; Levine, Jamie P; Staffenberg, David A; Saadeh, Pierre B; Bernstein, G Leslie; Rodriguez, Eduardo D

    2016-05-01

    Cadaveric face transplant models are routinely used for technical allograft design, perfusion assessment, and transplant simulation but are associated with substantial limitations. The purpose of this study was to describe the experience of implementing a translational donor research facial procurement and solid organ allograft recovery model. Institutional review board approval was obtained, and a 49-year-old, brain-dead donor was identified for facial vascularized composite allograft research procurement. The family generously consented to donation of solid organs and the total face, eyelids, ears, scalp, and skeletal subunit allograft. The successful sequence of computed tomographic scanning, fabrication and postprocessing of patient-specific cutting guides, tracheostomy placement, preoperative fluorescent angiography, silicone mask facial impression, donor facial allograft recovery, postprocurement fluorescent angiography, and successful recovery of kidneys and liver occurred without any donor instability. Preservation of the bilateral external carotid arteries, facial arteries, occipital arteries, and bilateral thyrolinguofacial and internal jugular veins provided reliable and robust perfusion to the entirety of the allograft. Total time of facial procurement was 10 hours 57 minutes. Essential to clinical face transplant outcomes is the preparedness of the institution, multidisciplinary face transplant team, organ procurement organization, and solid organ transplant colleagues. A translational facial research procurement and solid organ recovery model serves as an educational experience to modify processes and address procedural, anatomical, and logistical concerns for institutions developing a clinical face transplantation program. This methodical approach best simulates the stressors and challenges that can be expected during clinical face transplantation. Therapeutic, V.

  4. Method for automatic detection of wheezing in lung sounds.

    PubMed

    Riella, R J; Nohama, P; Maia, J M

    2009-07-01

    The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. This method is based on the extraction and processing of spectral information from the respiratory cycle and the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed, in order to normalize its spectral information, and its spectrogram is then computed. After this procedure, the spectrogram image is processed by a two-dimensional convolution filter and a half-threshold in order to increase the contrast and isolate its highest amplitude components, respectively. Then, in order to generate more compressed data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which produces an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy for the detection of wheezes when detection is carried out using groups of respiratory cycles obtained from the same person. Also, the system presents the original recorded sound and the post-processed spectrogram image for the user to draw their own conclusions from the data.
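
    A compact sketch of the described processing chain (spectrogram, 2D convolution filter, half-threshold, spectral projection, peak extraction) is given below; the kernel size, segment length, and number of retained peaks are illustrative assumptions, and the multi-layer perceptron classification stage is left out.

    ```python
    import numpy as np
    from scipy.signal import spectrogram
    from scipy.ndimage import convolve

    def wheeze_features(audio, fs):
        # Spectrogram of one (pre-processed) respiratory cycle.
        f, t, sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
        smoothed = convolve(sxx, np.ones((3, 3)) / 9.0)        # 2D convolution filter
        half = smoothed.max() / 2.0
        isolated = np.where(smoothed >= half, smoothed, 0.0)   # half-threshold
        projection = isolated.sum(axis=1)                      # spectral projection
        top = np.argsort(projection)[-5:]                      # highest-magnitude bins
        return np.column_stack([f[top], projection[top]])      # candidate MLP inputs
    ```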

  5. Spectrally and Radiometrically Stable Wide-Band on Board Calibration Source for In-Flight Data Validation in Imaging Spectroscopy Applications

    NASA Technical Reports Server (NTRS)

    Coles, J. B.; Richardson, Brandon S.; Eastwood, Michael L.; Sarture, Charles M.; Quetin, Gregory R.; Hernandez, Marco A.; Kroll, Linley A.; Nolte, Scott H.; Porter, Michael D.; Green, Robert O.

    2011-01-01

    The quality of the quantitative spectral data collected by an imaging spectrometer instrument is critically dependent upon the accuracy of the spectral and radiometric calibration of the system. In order for the collected spectra to be scientifically useful, the calibration of the instrument must be precisely known not only prior to but also during data collection. Thus, in addition to a rigorous in-lab calibration procedure, the airborne instruments designed and built by the NASA/JPL Imaging Spectroscopy Group incorporate an on board calibrator (OBC) system with the instrument to provide auxiliary in-use system calibration data. The output of the OBC source illuminates a target panel on the backside of the foreoptics shutter both before and after data collection. The OBC and in-lab calibration data sets are then used to validate and post-process the collected spectral image data. The resulting accuracy of the spectrometer output data is therefore integrally dependent upon the stability of the OBC source. In this paper we describe the design and application of the latest iteration of this novel device developed at NASA/JPL, which integrates a halogen-cycle source with a precisely designed fiber coupling system and a fiber-based intensity monitoring feedback loop. The OBC source in this Airborne Testbed Spectrometer was run over a period of 15 hours, during which both the radiometric and spectral stability of the output were measured and demonstrated to be within 1% of nominal.

  6. How well can charge transfer inefficiency be corrected? A parameter sensitivity study for iterative correction

    NASA Astrophysics Data System (ADS)

    Israel, Holger; Massey, Richard; Prod'homme, Thibaut; Cropper, Mark; Cordes, Oliver; Gow, Jason; Kohley, Ralf; Marggraf, Ole; Niemi, Sami; Rhodes, Jason; Short, Alex; Verhoeve, Peter

    2015-10-01

    Radiation damage to space-based charge-coupled device detectors creates defects which result in an increasing charge transfer inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing, by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect, and damage is continuing to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology. As a concrete application, we simulate 1.5 × 10^9 `worst-case' galaxy and 1.5 × 10^8 star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. Even if the model used to correct CTI is identical to the model used to add CTI, only 99.68 per cent of the spurious ellipticity is corrected in our setup, because readout noise is not subject to CTI but gets overcorrected during correction. Secondly, assuming the first issue to be solved, the charge trap density will need to be known to within Δρ/ρ = (0.0272 ± 0.0005) per cent and the characteristic release time of the dominant species to within Δτ/τ = (0.0400 ± 0.0004) per cent. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
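
    The correction step referred to here is typically iterative: the forward trailing model is applied to a trial image and the residual is pushed back until the trailed trial image matches the observation. The sketch below shows that generic loop with a placeholder forward model `add_cti`; it is not the Euclid pipeline or its trap model, and the iteration count is an assumption.

    ```python
    import numpy as np

    def correct_cti(observed, add_cti, n_iter=5):
        # add_cti(image) is a user-supplied forward model that adds charge trailing.
        corrected = observed.copy()
        for _ in range(n_iter):
            trailed = add_cti(corrected)
            corrected = corrected + (observed - trailed)   # push residual trailing back
        return corrected
    ```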

  7. Development of Advanced Signal Processing and Source Imaging Methods for Superparamagnetic Relaxometry

    PubMed Central

    Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.

    2017-01-01

    Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of SPION using Super-conducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done in the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bounded SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on reduced chi-square cost-function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and multi-start source modeling approach substantially enhance the accuracy and sensitivity in detecting and localizing sources from the SPMR signals. Furthermore, multi-start approach with regularization provided robust and accurate solutions for a poor SNR condition similar to the SPMR detection sensitivity in the order of 1000 cells. We believe such algorithms will help establishing the industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
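
    As a small illustration of the decay-model fitting and the reduced chi-square adequacy check mentioned above, the sketch below fits a single-exponential model to one sensor channel; the model form, initial guesses, and noise handling are assumptions rather than the SPMR pipeline's actual choices.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_exponential(t, a, tau, c):
        return a * np.exp(-t / tau) + c

    def fit_decay(t, signal, sigma):
        # Fit one decay model and report the reduced chi-square used for model adequacy.
        popt, _ = curve_fit(single_exponential, t, signal,
                            p0=[signal[0] - signal[-1], 1.0, signal[-1]], sigma=sigma)
        residuals = (signal - single_exponential(t, *popt)) / sigma
        dof = len(t) - len(popt)
        return popt, np.sum(residuals ** 2) / dof
    ```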

  8. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model, the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM), using data assimilation techniques: the Shuffled Complex Evolution algorithm and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS to provide options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn-wheat rotation and to a large ecological region (Level II), the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land-cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, which is a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model-inversion algorithms and potential R packages, and can also be applied to other ecological models.

  9. Application of a post-docking procedure based on MM-PBSA and MM-GBSA on single and multiple protein conformations.

    PubMed

    Sgobba, Miriam; Caporuscio, Fabiana; Anighoro, Andrew; Portioli, Corinne; Rastelli, Giulio

    2012-12-01

    In the last decades, molecular docking has emerged as an increasingly useful tool in the modern drug discovery process, but it still needs to overcome many hurdles and limitations, such as accounting for protein flexibility and poor scoring function performance. For this reason, it has been recognized that in many cases docking results need to be post-processed to achieve significant agreement with experimental activities. In this study, we have evaluated the performance of the MM-PBSA and MM-GBSA scoring functions, implemented in our post-docking procedure BEAR, in rescoring docking solutions. For the first time, the performance of this post-docking procedure has been evaluated on six different biological targets (namely estrogen receptor, thymidine kinase, factor Xa, adenosine deaminase, aldose reductase, and enoyl ACP reductase) by using i) both a single and a multiple protein conformation approach, and ii) two different software packages, namely AutoDock and LibDock. The assessment has been based on two of the most important criteria for the evaluation of docking methods, i.e., the ability of known ligands to enrich the top positions of a ranked database with respect to molecular decoys, and the consistency of the docking poses with crystallographic binding modes. We found that, in many cases, MM-PBSA and MM-GBSA are able to yield higher enrichment factors compared to those obtained with the docking scoring functions alone. However, for only a minority of the cases, the enrichment factors obtained by using multiple protein conformations were higher than those obtained by using only one protein conformation. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
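
    For reference, the enrichment-factor criterion used in such evaluations can be computed as sketched below; the score convention (lower score = better predicted binder) and the top fraction are assumptions and should be adapted to the scoring function at hand.

    ```python
    import numpy as np

    def enrichment_factor(scores, is_active, fraction=0.01):
        # Ratio of actives found in the top fraction of the ranked database to the
        # number expected under a random ranking.
        order = np.argsort(scores)                       # best (lowest) scores first
        n_top = max(1, int(round(fraction * len(scores))))
        hits_top = np.sum(np.asarray(is_active)[order][:n_top])
        expected = np.sum(is_active) * fraction
        return hits_top / expected if expected > 0 else np.nan
    ```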

  10. Advances in Global Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Modrak, R. T.; Orsvuran, R.; Smith, J. A.; Komatitsch, D.; Peter, D. B.

    2017-12-01

    Information about Earth's interior comes from seismograms recorded at its surface. Seismic imaging based on spectral-element and adjoint methods has enabled assimilation of this information for the construction of 3D (an)elastic Earth models. These methods account for the physics of wave excitation and propagation by numerically solving the equations of motion, and require the execution of complex computational procedures that challenge the most advanced high-performance computing systems. Current research is petascale; future research will require exascale capabilities. The inverse problem consists of reconstructing the characteristics of the medium from (often noisy) observations. A nonlinear functional is minimized, which involves both the misfit to the measurements and a Tikhonov-type regularization term to tackle inherent ill-posedness. Achieving scalability for the inversion process on tens of thousands of multicore processors is a task that offers many research challenges. We initiated global "adjoint tomography" using 253 earthquakes and produced the first-generation model named GLAD-M15, with a transversely isotropic model parameterization. We are currently running iterations for a second-generation anisotropic model based on the same 253 events. In parallel, we continue iterations for a transversely isotropic model with a larger dataset of 1,040 events to determine higher-resolution plume and slab images. A significant part of our research has focused on eliminating I/O bottlenecks in the adjoint tomography workflow. This has led to the development of a new Adaptable Seismic Data Format based on HDF5, and post-processing tools based on the ADIOS library developed by Oak Ridge National Laboratory. We use the Ensemble Toolkit for workflow stabilization & management to automate the workflow with minimal human interaction.

  11. Methane Post-Processing for Oxygen Loop Closure

    NASA Technical Reports Server (NTRS)

    Greenwood, Zachary W.; Abney, Morgan B.; Miller, Lee

    2016-01-01

    State-of-the-art United States Atmospheric Revitalization carbon dioxide (CO2) reduction is based on the Sabatier reaction process, which recovers approximately 50% of the oxygen (O2) from crew metabolic CO2. Oxygen recovery from carbon dioxide is constrained by the limited availability of reactant hydrogen. Post-processing of methane to recover hydrogen with the Umpqua Research Company Plasma Pyrolysis Assembly (PPA) has the potential to further close the Atmospheric Revitalization oxygen loop. The PPA decomposes methane into hydrogen and hydrocarbons, predominantly acetylene, and a small amount of solid carbon. The hydrogen must then be purified before it can be recycled for additional oxygen recovery. Long duration testing and evaluation of a four crew-member sized PPA and a discussion of hydrogen recycling system architectures are presented.

  12. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
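
    The distinction drawn here between equation-error (least-squares) and output-error estimation can be illustrated with a toy first-order top-oil model: in the output-error approach the model is simulated over the whole record and the parameters minimize the simulated-versus-measured output residual. The sketch below is a generic stand-in with hypothetical names (`simulate_top_oil`, `output_error_fit`) and a simplified model form; it is not MIT's model or the paper's estimator.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate_top_oil(theta, load, ambient, dt):
        # Toy model: tau * dT/dt = k * load^2 - T, forward-Euler integration of the
        # temperature rise T over ambient (float arrays assumed).
        k, tau = theta
        rise = np.zeros(len(load))
        for i in range(1, len(load)):
            rise[i] = rise[i - 1] + dt / tau * (k * load[i - 1] ** 2 - rise[i - 1])
        return ambient + rise

    def output_error_fit(load, ambient, measured, dt, theta0=(10.0, 3.0)):
        # Output-error estimation: fit parameters to the simulated output trajectory.
        residual = lambda th: simulate_top_oil(th, load, ambient, dt) - measured
        return least_squares(residual, theta0).x
    ```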

  13. Mechanical Behavior of Additively Manufactured Uranium-6 wt. pct. Niobium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, A. S.; Wraith, M. W.; Burke, S. C.

    This report describes an effort to process uranium-6 weight% niobium using laser powder bed fusion. The chemistry, crystallography, microstructure and mechanical response resulting from this process are discussed with particular emphasis on the effect of the laser powder bed fusion process on impurities. In an effort to achieve homogenization and uniform mechanical behavior from different builds, as well as to induce a more conventional loading response, we explore post-processing heat treatments on this complex alloy. Elevated temperature heat treatment for recrystallization is evaluated and the effect of recrystallization on mechanical behavior in laser powder bed fusion processed U-6Nb is discussed. Wrought-like mechanical behavior and grain sizes are achieved through post-processing and are reported herein.

  14. Buried object detection in GPR images

    DOEpatents

    Paglieroni, David W; Chambers, David H; Bond, Steven W; Beer, W. Reginald

    2014-04-29

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
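
    The final peak-identification step can be illustrated with a simple local-maximum test on the post-processed energy image, as sketched below; the neighbourhood size, threshold, and function name are illustrative and unrelated to the patented system's actual parameters.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def detect_peaks(energy, threshold, size=5):
        # Flag pixels that are local maxima within a size x size neighbourhood and
        # exceed the detection threshold; returns their (row, col) indices.
        local_max = energy == maximum_filter(energy, size=size)
        return np.argwhere(local_max & (energy > threshold))
    ```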

  15. Reducing the Requirements and Cost of Astronomical Telescopes

    NASA Technical Reports Server (NTRS)

    Smith, W. Scott; Whitakter, Ann F. (Technical Monitor)

    2002-01-01

    Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In an historical perspective, great strides have been made in the areas of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory hold the promise of yielding more information from observational data through significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibilities of increasing the yield of scientific data through the migration of computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware, making the next generation of instruments more affordable.

  16. Space Launch System Ascent Static Aerodynamic Database Development

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T.; Bennett, David W.; Blevins, John A.; Erickson, Gary E.; Favaregh, Noah M.; Houlden, Heather P.; Tomek, William G.

    2014-01-01

    This paper describes the wind tunnel testing work and data analysis required to characterize the static aerodynamic environment of the ascent portion of flight for NASA's Space Launch System (SLS). Scaled models of the SLS have been tested in transonic and supersonic wind tunnels to gather the high-fidelity data that are used to build aerodynamic databases. A detailed description of the wind tunnel test that was conducted to produce the latest version of the database is presented, and a representative set of aerodynamic data is shown. The wind tunnel data quality remains very high; however, some concerns with wall interference effects through transonic Mach numbers are also discussed. Post-processing and analysis of the wind tunnel dataset are crucial for the development of a formal ascent aerodynamics database.

  17. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.

  18. Development of methods for the analysis of multi-mode TFM images

    NASA Astrophysics Data System (ADS)

    Sy, K.; Bredif, P.; Iakovleva, E.; Roy, O.; Lesselier, D.

    2018-05-01

    TFM (Total Focusing Method) is an advanced post-processing imaging algorithm for ultrasonic array data that shows good potential in defect detection and characterization. It can be employed using any of an infinite number of possible paths between the transducer and the focusing point. Depending upon the geometry and the characteristics of the defect in a given part, different modes are appropriate for the defect reconstruction. Furthermore, non-physical indications can be observed, which are prone to misinterpretation. These imaging artifacts are due to the coexistence of several contributions involving several modes of propagation and interactions with possible defects and/or the geometry of the part. Two methods for filtering artifacts and reducing the number of TFM images are developed and illustrated.
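
    For orientation, the basic single-mode TFM kernel (delay-and-sum of Full Matrix Capture data over transmit and receive ray paths in a single medium) is sketched below; multi-mode imaging reuses the same kernel with mode-dependent travel times, and the artifact-filtering methods developed in the paper are not reproduced here. Array geometry and variable names are assumptions.

    ```python
    import numpy as np

    def tfm_image(fmc, t, tx_pos, rx_pos, xs, zs, c):
        # fmc[tx, rx, time]: Full Matrix Capture data; tx_pos/rx_pos: element x-coordinates
        # on the surface z = 0; xs, zs: image grid; c: wave speed (direct L-L mode).
        dt = t[1] - t[0]
        image = np.zeros((len(zs), len(xs)))
        for iz, z in enumerate(zs):
            for ix, x in enumerate(xs):
                d_tx = np.hypot(x - tx_pos, z) / c      # transmit travel times
                d_rx = np.hypot(x - rx_pos, z) / c      # receive travel times
                idx = np.rint((d_tx[:, None] + d_rx[None, :]) / dt).astype(int)
                idx = np.clip(idx, 0, fmc.shape[2] - 1)
                image[iz, ix] = np.abs(
                    fmc[np.arange(len(tx_pos))[:, None],
                        np.arange(len(rx_pos))[None, :], idx].sum())
        return image
    ```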

  19. Forecasting, Forecasting

    Treesearch

    Michael A. Fosberg

    1987-01-01

    Future improvements in the meteorological forecasts used in fire management will come from improvements in three areas: observational systems, forecast techniques, and postprocessing of forecasts and better integration of this information into the fire management process.

  20. Quantitative Characterizations of Ultrashort Echo (UTE) Images for Supporting Air-Bone Separation in the Head

    PubMed Central

    Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.

    2015-01-01

    Accurate separation of air and bone is critical for creating synthetic CT from MRI to support Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institution review board-approved prospective protocol. The two MRI sequences tested were ultra-short TE imaging using 3D radial acquisition (UTE), and using pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity-gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, by comparing predicted (defined by MR images) versus “true” regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to without corrections. When expanding the threshold-defined air volumes, as expected, sensitivity of air identification decreased with an increase in specificity of bone discrimination, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. Post-processing strategies improved the discriminatory power of air from bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both postprocessed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
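
    A schematic version of the ROC/AUC evaluation and of the air-mask expansion step is given below; the label construction, the intensity sign convention (low UTE/PETRA intensity taken to indicate air), and the 1 voxel ≈ 1 mm assumption are illustrative, not the study's exact protocol, and the function names are hypothetical.

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation
    from sklearn.metrics import roc_auc_score

    def air_discrimination_auc(mr_intensity, ct_air, ct_bone):
        # AUC of MR intensity for separating CT-defined air from bone voxels.
        voxels = ct_air | ct_bone
        labels = ct_air[voxels].astype(int)        # 1 = air, 0 = bone
        return roc_auc_score(labels, -mr_intensity[voxels])

    def expanded_air_mask(mr_intensity, threshold, dilate_voxels=1):
        # Threshold-defined air region expanded by a given number of voxels,
        # mimicking the "air mask expansion" step (1 voxel ~ 1 mm assumed).
        return binary_dilation(mr_intensity < threshold, iterations=dilate_voxels)
    ```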
