Sample records for subsequent computational analysis

  1. A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription

    ERIC Educational Resources Information Center

    Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.

    2012-01-01

    The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…
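
    As an illustration of the kind of underlying value the abstract refers to, the short Python sketch below (not the authors' procedure) maps a transcription string to the hexadecimal Unicode code points a computer stores and processes; the sample string is hypothetical.

      # Minimal sketch: map an IPA transcription string to the hexadecimal
      # Unicode code points used to store and process it.
      # Illustration only, not the procedure described in the paper.

      def transcription_to_codepoints(transcription: str) -> list:
          """Return (symbol, hex code point) pairs for each character."""
          return [(ch, f"U+{ord(ch):04X}") for ch in transcription]

      if __name__ == "__main__":
          sample = "pæl\u0250t"          # hypothetical transcription fragment
          for symbol, codepoint in transcription_to_codepoints(sample):
              print(symbol, codepoint)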

  2. A computer-controlled scintiscanning system and associated computer graphic techniques for study of regional distribution of blood flow.

    NASA Technical Reports Server (NTRS)

    Coulam, C. M.; Dunnette, W. H.; Wood, E. H.

    1970-01-01

    Two methods whereby a digital computer may be used to regulate a scintiscanning process are discussed from the viewpoint of computer input-output software. The computer's function, in this case, is to govern the data acquisition and storage, and to display the results to the investigator in a meaningful manner, both during and subsequent to the scanning process. Several methods (such as three-dimensional maps, contour plots, and wall-reflection maps) have been developed by means of which the computer can graphically display the data on-line, for real-time monitoring purposes, during the scanning procedure and subsequently for detailed analysis of the data obtained. A computer-governed method for converting scintiscan data recorded over the dorsal or ventral surfaces of the thorax into fractions of pulmonary blood flow traversing the right and left lungs is presented.

  3. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
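
    For readers unfamiliar with the technique, the sketch below shows a generic "exact DMD" computation over a snapshot matrix; it is a minimal stand-in, not the authors' shock-fitting solver, and the data are random placeholders. The returned eigenvalues play the role of the stability spectrum (growth rate in the real part, frequency in the imaginary part).

      import numpy as np

      # Generic "exact DMD" sketch: given snapshots X[:, k] of a linearized
      # solution sampled every dt, the DMD eigenvalues approximate the
      # stability spectrum. Not the authors' implementation.

      def dmd_eigenvalues(snapshots: np.ndarray, dt: float, rank: int = 10):
          X, Y = snapshots[:, :-1], snapshots[:, 1:]
          U, s, Vh = np.linalg.svd(X, full_matrices=False)
          U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
          # Low-rank approximation of the linear propagator Y ≈ A X.
          A_tilde = U.conj().T @ Y @ V / s
          mu = np.linalg.eigvals(A_tilde)
          # Continuous-time eigenvalues: growth rate (real), frequency (imag).
          return np.log(mu) / dt

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          data = rng.standard_normal((200, 60))   # placeholder snapshot matrix
          print(dmd_eigenvalues(data, dt=0.01, rank=5))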

  4. Analysis of Compton continuum measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gold, R.; Olson, I. K.

    1970-01-01

    Five computer programs (COMPSCAT, FEND, GABCO, DOSE, and COMPLOT) have been developed and used for the analysis and subsequent reduction of measured energy distributions of Compton recoil electrons to continuous gamma spectra. In addition to detailed descriptions of these computer programs, the relationships among these codes are stressed. The manner in which these programs function is illustrated by tracing a sample measurement through a complete cycle of the data-reduction process.

  5. The use of computer graphics in the visual analysis of the proposed Sunshine Ski Area expansion

    Treesearch

    Mark Angelo

    1979-01-01

    This paper describes the use of computer graphics in designing part of the Sunshine Ski Area in Banff National Park. The program used was capable of generating perspective landscape drawings from a number of different viewpoints. This allowed managers to predict, and subsequently reduce, the adverse visual impacts of ski-run development. Computer graphics have proven,...

  6. NEDLite user's manual: forest inventory for Palm OS handheld computers

    Treesearch

    Peter D. Knopp; Mark J. Twery

    2006-01-01

    A user's manual for NEDLite, software that enables collection of forest inventory data on Palm OS handheld computers, with the option of transferring data into NED software for analysis and subsequent prescription development. NEDLite software is included. Download the NEDLite software at: http://www.fs.fed.us/ne/burlington/ned

  7. Submillisecond Optical Knife-Edge Testing

    NASA Technical Reports Server (NTRS)

    Thurlow, P.

    1983-01-01

    Fast computer-controlled sampling of the optical knife-edge response (KER) signal increases the accuracy of optical-system aberration measurements. Submicrosecond-response detectors in the optical focal plane convert optical signals to electrical signals, which are digitized, sampled, and fed into a computer for storage and subsequent analysis. The optical data are virtually free of the effects of index-of-refraction gradients.

  8. Computational Analysis of Gravitational Effects in Low-Density Gas Jets

    NASA Technical Reports Server (NTRS)

    Satti, Rajani P.; Agrawal, Ajay K.

    2004-01-01

    This study deals with the computational analysis of buoyancy-induced instability in the nearfield of an isothermal helium jet injected into quiescent ambient air environment. Laminar, axisymmetric, unsteady flow conditions were considered for the analysis. The transport equations of helium mass fraction coupled with the conservation equations of mixture mass and momentum were solved using a staggered grid finite volume method. The jet Richardson numbers of 1.5 and 0.018 were considered to encompass both buoyant and inertial jet flow regimes. Buoyancy effects were isolated by initiating computations in Earth gravity and subsequently, reducing gravity to simulate the microgravity conditions. Computed results concur with experimental observations that the periodic flow oscillations observed in Earth gravity subside in microgravity.

  9. Comparative Modeling of Proteins: A Method for Engaging Students' Interest in Bioinformatics Tools

    ERIC Educational Resources Information Center

    Badotti, Fernanda; Barbosa, Alan Sales; Reis, André Luiz Martins; do Valle, Ítalo Faria; Ambrósio, Lara; Bitar, Mainá

    2014-01-01

    The huge increase in data produced in the genomic era has created a need to incorporate computers into the research process. Sequence generation and its subsequent storage, interpretation, and analysis are now entirely computer-dependent tasks. Universities from all over the world have been challenged to seek a way of encouraging students to…

  10. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer-assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and depends upon initial preprocessing of a LANDSAT computer-compatible tape using software developed for an IBM 370/158 computer. In-house preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general-purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and for image analysis using either of the two approaches to low-cost LANDSAT data processing are described.

  11. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program, which is also capable of solving complex multidisciplinary problems, are presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis, yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available for a number of terminals.

  12. Identifying controlling variables for math computation fluency through experimental analysis: the interaction of stimulus control and reinforcing consequences.

    PubMed

    Hofstadter-Duke, Kristi L; Daly, Edward J

    2015-03-01

    This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.

  13. Inlet Development for a Rocket Based Combined Cycle, Single Stage to Orbit Vehicle Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    DeBonis, J. R.; Trefny, C. J.; Steffen, C. J., Jr.

    1999-01-01

    Design and analysis of the inlet for a rocket-based combined-cycle engine are discussed. Computational fluid dynamics was used in both the design and the subsequent analysis. Reynolds-averaged Navier-Stokes simulations were performed using both perfect-gas and real-gas assumptions. An inlet design that operates over the required Mach number range from 0 to 12 was produced. Performance data for cycle analysis were post-processed using a stream-thrust averaging technique. A detailed performance database for cycle analysis is presented. The effect of vehicle forebody compression on air capture is also examined.

  14. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometer resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties, because imaging at micron or sub-micron resolution can only be performed on samples of millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional materials science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow on different rock samples as well as biological and food science materials, and have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) characterization of microtomography for extremely large data sets - our current capability; 2) computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost, and accuracy; 3) solid mechanical computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency; 4) extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from a current research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.

  15. Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers

    NASA Technical Reports Server (NTRS)

    Guru Prasad, K.; Kane, J. H.

    1992-01-01

    The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
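
    A minimal SciPy sketch of the reuse idea described above (the original work used boundary element codes, not this toy system): factor a preconditioner once and warm-start the iterative solver with the previous solution when re-solving a slightly perturbed system.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Sketch of the reanalysis idea (a SciPy stand-in, not the BEA solver):
      # build a preconditioner once, then reuse it and the previous solution
      # as the initial guess when the system is only slightly perturbed.

      n = 500
      rng = np.random.default_rng(1)
      A = sp.random(n, n, density=0.01, random_state=1) + sp.eye(n) * 10.0
      A = sp.csc_matrix(A)
      b = rng.standard_normal(n)

      ilu = spla.spilu(A)                                  # factor once
      M = spla.LinearOperator((n, n), matvec=ilu.solve)    # preconditioner

      x0, info = spla.gmres(A, b, M=M)                     # baseline solve

      # Perturbed ("reanalysis") system: same preconditioner, warm start.
      A_pert = sp.csc_matrix(A + sp.eye(n) * 0.05)
      x1, info = spla.gmres(A_pert, b, x0=x0, M=M)
      print("converged" if info == 0 else f"info={info}")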

  16. Differential Measurement Periodontal Structures Mapping System

    NASA Technical Reports Server (NTRS)

    Companion, John A. (Inventor)

    1998-01-01

    This invention relates to a periodontal structure mapping system employing a dental handpiece containing first and second acoustic sensors for locating the Cemento-Enamel Junction (CEJ) and measuring the differential depth between the CEJ and the bottom of the periodontal pocket. Measurements are taken at multiple locations on each tooth of a patient, observed, analyzed by an optical analysis subsystem, and archived by a data storage system for subsequent study and comparison with previous and subsequent measurements. Ultrasonic transducers for the first and second acoustic sensors are contained within the handpiece and in connection with a control computer. Pressurized water is provided for the depth measurement sensor and a linearly movable probe sensor serves as the sensor for the CEJ finder. The linear movement of the CEJ sensor is obtained by a control computer actuated by the prober. In an alternate embodiment, the CEJ probe is an optical fiber sensor with appropriate analysis structure provided therefor.

  17. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  18. The detection and analysis of point processes in biological signals

    NASA Technical Reports Server (NTRS)

    Anderson, D. J.; Correia, M. J.

    1977-01-01

    A pragmatic approach to the detection and analysis of discrete events in biomedical signals is taken. Examples from both clinical and basic research are provided. Introductory sections discuss not only discrete events which are easily extracted from recordings by conventional threshold detectors but also events embedded in other information carrying signals. The primary considerations are factors governing event-time resolution and the effects limits to this resolution have on the subsequent analysis of the underlying process. The analysis portion describes tests for qualifying the records as stationary point processes and procedures for providing meaningful information about the biological signals under investigation. All of these procedures are designed to be implemented on laboratory computers of modest computational capacity.

  19. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
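
    The histogram-of-oriented-gradients descriptor named in the abstract is available in scikit-image; the sketch below computes it for a placeholder grayscale patch and only illustrates the feature-extraction step, not the paper's full shape-analysis pipeline.

      import numpy as np
      from skimage.feature import hog

      # Sketch: compute the histogram-of-oriented-gradients (HOG) descriptor
      # for a grayscale patch. The image here is a random placeholder; a real
      # micrograph would be loaded instead.

      patch = np.random.default_rng(0).random((128, 128))

      features = hog(
          patch,
          orientations=9,
          pixels_per_cell=(8, 8),
          cells_per_block=(2, 2),
          block_norm="L2-Hys",
          feature_vector=True,
      )
      print(features.shape)   # one long descriptor vector per patch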

  20. Development of an engineering analysis of progressive damage in composites during low velocity impact

    NASA Technical Reports Server (NTRS)

    Humphreys, E. A.

    1981-01-01

    A computerized, analytical methodology was developed to study damage accumulation during low velocity lateral impact of layered composite plates. The impact event was modeled as perfectly plastic with complete momentum transfer to the plate structure. A transient dynamic finite element approach was selected to predict the displacement time response of the plate structure. Composite ply and interlaminar stresses were computed at selected time intervals and subsequently evaluated to predict layer and interlaminar damage. The effects of damage on elemental stiffness were then incorporated back into the analysis for subsequent time steps. Damage predicted included fiber failure, matrix ply failure and interlaminar delamination.

  1. CLAYFORM: a FORTRAN 77 computer program apportioning the constituents in the chemical analysis of a clay or other silicate mineral into a structural formula

    USGS Publications Warehouse

    Bodine, M.W.

    1987-01-01

    The FORTRAN 77 computer program CLAYFORM apportions the constituents of a conventional chemical analysis of a silicate mineral into a user-selected structure formula. If requested, such as for a clay mineral or other phyllosilicate, the program distributes the structural formula components into appropriate default or user-specified structural sites (tetrahedral, octahedral, interlayer, hydroxyl, and molecular water sites), and for phyllosilicates calculates the layer (tetrahedral, octahedral, and interlayer) charge distribution. The program also creates data files of entered analyses for subsequent reuse. © 1987.
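
    A minimal Python sketch of the arithmetic such a program performs (a generic illustration, not the CLAYFORM code): convert oxide weight percentages to moles of cations and oxygen equivalents, then scale the cations to a chosen anion basis such as the 11-oxygen basis commonly used for 2:1 phyllosilicates. The oxide table and the muscovite-like analysis are illustrative.

      # Generic structural-formula arithmetic (not the CLAYFORM code itself):
      # oxide wt% -> moles of cations and oxygens -> cations per chosen
      # oxygen basis, e.g. 11 oxygen equivalents (O10(OH)2) for 2:1 clays.

      # oxide: (molecular weight, cations per oxide, oxygens per oxide)
      OXIDES = {
          "SiO2":  (60.08, 1, 2),
          "Al2O3": (101.96, 2, 3),
          "MgO":   (40.30, 1, 1),
          "K2O":   (94.20, 2, 1),
      }

      def structural_formula(wt_percent: dict, oxygen_basis: float = 11.0) -> dict:
          cation_moles, oxygen_moles = {}, 0.0
          for oxide, wt in wt_percent.items():
              mw, n_cat, n_ox = OXIDES[oxide]
              moles = wt / mw
              cation_moles[oxide] = moles * n_cat
              oxygen_moles += moles * n_ox
          scale = oxygen_basis / oxygen_moles
          return {ox: round(n * scale, 3) for ox, n in cation_moles.items()}

      # Hypothetical muscovite-like analysis (wt%), for illustration only.
      print(structural_formula({"SiO2": 45.2, "Al2O3": 38.4, "K2O": 11.8}))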

  2. Geometry program for aerodynamic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1973-01-01

    A computer program that provides the geometry and boundary conditions appropriate for an analysis of a lifting, thin wing with control surfaces in linearized, subsonic, steady flow is presented. The kernel function method lifting surface theory is applied. The data which is generated by the program is stored on disk files or tapes for later use by programs which calculate an influence matrix, plot the wing planform, and evaluate the loads on the wing. In addition to processing data for subsequent use in a lifting surface analysis, the program is useful for computing area and mean geometric chords of the wing and control surfaces.

  3. Developments in the application of the geometrical theory of diffraction and computer graphics to aircraft inter-antenna coupling analysis

    NASA Astrophysics Data System (ADS)

    Bogusz, Michael

    1993-01-01

    The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well established aircraft antenna to antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.

  4. Response surface method in geotechnical/structural analysis, phase 1

    NASA Astrophysics Data System (ADS)

    Wong, F. S.

    1981-02-01

    In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of the response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
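
    The response-surface idea can be sketched in a few lines: fit an inexpensive quadratic surrogate to a handful of runs of the long-running code (here a stand-in function), then perform the repetitive statistical computations on the surrogate. This is a generic illustration, not the report's slope-stability model.

      import numpy as np

      # Sketch of the response-surface idea: fit a quadratic surrogate to a
      # few expensive "code" runs, then reuse it for cheap repeated
      # (Monte Carlo) evaluations. expensive_code() is a stand-in function.

      def expensive_code(x):
          return 1.0 + 2.0 * x[0] - 0.5 * x[1] + 0.3 * x[0] * x[1]

      def quadratic_features(X):
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

      rng = np.random.default_rng(0)
      X_train = rng.uniform(-1.0, 1.0, size=(8, 2))        # few code calculations
      y_train = np.array([expensive_code(x) for x in X_train])

      coef, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)

      # Statistical analysis on the surrogate instead of the code.
      X_mc = rng.normal(0.0, 0.3, size=(100_000, 2))
      y_mc = quadratic_features(X_mc) @ coef
      print(f"mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")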

  5. Prosodic analysis by rule

    NASA Astrophysics Data System (ADS)

    Lindsay, D.

    1985-02-01

    Research on the automatic computer analysis of intonation using linguistic knowledge is described. The use of computer programs to analyze and classify fundamental frequency (F0) contours, and work on the psychophysics of British English intonation and on the phonetics of F0 contours, are described. Results suggest that F0 can be conveniently tracked to represent intonation through time, and the resulting track can subsequently be used by a computer program as the basis for analysis. Nuclear intonation was studied, where the intonational nucleus is the region of auditory prominence, or information focus, found in all spoken sentences. The main mechanism behind such prominence is the perception of an extensive F0 movement on the nuclear syllable. A classification of the nuclear contour shape is a classification of the sentence type, often into categories that cannot be readily determined from only the segmental phonemes of the utterance.

  6. Creation of a computer self-efficacy measure: analysis of internal consistency, psychometric properties, and validity.

    PubMed

    Howard, Matt C

    2014-10-01

    Computer self-efficacy is an often-studied construct that has been shown to be related to an array of important individual outcomes. Unfortunately, existing measures of computer self-efficacy suffer from several deficiencies, including criterion contamination, outdated wording, and/or inadequate psychometric properties. For this reason, the current article presents the creation of a new computer self-efficacy measure. In Study 1, an over-representative item list is created and subsequently reduced through exploratory factor analysis to create an initial measure, and the discriminant validity of this initial measure is tested. In Study 2, the unidimensional factor structure of the initial measure is supported through confirmatory factor analysis and further reduced into a final, 12-item measure. In Study 3, the convergent and criterion validity of the 12-item measure is tested. Overall, this three-study process demonstrates that the new computer self-efficacy measure has superb psychometric properties and internal reliability, and demonstrates excellent evidence for several aspects of validity. It is hoped that the 12-item computer self-efficacy measure will be utilized in future research on computer self-efficacy, which is discussed in the current article.

  7. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
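
    In generic form (not the ASM-REL/TFH equations themselves), the adjoint sensitivity procedure for a forward model N(u; alpha) = 0 and a response R(u; alpha) can be summarized as

      \frac{\partial N}{\partial u}\,\delta u = -\frac{\partial N}{\partial \alpha}\,\delta\alpha,
      \qquad
      \left(\frac{\partial N}{\partial u}\right)^{T}\psi = \frac{\partial R}{\partial u},
      \qquad
      \frac{dR}{d\alpha} = \frac{\partial R}{\partial \alpha} - \psi^{T}\,\frac{\partial N}{\partial \alpha},

    so a single adjoint solve for \psi yields the sensitivities of one response to all parameters \alpha (for example, all initial conditions), which is what makes the procedure efficient for time-dependent temperature responses such as the cladding temperature.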

  8. Laboratory Connections: Using LOGO in the Science Laboratory.

    ERIC Educational Resources Information Center

    Kolodiy, George Oleh

    1991-01-01

    Described is a LOGO computer program that enables students to investigate the relationship between a digital number and the resistance in a variable resistor used to generate that number. Likewise, actual temperature readings and the corresponding resistance within a thermistor can be used for data gathering and subsequent analysis. (JJK)

  9. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    NASA Technical Reports Server (NTRS)

    Faust, N.; Jordon, L.

    1981-01-01

    Since the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970's, geographic data analysis subsequently moved from large computers to minicomputers and now to microcomputers with radical reduction in the costs associated with planning analyses. Programs designed to process LANDSAT data to be used as one element in a geographic data base were used once NIMGRID (new IMGRID), a raster oriented geographic information system, was implemented on the microcomputer. Programs for training field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color infrared format. The basic microcomputer hardware needed to perform NIMGRID and most LANDSAT analyses is listed as well as the software available for LANDSAT processing.

  10. High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems

    NASA Technical Reports Server (NTRS)

    Makivic, Miloje S.

    1996-01-01

    This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.

  11. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.

  12. The effects of an educational meeting and subsequent computer reminders on the ordering of laboratory tests by rheumatologists: an interrupted time series analysis.

    PubMed

    Lesuis, Nienke; den Broeder, Nathan; Boers, Nadine; Piek, Ester; Teerenstra, Steven; Hulscher, Marlies; van Vollenhoven, Ronald; den Broeder, Alfons A

    2017-01-01

    To examine the effects of an educational meeting and subsequent computer reminders on the number of ordered laboratory tests. Using interrupted time series analysis, we assessed whether trends in the number of laboratory tests ordered by rheumatologists between September 2012 and September 2015 at the Sint Maartenskliniek (the Netherlands) changed following an educational meeting (September 2013) and the introduction of computer reminders into the Computerised Physician Order Entry System (July 2014). The analyses were done for the set of tests on which both interventions had focussed (intervention tests; complement, cryoglobulins, immunoglobulins, myeloma protein) and a set of control tests unrelated to the interventions (alanine transferase, anti-cyclic citrullinated peptide, C-reactive protein, creatine, haemoglobin, leukocytes, mean corpuscular volume, rheumatoid factor and thrombocytes). At the start of the study, 101 intervention tests and 7660 control tests were ordered per month by the rheumatologists. After the educational meeting, the level and trend of ordered intervention and control tests did not change significantly. After implementation of the reminders, the level of ordered intervention tests decreased by 85.0 tests (95%-CI -133.3 to -36.8, p<0.01), while the level of control tests did not change following the introduction of reminders. In summary, an educational meeting alone was not effective in decreasing the number of ordered intervention tests, but the combination with computer reminders did result in a large decrease of those tests. Therefore, we recommend using computer reminders in addition to education if reduction of inappropriate test use is aimed for.
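
    A minimal statsmodels sketch of the segmented-regression form commonly used for interrupted time series (illustrative variable names and simulated monthly counts, one intervention rather than the study's two):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Minimal segmented-regression sketch for an interrupted time series.
      # Simulated data, one intervention; the study modelled two.

      rng = np.random.default_rng(0)
      n_months, intervention = 36, 24
      df = pd.DataFrame({"month": np.arange(n_months)})
      df["post"] = (df["month"] >= intervention).astype(int)          # level change
      df["months_since"] = np.maximum(0, df["month"] - intervention)  # trend change
      df["tests"] = (100 - 0.5 * df["month"] - 60 * df["post"]
                     + rng.normal(0, 5, n_months)).round()

      model = smf.ols("tests ~ month + post + months_since", data=df).fit()
      print(model.params)          # level drop after the intervention ≈ -60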

  13. The measurement of boundary layers on a compressor blade in cascade. Volume 1: Experimental technique, analysis and results

    NASA Technical Reports Server (NTRS)

    Zierke, William C.; Deutsch, Steven

    1989-01-01

    Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise data base with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data were acquired at a chord Reynolds number of 500,000 and at three incidence angles. Moreover, these measurements have supplied some physical insight into these very complex flows. Although some natural transition is evident, laminar boundary layers usually detach and subsequently reattach as either fully or intermittently turbulent boundary layers. These transitional separation bubbles play an important role in the development of most of the boundary layers and wakes measured in this cascade and the modeling or computing of these bubbles should prove to be the key aspect in computing the entire cascade flow field. In addition, the nonequilibrium turbulent boundary layers on these highly loaded blades always have some region of separation near the trailing edge of the suction surface. These separated flows, as well as the subsequent near wakes, show no similarity and should prove to be a challenging test for the viscous computational codes.

  14. Route profile analysis system and method

    DOEpatents

    Mullenhoff, Donald J.; Wilson, Stephen W.

    1986-01-01

    A system for recording terrain profile information is disclosed. The system accurately senses incremental distances traveled by a vehicle along with vehicle inclination, recording both with elapsed time. The incremental distances can subsequently be differentiated with respect to time to obtain acceleration. The acceleration can then be used by the computer to correct the sensed inclination.
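
    The processing step described here reduces to simple numerical differentiation; a brief numpy sketch with synthetic numbers:

      import numpy as np

      # Sketch of the recorded-data processing described in the patent text:
      # incremental distances with elapsed time are differentiated to obtain
      # speed and then acceleration (synthetic numbers for illustration).

      t = np.linspace(0.0, 10.0, 101)                 # elapsed time, s
      increments = 0.5 + 0.02 * t                     # distance per sample, m
      distance = np.cumsum(increments)

      speed = np.gradient(distance, t)                # m/s
      acceleration = np.gradient(speed, t)            # m/s^2, then used to
                                                      # correct the inclination
      print(acceleration[:5])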

  15. Route profile analysis system and method

    DOEpatents

    Mullenhoff, D.J.; Wilson, S.W.

    1982-07-29

    A system for recording terrain profile information is disclosed. The system accurately senses incremental distances traveled by a vehicle along with vehicle inclination, recording both with elapsed time. The incremental distances can subsequently be differentiated with respect to time to obtain acceleration. The acceleration can then be used by the computer to correct the sensed inclination.

  16. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
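
    As a point of reference for the screening cost quoted above, the sketch below implements a simple radial one-at-a-time elementary-effects screen (a related technique, not the authors' exact sequential algorithm); its cost is roughly trajectories x (n_params + 1) model runs, i.e. on the order of ten times the number of parameters. The toy model stands in for a hydrologic model such as mHM.

      import numpy as np

      # Radial one-at-a-time elementary-effects screening sketch (related to,
      # but not identical with, the authors' sequential screening method).

      def elementary_effects(model, n_params, trajectories=10, delta=0.1, seed=0):
          rng = np.random.default_rng(seed)
          effects = np.zeros((trajectories, n_params))
          for t in range(trajectories):
              x = rng.uniform(0.0, 1.0 - delta, size=n_params)
              y0 = model(x)
              for i in rng.permutation(n_params):
                  x_step = x.copy()
                  x_step[i] += delta
                  effects[t, i] = (model(x_step) - y0) / delta
          return np.abs(effects).mean(axis=0)          # mu* screening measure

      def toy_model(x):                                # stand-in for e.g. mHM
          return 3.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]

      mu_star = elementary_effects(toy_model, n_params=3)
      print(mu_star)   # small mu* flags a (nearly) noninformative parameter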

  17. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  18. The Use of Computer-Mediated Communication To Enhance Subsequent Face-to-Face Discussions.

    ERIC Educational Resources Information Center

    Dietz-Uhler, Beth; Bishop-Clark, Cathy

    2001-01-01

    Describes a study of undergraduate students that assessed the effects of synchronous (Internet chat) and asynchronous (Internet discussion board) computer-mediated communication on subsequent face-to-face discussions. Results showed that face-to-face discussions preceded by computer-mediated communication were perceived to be more enjoyable.…

  19. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  20. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.

  1. Dissecting Sequences of Regulation and Cognition: Statistical Discourse Analysis of Primary School Children's Collaborative Learning

    ERIC Educational Resources Information Center

    Molenaar, Inge; Chiu, Ming Ming

    2014-01-01

    Extending past research showing that regulative activities (metacognitive and relational) can aid learning, this study tests whether sequences of cognitive, metacognitive and relational activities affect subsequent cognition. Scaffolded by a computer avatar, 54 primary school students (working in 18 groups of 3) discussed writing a report about a…

  2. Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress

    PubMed Central

    Fu, Longwen; Liu, Zuoyi

    2018-01-01

    Video-tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods. The ability to track multiple biological organisms on video has improved greatly in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present pioneering work on the precise tracking of groups of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxicity analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning approaches are explained along with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of recently developed deep learning approaches for toxicity prediction are presented. PMID:29849612
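
    A minimal OpenCV sketch of the video-tracking front end such systems build on (background subtraction plus per-frame centroids); the video file name is hypothetical and the thresholds are arbitrary.

      import cv2

      # Minimal video-tracking sketch in the spirit of the reviewed systems;
      # "fish_tank.avi" is a hypothetical file name.

      cap = cv2.VideoCapture("fish_tank.avi")
      subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)                       # moving pixels
          mask = cv2.medianBlur(mask, 5)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          for c in contours:
              if cv2.contourArea(c) < 50:                      # ignore noise
                  continue
              m = cv2.moments(c)
              cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
              print(cx, cy)           # per-frame centroid -> behavioural trace
      cap.release()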

  3. From the CMS Computing Experience in the WLCG STEP'09 Challenge to the First Data Taking of the LHC Era

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Gutsche, O.

    The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of the LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all 4 LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 workflows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites, as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we report on the tests performed and present their post-mortem analysis.

  4. Elastic-plastic analysis of a propagating crack under cyclic loading

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Armen, H., Jr.

    1974-01-01

    Development and application of a two-dimensional finite-element analysis to predict crack-closure and crack-opening stresses during specified histories of cyclic loading. An existing finite-element computer program which accounts for elastic-plastic material behavior under cyclic loading was modified to account for changing boundary conditions - crack growth and intermittent contact of crack surfaces. This program was subsequently used to study the crack-closure behavior under constant-amplitude and simple block-program loading.

  5. Automatic microscopy for mitotic cell location.

    NASA Technical Reports Server (NTRS)

    Herron, J.; Ranshaw, R.; Castle, J.; Wald, N.

    1972-01-01

    Advances are reported in the development of an automatic microscope with which to locate hematologic or other cells in mitosis for subsequent chromosome analysis. The system under development is designed to perform the functions of: slide scanning to locate metaphase cells; conversion of images of selected cells into binary form; and on-line computer analysis of the digitized image for significant cytogenetic data. Cell detection criteria are evaluated using a test sample of 100 mitotic cells and 100 artifacts.

  6. Association between background parenchymal enhancement of breast MRI and BIRADS rating change in the subsequent screening

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin

    2018-03-01

    Although breast magnetic resonance imaging (MRI) has been used as a breast cancer screening modality for high-risk women, its cancer detection yield remains low (i.e., <= 3%). Thus, increasing breast MRI screening efficacy and cancer detection yield is an important clinical issue in breast cancer screening. In this study, we investigated the association between the background parenchymal enhancement (BPE) of breast MRI and the change of diagnostic (BIRADS) status in the subsequent breast MRI screening. A dataset with 65 breast MRI screening cases was retrospectively assembled. All cases were rated BIRADS-2 (benign findings). In the subsequent screening, 4 cases were malignant (BIRADS-6), 48 remained BIRADS-2 and 13 were downgraded to negative (BIRADS-1). A computer-aided detection scheme was applied to process images of the first set of breast MRI screening. A total of 33 features was computed, including texture features and global BPE features. Texture features were computed from either a gray-level co-occurrence matrix or a gray-level run-length matrix. Ten global BPE features were also initially computed from two breast regions and the bilateral difference between the left and right breasts. Box-plot-based analysis shows a positive association between texture features and BIRADS rating levels in the second screening. Furthermore, a logistic regression model was built using optimal features selected by a CFS-based feature selection method. Using a leave-one-case-out cross-validation method, classification yielded an overall 75% accuracy in predicting the improvement (or downgrade) of diagnostic status (to BIRADS-1) in the subsequent breast MRI screening. This study demonstrated the potential of developing a new quantitative imaging marker to predict diagnostic status change in the short term, which may help eliminate a high fraction of unnecessary repeated breast MRI screenings and increase the cancer detection yield.
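
    A rough sketch of the pipeline's flavour, not the study's exact 33 features or CFS selection step: grey-level co-occurrence texture features per case, followed by a logistic regression evaluated with leave-one-case-out cross-validation. Images and labels below are random placeholders.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      # GLCM texture features per case + logistic regression with
      # leave-one-case-out cross-validation (illustrative placeholders).

      def glcm_features(image_8bit):
          glcm = graycomatrix(image_8bit, distances=[1], angles=[0],
                              levels=256, symmetric=True, normed=True)
          return [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

      rng = np.random.default_rng(0)
      images = rng.integers(0, 256, size=(65, 64, 64), dtype=np.uint8)  # placeholders
      labels = rng.integers(0, 2, size=65)                              # placeholders

      X = np.array([glcm_features(img) for img in images])
      acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels,
                            cv=LeaveOneOut()).mean()
      print(f"leave-one-case-out accuracy: {acc:.2f}")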

  7. Numerical Analysis of Incipient Separation on 53 Deg Swept Diamond Wing

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.

    2015-01-01

    A systematic analysis of incipient separation and subsequent vortex formation from moderately swept blunt leading edges is presented for a 53 deg swept diamond wing. This work contributes to a collective body of knowledge generated within the NATO/STO AVT-183 Task Group titled 'Reliable Prediction of Separated Flow Onset and Progression for Air and Sea Vehicles'. The objective is to extract insights from the experimentally measured and numerically computed flow fields that might enable turbulence experts to further improve their models for predicting swept blunt leading-edge flow separation. Details of vortex formation are inferred from numerical solutions after establishing a good correlation of the global flow field and surface pressure distributions between wind tunnel measurements and computed flow solutions. From this, significant and sometimes surprising insights into the nature of incipient separation and part-span vortex formation are derived from the wealth of information available in the computational solutions.

  8. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
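
    For a small example network, the cut-set idea can be illustrated by brute force (the patent's contribution is an efficient search that avoids this exhaustive enumeration); only link failures are considered, as in the method described.

      import itertools
      import networkx as nx

      # Brute-force illustration of minimal cut sets and a first-order
      # unreliability estimate for a toy network; a stand-in for the
      # patent's efficient search algorithm, link failures only.

      G = nx.cycle_graph(5)                       # toy 5-node ring network
      p_fail = 0.05                               # independent link failure prob.

      edges = list(G.edges())
      min_cuts = []
      for r in range(1, len(edges) + 1):
          for combo in itertools.combinations(edges, r):
              H = G.copy()
              H.remove_edges_from(combo)
              if not nx.is_connected(H):
                  # keep only minimal cut sets (no subset already recorded)
                  if not any(set(c) <= set(combo) for c in min_cuts):
                      min_cuts.append(combo)

      # Rare-event (first-order) approximation of network failure probability.
      approx_unreliability = sum(p_fail ** len(c) for c in min_cuts)
      print(len(min_cuts), "minimal cut sets; P(disconnect) ≈", approx_unreliability)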

  9. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
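
    A minimal scikit-learn sketch of the emulation step: fit a Gaussian process to a handful of simulation runs (a stand-in function replaces the agent-based model) and probe parameter influence cheaply on the emulator. Parameter names are illustrative.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Sketch of the surrogate idea: fit a Gaussian process to simulation
      # runs (stand-in function), then probe parameter influence cheaply.

      def simulator(theta):                      # stand-in for the ABM output
          growth, evasion = theta
          return np.sin(3.0 * growth) + 0.3 * evasion

      rng = np.random.default_rng(0)
      theta_train = rng.uniform(0.0, 1.0, size=(30, 2))
      y_train = np.array([simulator(t) for t in theta_train])

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                    normalize_y=True).fit(theta_train, y_train)

      # Crude sensitivity probe: vary one parameter, hold the other fixed.
      grid = np.linspace(0.0, 1.0, 50)
      for i, name in enumerate(["growth rate", "detection evasion"]):
          pts = np.full((50, 2), 0.5)
          pts[:, i] = grid
          mean = gp.predict(pts)
          print(name, "output range on emulator:", mean.max() - mean.min())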

  10. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
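
    The mode-counting step can be sketched very compactly: take the singular value decomposition of a local sensitivity matrix and count the singular values above an error tolerance. The matrix below is synthetic; the actual method applies this along state trajectories.

      import numpy as np

      # Sketch of counting dominant (active) dynamical modes from a local
      # sensitivity matrix via SVD; synthetic matrix for illustration.

      def n_active_modes(sensitivity: np.ndarray, rel_tol: float = 1e-3) -> int:
          s = np.linalg.svd(sensitivity, compute_uv=False)
          return int(np.sum(s > rel_tol * s[0]))

      rng = np.random.default_rng(0)
      # Synthetic 6-species sensitivity matrix with an effectively 2-D response.
      basis = rng.standard_normal((6, 2))
      S = basis @ rng.standard_normal((2, 6)) + 1e-6 * rng.standard_normal((6, 6))
      print("active modes:", n_active_modes(S))    # -> 2 for this example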

  11. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  12. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
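
    As a concrete example of the estimator family compared in the paper, the sketch below refines an FFT peak with a complex-ratio (Jacobsen/Candan-style) interpolation on the Fourier coefficients; it is a generic illustration, not the authors' implementation.

      import numpy as np

      # Single-tone frequency estimation: coarse FFT peak, then a
      # fractional-bin correction from neighbouring Fourier coefficients.
      # Generic illustration, not the authors' implementation.

      def estimate_tone(signal: np.ndarray, fs: float) -> float:
          X = np.fft.fft(signal)
          n = len(signal)
          k = int(np.argmax(np.abs(X[: n // 2])))          # coarse peak bin
          num = X[k - 1] - X[k + 1]
          den = 2.0 * X[k] - X[k - 1] - X[k + 1]
          delta = (num / den).real                          # fractional offset
          return (k + delta) * fs / n

      fs, f_true = 1000.0, 123.4567
      t = np.arange(2048) / fs
      tone = np.exp(2j * np.pi * f_true * t)                # complex exponential
      print(estimate_tone(tone, fs))                        # close to 123.4567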

  13. A Technical Review of Cellular Radio and Analysis of a Possible Protocol

    DTIC Science & Technology

    1992-09-01

    Only fragmentary text is available for this record: table-of-contents entries ("The Pioneers"; "Time line of Radio Evolution") and excerpts discussing the cellular telephone, advances in low-power radio transmission, the speed with which modern computers can aid in frequency management and signal processing, and biographical notes on an early radio pioneer who lectured at the Royal Institution in London and devoted increasing amounts of time to experiments.

  14. Multivariate interactive digital analysis system /MIDAS/ - A new fast multispectral recognition system

    NASA Technical Reports Server (NTRS)

    Kriegler, F.; Marshall, R.; Lampert, S.; Gordon, M.; Cornell, C.; Kistler, R.

    1973-01-01

    The MIDAS system is a prototype, multiple-pipeline digital processor mechanizing the multivariate-Gaussian, maximum-likelihood decision algorithm operating at 200,000 pixels/second. It incorporates displays and film printer equipment under control of a general purpose midi-computer and possesses sufficient flexibility that operational versions of the equipment may be subsequently specified as subsets of the system.

  15. Validation of the Australian diagnostic reference levels for paediatric multi detector computed tomography: a comparison of RANZCR QUDI data and subsequent NDRLS data from 2012 to 2015.

    PubMed

    Anna, Hayton; Wallace, Anthony; Thomas, Peter

    2017-03-01

    The national diagnostic reference level service (NDRLS) was launched in 2011; however, no paediatric data were submitted during its first calendar year of operation. As such, Australian national diagnostic reference levels (DRLs) for paediatric multi detector computed tomography (MDCT) were established using data obtained from a Royal Australian and New Zealand College of Radiologists (RANZCR) Quality Use of Diagnostic Imaging (QUDI) study. Paediatric data were submitted to the NDRLS in 2012 through 2015. The paediatric NDRLS data have been analysed using the same method as was used to analyse the QUDI data when establishing the Australian national paediatric DRLs for MDCT, and also using the method used to calculate the Australian national adult DRLs for MDCT. A comparison between the QUDI data and the subsequent NDRLS data shows the NDRLS data to be lower on average for the Head and AbdoPelvis protocols and similar for the Chest protocol. Using an average of the NDRLS data submitted between 2012 and 2015, implications for updated paediatric DRLs are considered.

  16. A theoretical method for the analysis and design of axisymmetric bodies. [flow distribution and incompressible fluids

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.

    1975-01-01

    A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects, in the form of displacement thickness and empirically modeled separation streamlines, are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag, allowing configuration drag to be computed.

  17. Summary and Statistical Analysis of the First AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Morgenstern, John M.

    2014-01-01

    A summary is provided for the First AIAA Sonic Boom Workshop held 11 January 2014 in conjunction with AIAA SciTech 2014. Near-field pressure signatures extracted from computational fluid dynamics solutions are gathered from nineteen participants representing three countries for the two required cases, an axisymmetric body and a simple delta wing body. Structured multiblock, unstructured mixed-element, unstructured tetrahedral, overset, and Cartesian cut-cell methods are used by the participants. Participants provided signatures computed on participant-generated and solution-adapted grids. Signatures are also provided for a series of uniformly refined workshop-provided grids. These submissions are propagated to the ground and loudness measures are computed. This allows the grid convergence of a loudness measure and a validation metric (difference norm between computed and wind tunnel measured near-field signatures) to be studied for the first time. Statistical analysis is also presented for these measures. An optional configuration includes fuselage, wing, tail, flow-through nacelles, and blade sting. This full configuration exhibits more variation across its eleven submissions than the sixty submissions provided for each required case. Recommendations are provided for potential improvements to the analysis methods and a possible subsequent workshop.

  18. A Computational and Experimental Investigation of Shear Coaxial Jet Atomization

    NASA Technical Reports Server (NTRS)

    Ibrahim, Essam A.; Kenny, R. Jeremy; Walker, Nathan B.

    2006-01-01

    The instability and subsequent atomization of a viscous liquid jet emanating into high-pressure gaseous surroundings are studied both computationally and experimentally. Liquid water issued into nitrogen gas at elevated pressures is used to simulate the flow conditions in a coaxial shear injector element relevant to liquid propellant rocket engines. The theoretical analysis is based on a simplified mathematical formulation of the continuity and momentum equations in their conservative form. Numerical solutions of the governing equations subject to appropriate initial and boundary conditions are obtained via a robust finite difference scheme. The computations yield the real-time evolution and subsequent breakup characteristics of the liquid jet. The experimental investigation utilizes a digital imaging technique to measure resultant drop sizes. Data were collected for liquid Reynolds numbers between 2,500 and 25,000, an aerodynamic Weber number range of 50-500, and ambient gas pressures from 150 to 1200 psia. Comparison of the model predictions and experimental data for drop sizes at gas pressures of 150 and 300 psia reveals satisfactory agreement, particularly for the lower values of the investigated Weber number range. The present model is intended as a component of a practical tool to facilitate design and optimization of coaxial shear atomizers.

  19. Faster sequence homology searches by clustering subsequences.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2015-04-15

    Sequence homology searches are used in various fields. New sequencing technologies produce huge amounts of sequence data, which continuously increase the size of sequence databases. As a result, homology searches require large amounts of computational time, especially for metagenomic analysis. We developed a fast homology search method based on database subsequence clustering, and implemented it as GHOSTZ. This method clusters similar subsequences from a database to perform an efficient seed search and ungapped extension by reducing alignment candidates based on the triangle inequality. The database subsequence clustering technique achieved an ∼2-fold increase in speed without a large decrease in search sensitivity. When measured with metagenomic data, GHOSTZ was ∼2.2-2.8 times faster than RAPSearch and ∼185-261 times faster than BLASTX. The source code is freely available for download at http://www.bi.cs.titech.ac.jp/ghostz/. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
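
    The pruning idea can be illustrated independently of GHOSTZ: if the distances from a query to a cluster representative and from the representative to each member are known, the triangle inequality gives a lower bound on the query-to-member distance, so many members can be skipped without an exact comparison. The toy Python sketch below uses hypothetical distances, not the GHOSTZ data structures.

    import numpy as np

    def prune_candidates(d_query_rep, d_rep_members, threshold):
        """Triangle-inequality pruning: a member can only be within `threshold`
        of the query if |d(query, rep) - d(rep, member)| <= threshold, so all
        other members are discarded without computing their exact distance."""
        lower_bounds = np.abs(d_query_rep - d_rep_members)
        return np.where(lower_bounds <= threshold)[0]

    # Illustrative distances of cluster members to their representative.
    d_rep_members = np.array([0.2, 1.5, 3.1, 0.9, 2.4])
    survivors = prune_candidates(d_query_rep=2.0, d_rep_members=d_rep_members, threshold=0.8)
    print(survivors)   # indices of members that still need an exact comparison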

  20. Modeling cation/anion-water interactions in functional aluminosilicate structures.

    PubMed

    Richards, A J; Barnes, P; Collins, D R; Christodoulos, F; Clark, S M

    1995-02-01

    A need for the computer simulation of hydration/dehydration processes in functional aluminosilicate structures has been noted. Full and realistic simulations of these systems can be somewhat ambitious and require the aid of interactive computer graphics to identify key structural/chemical units, both in the devising of suitable water-ion simulation potentials and in the analysis of hydrogen-bonding schemes in the subsequent simulation studies. In this article, the former is demonstrated by the assembling of a range of essential water-ion potentials. These span the range of formal charges from +4e to -2e, and are evaluated in the context of three types of structure: a porous zeolite, calcium silicate cement, and layered clay. As an example of the latter, the computer graphics output from Monte Carlo computer simulation studies of hydration/dehydration in calcium-zeolite A is presented.

  1. Computational mass spectrometry for small molecules

    PubMed Central

    2013-01-01

    The identification of small molecules remains a major challenge in the interpretation of mass spectrometry (MS) data. This review covers the computational aspects of identifying small molecules, from identifying a compound by searching a reference spectral library to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review briefly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline. PMID:23453222

  2. Preliminary design methods for fiber reinforced composite structures employing a personal computer

    NASA Technical Reports Server (NTRS)

    Eastlake, C. N.

    1986-01-01

    The objective of this project was to develop a user-friendly interactive computer program to be used as an analytical tool by structural designers. Its intent was to do preliminary, approximate stress analysis to help select or verify sizing choices for composite structural members. The approach to the project was to provide a subroutine which uses classical lamination theory to predict an effective elastic modulus for a laminate of arbitrary material and ply orientation. This effective elastic modulus can then be used in a family of other subroutines which employ the familiar basic structural analysis methods for isotropic materials. This method is simple and convenient to use but only approximate, as is appropriate for a preliminary design tool which will be subsequently verified by more sophisticated analysis. Additional subroutines have been provided to calculate laminate coefficient of thermal expansion and to calculate ply-by-ply strains within a laminate.
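
    A compressed sketch of the kind of calculation such a subroutine performs is given below: classical lamination theory is used to assemble the in-plane stiffness matrix of a laminate and derive an effective axial modulus. The material values, layup, and function names are illustrative and do not reproduce the program described.

    import numpy as np

    def qbar(E1, E2, G12, v12, theta_deg):
        """Transformed reduced stiffness matrix of one ply (classical lamination theory)."""
        v21 = v12 * E2 / E1
        d = 1.0 - v12 * v21
        Q11, Q22, Q12, Q66 = E1 / d, E2 / d, v12 * E2 / d, G12
        c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
        Qb = np.empty((3, 3))
        Qb[0, 0] = Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4
        Qb[1, 1] = Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4
        Qb[0, 1] = Qb[1, 0] = (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4)
        Qb[0, 2] = Qb[2, 0] = (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c
        Qb[1, 2] = Qb[2, 1] = (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3
        Qb[2, 2] = (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4)
        return Qb

    def effective_ex(plies):
        """Effective laminate modulus in x from the in-plane stiffness matrix A."""
        h = sum(t for *_, t in plies)
        A = sum(qbar(E1, E2, G12, v12, th) * t for E1, E2, G12, v12, th, t in plies)
        return 1.0 / (h * np.linalg.inv(A)[0, 0])

    # Illustrative carbon/epoxy plies: (E1, E2, G12, v12) in MPa, angle in degrees, thickness in mm.
    ply = (140e3, 10e3, 5e3, 0.3)
    layup = [ply + (th, 0.125) for th in (0, 45, -45, 90, 90, -45, 45, 0)]
    print(f"effective Ex = {effective_ex(layup):.0f} MPa")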

  3. Interior thermal insulation systems for historical building envelopes

    NASA Astrophysics Data System (ADS)

    Jerman, Miloš; Solař, Miloš; Černý, Robert

    2017-11-01

    The design specifics of interior thermal insulation systems applied to historical building envelopes are described. Vapor-tight systems and systems based on capillary thermal insulation materials are taken into account as two basic options differing in building-physical considerations. The possibilities of hygrothermal analysis of renovated historical envelopes, including laboratory methods, computer simulation techniques, and in-situ tests, are discussed. It is concluded that the application of computational models for hygrothermal assessment of interior thermal insulation systems should always be performed with particular care. On the one hand, they present a very effective tool for both service life assessment and the possible planning of subsequent reconstructions. On the other hand, the hygrothermal analysis of any historical building can involve quite a few potential uncertainties which may negatively affect the accuracy of the obtained results.

  4. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bhanu, Bir

    1992-01-01

    Recent work on INS integrated motion analysis is described. Results were obtained with a maximally passive system of obstacle detection (OD) for ground-based vehicles and rotorcraft. The OD approach involves motion analysis of imagery acquired by a passive sensor in the course of vehicle travel to generate range measurements to world points within the sensor FOV. INS data and scene analysis results are used to enhance interest point selection, the matching of the interest points, and the subsequent motion-based computations, tracking, and OD. The most important lesson learned from the research described here is that the incorporation of inertial data into the motion analysis program greatly improves the analysis and makes the process more robust.

  5. Descriptive and Criterion-Referenced Self-Assessment with L2 Readers

    ERIC Educational Resources Information Center

    Brantmeier, Cindy; Vanderplank, Robert

    2008-01-01

    Brantmeier [Brantmeier, C., 2006. "Advanced L2 learners and reading placement: self-assessment, computer-based testing, and subsequent performance." System 34(1), 15-35] found that self-assessment (SA) of second language (L2) reading ability is not an accurate predictor for computer-based testing or subsequent classroom performance. With 359…

  6. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.

  7. Computational techniques for design optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.

  8. Computerized image analysis of cell-cell interactions in human renal tissue by using multi-channel immunofluorescent confocal microscopy

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Jiang, Yulei; Liarski, Vladimir M.; Kaverina, Natalya; Clark, Marcus R.; Giger, Maryellen L.

    2012-03-01

    Analysis of interactions between B and T cells in tubulointerstitial inflammation is important for understanding human lupus nephritis. We developed a computer technique to perform this analysis, and compared it with manual analysis. Multi-channel immunofluorescent-microscopy images were acquired from 207 regions of interest in 40 renal tissue sections of 19 patients diagnosed with lupus nephritis. Fresh-frozen renal tissue sections were stained with combinations of immunofluorescent antibodies to membrane proteins and counter-stained with a cell nuclear marker. Manual delineation of the antibodies was considered as the reference standard. We first segmented cell nuclei and cell membrane markers, and then determined the corresponding cell types based on the distances between cell nuclei and specific cell-membrane marker combinations. Subsequently, the distribution of the shortest distance from T cell nuclei to B cell nuclei was obtained and used as a surrogate indicator of cell-cell interactions. The computer and manual analysis results were concordant. The average absolute difference between the computer and manual analysis results was 1.1 ± 1.2% in the number of cell-cell distances of 3 μm or less, expressed as a percentage of the total number of cell-cell distances. Our computerized analysis of cell-cell distances could be used as a surrogate for quantifying cell-cell interactions, either as an automated and quantitative analysis or for independent confirmation of manual analysis.

  9. [Application of virtual instrumentation technique in toxicological studies].

    PubMed

    Moczko, Jerzy A

    2005-01-01

    Research investigations frequently require direct connection of measuring equipment to the computer. The virtual instrumentation technique considerably facilitates the programming of sophisticated acquisition-and-analysis procedures. In the standard approach these two steps are performed subsequently, with separate software tools: the acquired data are transferred with the export/import procedures of one particular program to another one, which executes the next step of the analysis. The described procedure is cumbersome, time consuming and may be a potential source of errors. In 1987 National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. Contrary to conventional textual languages, it allows the researcher to concentrate on the problem being solved and omit all syntactical rules. Programs developed in LabVIEW are called virtual instruments (VIs) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA-RISC workstations. This flexibility warrants that programs prepared for one particular platform are also appropriate for another. In the present paper, the basic principles of connecting research equipment to computer systems are described.

  10. Molecular modeling study of the differential ligand-receptor interaction at the μ, δ and κ opioid receptors

    NASA Astrophysics Data System (ADS)

    Filizola, Marta; Carteni-Farina, Maria; Perez, Juan J.

    1999-07-01

    3D models of the opioid receptors μ, δ and κ were constructed using BUNDLE, an in-house program to build de novo models of G-protein coupled receptors at the atomic level. Once the three opioid receptors were constructed, and before any energy refinement, the models were assessed for their compatibility with the results available from point-site mutations carried out on these receptors. In a subsequent step, three antagonists, each selective for one of the three receptors (naltrindole, naltrexone and nor-binaltorphamine), were docked onto each of the three receptors and subsequently energy minimized. The nine resulting complexes were checked for their ability to explain known results of structure-activity studies. Once the models were validated, the distances between different residues of the receptors and the ligands were computed. This analysis permitted us to identify key residues tentatively involved in direct interaction with the ligand.

  11. Statistical significance of task related deep brain EEG dynamic changes in the time-frequency domain.

    PubMed

    Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P

    2013-01-01

    We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas.
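
    As a schematic of the relative power change computation, the Python sketch below estimates band power in a baseline epoch and an event epoch with Welch's method and returns their relative difference; the frequency band, sampling rate, and data are placeholders rather than the authors' processing chain.

    import numpy as np
    from scipy.signal import welch

    def relative_power_change(baseline, event, fs, band=(8.0, 12.0)):
        """Relative band-power change (ERD/ERS) of an event epoch with respect
        to a pre-stimulus baseline epoch: (P_event - P_baseline) / P_baseline."""
        def band_power(x):
            f, p = welch(x, fs=fs, nperseg=min(len(x), 256))
            mask = (f >= band[0]) & (f <= band[1])
            return p[mask].sum() * (f[1] - f[0])   # rectangle-rule band power
        p_base, p_event = band_power(baseline), band_power(event)
        return (p_event - p_base) / p_base         # negative: ERD, positive: ERS

    fs = 512.0
    rng = np.random.default_rng(1)
    baseline = rng.normal(size=int(fs))            # 1 s pre-stimulus epoch
    event = 0.5 * rng.normal(size=int(fs))         # attenuated activity after the stimulus
    print(relative_power_change(baseline, event, fs))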

  12. Transferring data from an oscilloscope to an IBM using an Apple II+

    NASA Technical Reports Server (NTRS)

    Miller, D. L.; Frenklach, M. Y.; Laughlin, P. J.; Clary, D. W.

    1984-01-01

    A set of PASCAL programs permitting the use of a laboratory microcomputer to facilitate and control the transfer of data from a digital oscilloscope (used with photomultipliers in experiments on soot formation in hydrocarbon combustion) to a mainframe computer and the subsequent mainframe processing of these data is presented. Advantages of this approach include the possibility of on-line computations, transmission flexibility, automatic transfer and selection, increased capacity and analysis options (such as smoothing, averaging, Fourier transformation, and high-quality plotting), and more rapid availability of results. The hardware and software are briefly characterized, the programs are discussed, and printouts of the listings are provided.

  13. Analysis of pressure-flow data in terms of computer-derived urethral resistance parameters.

    PubMed

    van Mastrigt, R; Kranse, M

    1995-01-01

    The simultaneous measurement of detrusor pressure and flow rate during voiding is at present the only way to measure or grade infravesical obstruction objectively. Numerous methods have been introduced to analyze the resulting data. These methods differ in aim (measurement of urethral resistance and/or diagnosis of obstruction), method (manual versus computerized data processing), theory or model used, and resolution (continuously variable parameters or a limited number of classes, the so-called nomogram). In this paper, some aspects of these fundamental differences are discussed and illustrated. Subsequently, the properties and clinical performance of two computer-based methods for deriving continuous urethral resistance parameters are treated.

  14. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  15. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4-15.9 times faster, while Unphased jobs ran 1.1-18.6 times faster, compared with the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
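
    A minimal sketch of how such window jobs might be farmed out to a Grid Engine queue is shown below; the analysis command is a placeholder, since the actual FBAT/Unphased invocations and input files are not specified here.

    import subprocess
    from pathlib import Path

    # Hypothetical command template; the real FBAT/Unphased calls are not given here.
    COMMAND = "run_haplotype_window --start {start} --width {width}"

    def submit_window_jobs(n_loci, width, workdir="jobs"):
        """Write one Grid Engine job script per sliding haplotype window and
        submit it with qsub, so windows are analysed in parallel on the cluster."""
        Path(workdir).mkdir(exist_ok=True)
        for start in range(1, n_loci - width + 2):
            script = Path(workdir) / f"window_{start}_{width}.sh"
            script.write_text(
                "#!/bin/sh\n"
                "#$ -cwd\n"                       # run in the submission directory
                f"#$ -N win_{start}_{width}\n"    # job name
                + COMMAND.format(start=start, width=width) + "\n"
            )
            subprocess.run(["qsub", str(script)], check=True)

    # Example: all 3-locus consecutive windows over 26 loci.
    # submit_window_jobs(n_loci=26, width=3)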

  16. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4–15.9 times faster, while Unphased jobs ran 1.1–18.6 times faster, compared with the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  17. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  18. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  19. Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing

    NASA Technical Reports Server (NTRS)

    Pitone, D. S.; Klein, J. R.

    1989-01-01

    Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.

  20. Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing

    NASA Technical Reports Server (NTRS)

    Pitone, D. S.; Klein, J. R.; Twambly, B. J.

    1990-01-01

    Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X-ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.
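
    The forward computation mentioned above reduces, in its simplest geometric form, to the great-circle angle between the target direction and the Sun direction. The sketch below computes that separation from right ascension and declination; the coordinates are illustrative and this is not the SMM operational code.

    import numpy as np

    def angular_separation(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
        """Great-circle angle, in degrees, between two directions given by
        right ascension and declination."""
        ra1, dec1, ra2, dec2 = np.radians([ra1_deg, dec1_deg, ra2_deg, dec2_deg])
        cos_sep = (np.sin(dec1) * np.sin(dec2)
                   + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
        return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

    # Illustrative values only: a target near the Crab Nebula vs. an assumed Sun position.
    print(angular_separation(83.6, 22.0, 90.0, -23.4))   # off-Sun angle in degrees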

  1. Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild

    2010-08-01

    The aim of this study was to investigate the predictive value of a computer-based video analysis of the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31wks, SD 6wks; range 23-42wks) between 10 and 15 weeks post term when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP, and for whom information about functional level was available, were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.
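
    A simplified version of the motion variables can be sketched as follows: the centroid of the absolute frame-to-frame difference image is tracked over time and its variability summarised in a single number. The synthetic clip and the exact variability measure are placeholders; the published method's variables differ in detail.

    import numpy as np

    def centroid_of_motion_variability(frames):
        """Variability of the spatial centroid of motion, where motion is
        approximated by absolute differences between subsequent video frames."""
        centroids = []
        for prev, curr in zip(frames[:-1], frames[1:]):
            diff = np.abs(curr.astype(float) - prev.astype(float))
            total = diff.sum()
            if total == 0:
                continue                                   # no motion between these frames
            ys, xs = np.indices(diff.shape)
            centroids.append((np.sum(xs * diff) / total, np.sum(ys * diff) / total))
        centroids = np.array(centroids)
        return centroids.std(axis=0).mean()                # one scalar variability measure

    # Illustrative use on a synthetic grayscale clip (time, height, width).
    rng = np.random.default_rng(2)
    clip = rng.integers(0, 255, size=(50, 64, 64), dtype=np.uint8)
    print(centroid_of_motion_variability(clip))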

  2. Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.

    NASA Astrophysics Data System (ADS)

    Feldman, Michael Robert

    Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.

  3. Corra: Computational framework and tools for LC-MS discovery and targeted mass spectrometry-based proteomics

    PubMed Central

    Brusniak, Mi-Youn; Bodenmiller, Bernd; Campbell, David; Cooke, Kelly; Eddes, James; Garbutt, Andrew; Lau, Hollis; Letarte, Simon; Mueller, Lukas N; Sharma, Vagisha; Vitek, Olga; Zhang, Ning; Aebersold, Ruedi; Watts, Julian D

    2008-01-01

    Background Quantitative proteomics holds great promise for identifying proteins that are differentially abundant between populations representing different physiological or disease states. A range of computational tools is now available for both isotopically labeled and label-free liquid chromatography mass spectrometry (LC-MS) based quantitative proteomics. However, they are generally not comparable to each other in terms of functionality, user interfaces, information input/output, and do not readily facilitate appropriate statistical data analysis. These limitations, along with the array of choices, present a daunting prospect for biologists, and other researchers not trained in bioinformatics, who wish to use LC-MS-based quantitative proteomics. Results We have developed Corra, a computational framework and tools for discovery-based LC-MS proteomics. Corra extends and adapts existing algorithms used for LC-MS-based proteomics, and statistical algorithms, originally developed for microarray data analyses, appropriate for LC-MS data analysis. Corra also adapts software engineering technologies (e.g. Google Web Toolkit, distributed processing) so that computationally intense data processing and statistical analyses can run on a remote server, while the user controls and manages the process from their own computer via a simple web interface. Corra also allows the user to output significantly differentially abundant LC-MS-detected peptide features in a form compatible with subsequent sequence identification via tandem mass spectrometry (MS/MS). We present two case studies to illustrate the application of Corra to commonly performed LC-MS-based biological workflows: a pilot biomarker discovery study of glycoproteins isolated from human plasma samples relevant to type 2 diabetes, and a study in yeast to identify in vivo targets of the protein kinase Ark1 via phosphopeptide profiling. Conclusion The Corra computational framework leverages computational innovation to enable biologists or other researchers to process, analyze and visualize LC-MS data with what would otherwise be a complex and not user-friendly suite of tools. Corra enables appropriate statistical analyses, with controlled false-discovery rates, ultimately to inform subsequent targeted identification of differentially abundant peptides by MS/MS. For the user not trained in bioinformatics, Corra represents a complete, customizable, free and open source computational platform enabling LC-MS-based proteomic workflows, and as such, addresses an unmet need in the LC-MS proteomics field. PMID:19087345

  4. The electromagnetic force between two moving charges

    NASA Astrophysics Data System (ADS)

    Minkin, Leonid; Shapovalov, Alexander S.

    2018-05-01

    A simple model of parallel motion of two point charges and the subsequent analysis of the electromagnetic field transformation invariant quantity are considered. It is shown that ignoring the coupling of electric and magnetic fields, as is done in some introductory physics books, can lead to miscalculations of the force between moving charges. Conceptual and computational aspects of these issues are discussed, and implications to the design of electron beam devices are considered.

  5. A comparison of computer-assisted detection (CAD) programs for the identification of colorectal polyps: performance and sensitivity analysis, current limitations and practical tips for radiologists.

    PubMed

    Bell, L T O; Gandhi, S

    2018-06-01

    To directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain the sensitivity and positive predictive value (PPV) for colorectal polyps. The time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, a PPV of 17.6%, and a mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, a PPV of 44.0%, and a mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and the CAD analysis times can vary widely between current commercially available CAD programs, and there is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, so further developments should aim to optimise both. Information on these factors should be made routinely available, so that an informed choice on their use can be made. This information could also potentially influence the radiologist's use of CAD results. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
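
    The two reported accuracy figures follow directly from per-polyp counts. The sketch below shows the arithmetic with hypothetical counts chosen only so that the ratios reproduce CAD1's reported 89.8% sensitivity and 17.6% PPV; the study's actual counts are not given here.

    def sensitivity_and_ppv(true_positives, false_negatives, false_positives):
        """Per-polyp sensitivity and positive predictive value from CAD counts
        scored against the endoscopy reference standard."""
        sensitivity = true_positives / (true_positives + false_negatives)
        ppv = true_positives / (true_positives + false_positives)
        return sensitivity, ppv

    # Hypothetical counts for illustration only (not the study data).
    sens, ppv = sensitivity_and_ppv(true_positives=44, false_negatives=5, false_positives=206)
    print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")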

  6. Computational Approaches to Phenotyping

    PubMed Central

    Lussier, Yves A.; Liu, Yang

    2007-01-01

    The recent completion of the Human Genome Project has made possible a high-throughput “systems approach” for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype–phenotype associations, or “phenomic associations.” The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies. PMID:17202287

  7. A NASTRAN-based computer program for structural dynamic analysis of Horizontal Axis Wind Turbines

    NASA Technical Reports Server (NTRS)

    Lobitz, Don W.

    1995-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWT's). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower end rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWT's driven by turbulent winds.
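
    The implicit time integration mentioned above can be written compactly. The sketch below shows one Newmark-Beta step for a generic damped linear system with illustrative matrices; it is a textbook formulation, not the program's NASTRAN-coupled implementation.

    import numpy as np

    def newmark_beta_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
        """One implicit Newmark-beta step for the system M u'' + C u' + K u = f(t)."""
        c0, c1 = 1.0 / (beta * dt**2), gamma / (beta * dt)
        K_eff = K + c1 * C + c0 * M
        rhs = (f_next
               + M @ (c0 * u + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (c1 * u + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_next = np.linalg.solve(K_eff, rhs)
        a_next = c0 * (u_next - u) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
        return u_next, v_next, a_next

    # Illustrative 2-DOF system with light damping and a constant load.
    M = np.eye(2)
    K = np.array([[4.0, -2.0], [-2.0, 4.0]])
    C = 0.1 * K
    f = np.array([1.0, 0.0])
    u, v = np.zeros(2), np.zeros(2)
    a = np.linalg.solve(M, f - C @ v - K @ u)      # consistent initial acceleration
    for _ in range(2000):
        u, v, a = newmark_beta_step(M, C, K, u, v, a, f_next=f, dt=0.05)
    print(u)   # settles toward the static solution, inv(K) @ f = [1/3, 1/6]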

  8. Liquid Microjunction Surface Sampling Probe Fluid Dynamics: Computational and Experimental Analysis of Coaxial Intercapillary Positioning Effects on Sample Manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ElNaggar, Mariam S; Barbier, Charlotte N; Van Berkel, Gary J

    A coaxial geometry liquid microjunction surface sampling probe (LMJ-SSP) enables direct extraction of analytes from surfaces for subsequent analysis by techniques like mass spectrometry. Solution dynamics at the probe-to-sample surface interface in the LMJ-SSP has been suspected to influence sampling efficiency and dispersion but has not been rigorously investigated. The effect on flow dynamics and analyte transport to the mass spectrometer caused by coaxial retraction of the inner and outer capillaries from each other and the surface during sampling with a LMJ-SSP was investigated using computational fluid dynamics and experimentation. A transparent LMJ-SSP was constructed to provide the means for visual observation of the dynamics of the surface sampling process. Visual observation, computational fluid dynamics (CFD) analysis, and experimental results revealed that inner capillary axial retraction from the flush position relative to the outer capillary transitioned the probe from a continuous sampling and injection mode through an intermediate regime to a sample plug formation mode caused by eddy currents at the sampling end of the probe. The potential for analytical implementation of these newly discovered probe operational modes is discussed.

  9. Multisensor system for tunnel inspection

    NASA Astrophysics Data System (ADS)

    Idoux, Maurice

    2005-01-01

    The system is aimed at assisting inspection and monitoring of the degradation of tunnels in order to minimize maintenance and repair time. ATLAS 70 is a complete sensors/software package which enables thorough diagnosis of tunnel wall conditions. The data collected locally are stored on a computer hard disk for subsequent analysis in a remote location via elaborate dedicated software. The sensors and local computer are loaded onto a rail and/or road vehicle of specific design, i.e. with even travelling speed of 2 to 5 km/h. Originally, the system has been developed for the Paris Underground Company and has since been applied to rail and road tunnels, large town sewage systems, clean water underground aqueducts and electric cable tunnels.

  10. Investigation of methods to search for the boundaries on the image and their use on lung hardware of methods finding saliency map

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.

    2015-05-01

    This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest because complex techniques must run on portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed method of saliency map detection is based on both image-space and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail considered in this paper demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
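
    As a representative frequency-domain baseline (not necessarily the method proposed in the paper), the sketch below computes a spectral-residual-style saliency map from the log-amplitude spectrum of a grayscale image; the test image is synthetic.

    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray):
        """Frequency-domain saliency map: the 'spectral residual' of the
        log-amplitude spectrum, transformed back to the image domain."""
        spectrum = np.fft.fft2(gray.astype(float))
        log_amplitude = np.log(np.abs(spectrum) + 1e-8)
        phase = np.angle(spectrum)
        residual = log_amplitude - uniform_filter(log_amplitude, size=3)
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        saliency = gaussian_filter(saliency, sigma=2.5)
        return saliency / saliency.max()

    # Illustrative use on a synthetic image with a small bright square.
    img = np.zeros((128, 128))
    img[60:68, 60:68] = 1.0
    sal = spectral_residual_saliency(img)
    print(np.unravel_index(np.argmax(sal), sal.shape))   # peak lies in/near the bright square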

  11. Analysis of Protein Kinetics Using Fluorescence Recovery After Photobleaching (FRAP).

    PubMed

    Giakoumakis, Nickolaos Nikiforos; Rapsomaniki, Maria Anna; Lygerou, Zoi

    2017-01-01

    Fluorescence recovery after photobleaching (FRAP) is a cutting-edge live-cell functional imaging technique that enables the exploration of protein dynamics in individual cells and thus permits the elucidation of protein mobility, function, and interactions at a single-cell level. During a typical FRAP experiment, fluorescent molecules in a defined region of interest within the cell are bleached by a short and powerful laser pulse, while the recovery of the fluorescence in the region is monitored over time by time-lapse microscopy. FRAP experimental setup and image acquisition involve a number of steps that need to be carefully executed to avoid technical artifacts. Equally important is the subsequent computational analysis of FRAP raw data, to derive quantitative information on protein diffusion and binding parameters. Here we present an integrated in vivo and in silico protocol for the analysis of protein kinetics using FRAP. We focus on the most commonly encountered challenges and technical or computational pitfalls and their troubleshooting so that valid and robust insight into protein dynamics within living cells is gained.
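
    The quantitative step often amounts to fitting a recovery model to the normalised intensity curve. The sketch below fits a single-exponential recovery and reports a mobile fraction and half-time; the model choice and data are illustrative, not the protocol's prescribed analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def frap_recovery(t, mobile_fraction, k):
        """Single-exponential FRAP recovery model (normalised intensity)."""
        return mobile_fraction * (1.0 - np.exp(-k * t))

    def fit_frap(times, intensities):
        """Fit the recovery curve and return the mobile fraction and half-time."""
        (mf, k), _ = curve_fit(frap_recovery, times, intensities, p0=(0.8, 0.1),
                               bounds=([0.0, 1e-6], [1.0, np.inf]))
        return mf, np.log(2.0) / k

    # Illustrative, noise-free synthetic recovery curve.
    t = np.linspace(0, 60, 61)           # seconds after bleaching
    y = frap_recovery(t, 0.75, 0.08)
    print(fit_frap(t, y))                # roughly (0.75, 8.7 s)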

  12. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

    Processing data communications events in a parallel active messaging interface ('PAMI') of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
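
    The wait/awaken pattern described in this abstract can be mimicked with a condition variable; the Python sketch below is a minimal analogue of an endpoint context and not IBM's PAMI implementation.

    import threading
    from collections import deque

    class Context:
        """Minimal analogue of an endpoint context whose advance() places its
        thread into a wait state when no events are pending and is awakened
        when a subsequent event is posted."""
        def __init__(self):
            self._events = deque()
            self._cv = threading.Condition()

        def post_event(self, event):
            with self._cv:
                self._events.append(event)
                self._cv.notify()              # awaken a thread waiting in advance()

        def advance(self):
            with self._cv:
                while not self._events:        # no actionable events pending
                    self._cv.wait()            # wait state until a subsequent event
                return self._events.popleft()  # process the now-pending event

    ctx = Context()
    worker = threading.Thread(target=lambda: print("processed:", ctx.advance()))
    worker.start()
    ctx.post_event("data arrived")
    worker.join()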

  13. Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Pirzadeh, S.

    1999-01-01

    A complete "geometry to drag-polar" analysis capability for the three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a work-station, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.

  14. Computational and experimental analysis of DNA shuffling

    PubMed Central

    Maheshri, Narendra; Schaffer, David V.

    2003-01-01

    We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764

  15. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in a substantial likelihood of lost or missed science when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  16. A Methodology and Analysis for Cost-Effective Training in the AN/TSQ-73 Missile Minder

    DTIC Science & Technology

    1978-02-01

    subsequent users must join the program in progress. 10. Language Laboratory - Audio, Active - Compare Mode - An audio presentational device that distributes... initial performance of the system, change inputs to or elements within the system and note changes in the performance of the system. 33. Teaching... Any contest, governed by rules, between teams or individuals, where the contest is a dynamic model of some real system, and a computer is used in

  17. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  18. A novel model for DNA sequence similarity analysis based on graph theory.

    PubMed

    Qi, Xingqin; Wu, Qin; Zhang, Yusen; Fuller, Eddie; Zhang, Cun-Quan

    2011-01-01

    Determination of sequence similarity is one of the major steps in computational phylogenetic studies. During evolutionary history, not only mutations of individual nucleotides but also subsequent rearrangements have occurred. It has been one of the major tasks of computational biologists to develop novel mathematical descriptors for similarity analysis such that information on the various mutation phenomena is captured simultaneously. In this paper, rather than using traditional bases for constructing mathematical descriptors (e.g., nucleotide frequency or geometric representations), we construct novel mathematical descriptors based on graph theory. In particular, for each DNA sequence we set up a weighted directed graph. The adjacency matrix of the directed graph is used to induce a representative vector for the DNA sequence. This new approach measures similarity based on both the ordering and the frequency of nucleotides, so that much more information is involved. As an application, the method is tested on a set of 0.9-kb mtDNA sequences of twelve different primate species. All output phylogenetic trees with various distance estimations have the same topology and are generally consistent with the reported results from earlier studies, which supports the new method's efficiency; we also test the new method on a simulated data set, which shows that it performs better than the traditional global alignment method when subsequent rearrangements happen frequently during evolutionary history.
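
    A hedged sketch of the descriptor idea follows: it builds a position-weighted nucleotide-transition matrix for each sequence and flattens it into a representative vector, so that both ordering and frequency contribute. The linear positional weighting is an assumption chosen for illustration and is not the paper's weighted directed graph construction.

```python
import numpy as np

NUC = {"A": 0, "C": 1, "G": 2, "T": 3}

def descriptor(seq):
    """Toy descriptor: weighted matrix of nucleotide transitions, with weights
    decaying by position so ordering (not just frequency) matters."""
    A = np.zeros((4, 4))
    n = len(seq)
    for i in range(n - 1):
        a, b = NUC.get(seq[i]), NUC.get(seq[i + 1])
        if a is None or b is None:
            continue                      # skip ambiguous bases
        A[a, b] += 1.0 - i / n            # earlier transitions weighted more heavily
    return A.flatten() / max(1, n - 1)

def distance(s1, s2):
    """Euclidean distance between the representative vectors of two sequences."""
    return float(np.linalg.norm(descriptor(s1) - descriptor(s2)))

print(distance("ACGTACGT", "ACGTTGCA"))
```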

  19. Computer-based video analysis identifies infants with absence of fidgety movements.

    PubMed

    Støen, Ragnhild; Songstad, Nils Thomas; Silberg, Inger Elisabeth; Fjørtoft, Toril; Jensenius, Alexander Refsum; Adde, Lars

    2017-10-01

    Background: Absence of fidgety movements (FMs) at 3 months' corrected age is a strong predictor of cerebral palsy (CP) in high-risk infants. This study evaluates the association between computer-based video analysis and the temporal organization of FMs assessed with the General Movement Assessment (GMA). Methods: Infants were eligible for this prospective cohort study if referred to a high-risk follow-up program in a participating hospital. Video recordings taken at 10-15 weeks post-term age were used for GMA and computer-based analysis. The variation of the spatial center of motion, derived from differences between subsequent video frames, was used for quantitative analysis. Results: Of 241 recordings from 150 infants, 48 (24.1%) were classified with absence of FMs or sporadic FMs using the GMA. The variation of the spatial center of motion (C_SD) during a recording was significantly lower in infants with normal (0.320; 95% confidence interval (CI) 0.309, 0.330) vs. absent or sporadic (0.380; 95% CI 0.361, 0.398) FMs (P<0.001). A triage model with C_SD thresholds chosen for a sensitivity of 90% and a specificity of 80% gave a 40% referral rate for GMA. Conclusion: Quantitative video analysis during the FMs period can be used to triage infants at high risk of CP to early intervention or observational GMA.
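
    A minimal sketch of the frame-differencing measure, assuming grayscale frames as NumPy arrays: the centroid of above-threshold pixel changes is tracked between consecutive frames, and the normalised standard deviation of that centroid is reported as a quantity loosely analogous to C_SD. The threshold and normalisation are illustrative choices, not the study's implementation.

```python
import numpy as np

def centroid_of_motion(frames, thresh=15):
    """For each pair of consecutive frames, threshold the absolute difference
    and return the centroid (row, col) of the changed pixels."""
    cents = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int)) > thresh
        ys, xs = np.nonzero(diff)
        if len(xs):
            cents.append((ys.mean(), xs.mean()))
    return np.array(cents)

def motion_variability(frames):
    """Variability of the spatial centre of motion, normalised by frame size."""
    c = centroid_of_motion(frames)
    h, w = frames[0].shape
    return float(np.mean(c.std(axis=0) / np.array([h, w]))) if len(c) else 0.0

# usage with synthetic frames standing in for a video recording
frames = [np.random.randint(0, 255, (120, 160), dtype=np.uint8) for _ in range(50)]
print(motion_variability(frames))
```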

  20. Ecological Footprint Analysis (EFA) for the Chicago ...

    EPA Pesticide Factsheets

    Because of its computational simplicity, Ecological Footprint Analysis (EFA) has been extensively deployed for assessing the sustainability of various environmental systems. In general, EFA aims at capturing the impacts of human activity on the environment by computing the amount of bioproductive land that can support population consumption and the concomitant generation of waste in any given area. Herein, we deploy EFA for assessing the sustainability of an urban system, specifically, the Chicago Metropolitan Area (CMA). We estimate the trend in EF for the CMA between 1990 and 2015 to determine if the metropolitan area is moving towards or away from sustainable development. At the outset of the estimation, we consider six categories of bioproductive land for the analysis, namely, energy, arable, forest, pasture, and built-up lands as well as lake area. In addition, we allocate the various items consumed and/or produced by the area’s population to one of these categories. Subsequently, we computed the CMA’s ecological demand, or footprint, by quantifying the amount per capita of each land/space category required to sustain the consumption of the area’s population. Moreover, we determined the CMA’s ecological supply by accounting for the amount per capita of each land/space category that the area is providing to the environment. Finally, the ecological balance is computed by subtracting the area’s footprint from the corresponding ecological supply. We e
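
    The underlying bookkeeping is simple per-capita arithmetic over land categories, as sketched below; all numbers are placeholders rather than CMA estimates.

```python
# per-capita demand and supply in global hectares (gha) per land category;
# the values are illustrative placeholders, not CMA data
footprint = {"energy": 3.1, "arable": 0.6, "forest": 0.3,
             "pasture": 0.2, "built-up": 0.1, "lake": 0.05}
supply    = {"energy": 0.0, "arable": 0.4, "forest": 0.2,
             "pasture": 0.1, "built-up": 0.1, "lake": 0.05}

ef = sum(footprint.values())     # ecological footprint (demand)
bc = sum(supply.values())        # biocapacity (ecological supply)
balance = bc - ef                # negative value indicates an ecological deficit
print(f"footprint={ef:.2f} gha, supply={bc:.2f} gha, balance={balance:.2f} gha")
```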

  1. Hand-held computer operating system program for collection of resident experience data.

    PubMed

    Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J

    2000-11-01

    To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems do. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.

  2. An infectious way to teach students about outbreaks.

    PubMed

    Cremin, Íde; Watson, Oliver; Heffernan, Alastair; Imai, Natsuko; Ahmed, Norin; Bivegete, Sandra; Kimani, Teresia; Kyriacou, Demetris; Mahadevan, Preveina; Mustafa, Rima; Pagoni, Panagiota; Sophiea, Marisa; Whittaker, Charlie; Beacroft, Leo; Riley, Steven; Fisher, Matthew C

    2018-06-01

    The study of infectious disease outbreaks is required to train today's epidemiologists. A typical way to introduce and explain key epidemiological concepts is through the analysis of a historical outbreak. There are, however, few training options that explicitly utilise real-time simulated stochastic outbreaks where the participants themselves comprise the dataset they subsequently analyse. In this paper, we present a teaching exercise in which an infectious disease outbreak is simulated over a five-day period and subsequently analysed. We iteratively developed the teaching exercise to offer additional insight into analysing an outbreak. An R package for visualisation, analysis and simulation of the outbreak data was developed to accompany the practical to reinforce learning outcomes. Computer simulations of the outbreak revealed deviations from observed dynamics, highlighting how simplifying assumptions conventionally made in mathematical models often differ from reality. Here we provide a pedagogical tool for others to use and adapt in their own settings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
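
    Although the accompanying package is written in R, the simulation idea can be sketched in a few lines of Python. The chain-binomial SIR model and its parameter values below are assumptions for illustration and do not reproduce the exercise's actual outbreak model.

```python
import numpy as np

def stochastic_sir(n=60, i0=1, beta=0.4, gamma=0.2, days=30, seed=1):
    """Chain-binomial SIR sketch: each day, susceptibles are infected with
    probability 1 - exp(-beta*I/N) and infecteds recover with prob 1 - exp(-gamma)."""
    rng = np.random.default_rng(seed)
    S, I, R = n - i0, i0, 0
    history = [(S, I, R)]
    for _ in range(days):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / n))
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))
    return history

for day, (S, I, R) in enumerate(stochastic_sir()):
    if day % 5 == 0:
        print(day, S, I, R)
```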

  3. Natural History of Ground-Glass Lesions Among Patients With Previous Lung Cancer.

    PubMed

    Shewale, Jitesh B; Nelson, David B; Rice, David C; Sepesi, Boris; Hofstetter, Wayne L; Mehran, Reza J; Vaporciyan, Ara A; Walsh, Garrett L; Swisher, Stephen G; Roth, Jack A; Antonoff, Mara B

    2018-06-01

    Among patients with previous lung cancer, the malignant potential of subsequent ground-glass opacities (GGOs) on computed tomography remains unknown, with a lack of consensus regarding surveillance and intervention. This study sought to describe the natural history of GGO in patients with a history of lung cancer. A retrospective review was performed of 210 patients with a history of lung cancer and ensuing computed tomography evidence of pure or mixed GGOs between 2007 and 2013. Computed tomography reports were reviewed to determine the fate of the GGOs, by classifying all lesions as stable, resolved, or progressive over the course of the study. Multivariable analysis was performed to identify predictors of GGO progression and resolution. The mean follow-up time was 13 months. During this period, 55 (26%) patients' GGOs were stable, 131 (62%) resolved, and 24 (11%) progressed. Of the 24 GGOs that progressed, three were subsequently diagnosed as adenocarcinoma. Patients of black race (odds ratio [OR], 0.26) and other races besides white (OR, 0.89) had smaller odds of GGO resolution (p = 0.033), whereas patients with previous lung squamous cell carcinoma (OR, 5.16) or small cell carcinoma (OR, 5.36) were more likely to experience GGO resolution (p < 0.001). On multivariable analysis, only a history of adenocarcinoma was an independent predictor of GGO progression (OR, 6.9; p = 0.011). Among patients with a history of lung cancer, prior adenocarcinoma emerged as a predictor of GGO progression, whereas a history of squamous cell carcinoma or small cell carcinoma and white race were identified as predictors of GGO resolution. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Altered swelling and ion fluxes in articular cartilage as a biomarker in osteoarthritis and joint immobilization: a computational analysis

    PubMed Central

    Manzano, Sara; Manzano, Raquel; Doblaré, Manuel; Doweidar, Mohamed Hamdy

    2015-01-01

    In healthy cartilage, mechano-electrochemical phenomena act together to maintain tissue homeostasis. Osteoarthritis (OA) and degenerative diseases disrupt this biological equilibrium by causing structural deterioration and subsequent dysfunction of the tissue. Swelling and ion flux alteration as well as abnormal ion distribution are proposed as primary indicators of tissue degradation. In this paper, we present an extension of a previous three-dimensional computational model of the cartilage behaviour developed by the authors to simulate the contribution of the main tissue components in its behaviour. The model considers the mechano-electrochemical events as concurrent phenomena in a three-dimensional environment. This model has been extended here to include the effect of repulsion of negative charges attached to proteoglycans. Moreover, we have studied the fluctuation of these charges owing to proteoglycan variations in healthy and pathological articular cartilage. In this sense, standard patterns of healthy and degraded tissue behaviour can be obtained, which could be a helpful diagnostic tool. By introducing measured properties of unhealthy cartilage into the computational model, the severity of tissue degeneration can be predicted, avoiding complex tissue extraction and subsequent in vitro analysis. In this work, the model has been applied to monitor and analyse cartilage behaviour at different stages of OA and in both short (four, six and eight weeks) and long-term (11 weeks) fully immobilized joints. Simulation results showed marked differences in the corresponding swelling phenomena, in outgoing cation fluxes and in cation distributions. Furthermore, long-term immobilized patients display similar swelling as well as fluxes and distribution of cations to patients in the early stages of OA; thus, preventive treatments are highly recommended to avoid tissue deterioration. PMID:25392400

  5. Synthesis and characterization of a helicene-based imidazolium salt and its application in organic molecular electronics.

    PubMed

    Storch, Jan; Zadny, Jaroslav; Strasak, Tomas; Kubala, Martin; Sykora, Jan; Dusek, Michal; Cirkva, Vladimir; Matejka, Pavel; Krbal, Milos; Vacek, Jan

    2015-02-02

    Herein we demonstrate the synthesis of a helicene-based imidazolium salt. The salt was prepared by starting from racemic 2-methyl[6]helicene, which undergoes radical bromination to yield 2-(bromomethyl)[6]helicene. Subsequent treatment with 1-butylimidazole leads to the corresponding salt 1-butyl-3-(2-methyl[6]helicenyl)-imidazolium bromide. The prepared salt was subsequently characterized by using NMR spectroscopy and X-ray analysis, various optical spectrometric techniques, and computational chemistry tools. Finally, the imidazolium salt was immobilized onto a SiO2 substrate as a crystalline or amorphous deposit. The deposited layers were used for the development of organic molecular semiconductor devices and the construction of a fully reversible humidity sensor. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. The Role of Remote Sensing in Assessing Forest Biomass in Appalachian South Carolina

    NASA Technical Reports Server (NTRS)

    Shain, W.; Nix, L.

    1982-01-01

    Information is presented on the use of color infrared aerial photographs and ground sampling methods to quantify standing forest biomass in Appalachian South Carolina. Local tree biomass equations are given and subsequent evaluation of stand density and size classes using remote sensing methods is presented. Methods of terrain analysis, environmental hazard rating, and subsequent determination of accessibility of forest biomass are discussed. Computer-based statistical analyses are used to expand individual cover-type specific ground sample data to area-wide cover type inventory figures based on aerial photographic interpretation and area measurement. Forest biomass data are presented for the study area in terms of discriminant size classes, merchantability limits, accessibility (as related to terrain and yield/harvest constraints), and potential environmental impact of harvest.

  7. Redefining diagnostic symptoms of depression using Rasch analysis: testing an item bank suitable for DSM-V and computer adaptive testing.

    PubMed

    Mitchell, Alex J; Smith, Adam B; Al-salihy, Zerak; Rahim, Twana A; Mahmud, Mahmud Q; Muhyaldin, Asma S

    2011-10-01

    We aimed to redefine the optimal self-report symptoms of depression suitable for creation of an item bank that could be used in computer adaptive testing or to develop a simplified screening tool for DSM-V. Four hundred subjects (200 patients with primary depression and 200 non-depressed subjects) living in Iraqi Kurdistan were interviewed. The Mini International Neuropsychiatric Interview (MINI) was used to define the presence of major depression (DSM-IV criteria). We examined symptoms of depression using four well-known scales delivered in Kurdish. The Partial Credit Model was applied to each instrument. Common-item equating was subsequently used to create an item bank and differential item functioning (DIF) explored for known subgroups. A symptom-level Rasch analysis reduced the original 45 items to 24 after the exclusion of 21 misfitting items. A further six items (CESD13 and CESD17, HADS-D4, HADS-D5 and HADS-D7, and CDSS3 and CDSS4) were removed due to misfit as the items were added together to form the item bank, and two items were subsequently removed following the DIF analysis by diagnosis (CESD20 and CDSS9, both of which were harder to endorse for women). Therefore the remaining optimal item bank consisted of 17 items and produced an area under the curve (AUC) of 0.987. Using a bank restricted to the optimal nine items revealed only minor loss of accuracy (AUC = 0.989, sensitivity 96%, specificity 95%). Finally, when restricted to only four items, accuracy remained high (AUC = 0.976; sensitivity 93%, specificity 96%). An item bank of 17 items may be useful in computer adaptive testing and nine or even four items may be used to develop a simplified screening tool for DSM-V major depressive disorder (MDD). Further examination of this item bank should be conducted in different cultural settings.

  8. Triclosan Computational Conformational Chemistry Analysis for Antimicrobial Properties in Polymers.

    PubMed

    Petersen, Richard C

    2015-03-01

    Triclosan is a diphenyl ether antimicrobial that has been analyzed by computational conformational chemistry for an understanding of Mechanomolecular Theory. Subsequent energy profile analysis, combined with easily seen three-dimensional chemistry structure models for the nonpolar molecule Triclosan, shows how single bond rotations can alternate rapidly at a polar and nonpolar interface. Bond rotations for the center ether oxygen atom of the two aromatic rings then expose or hide nonbonding lone-pair electrons for the oxygen atom depending on the polar nature of the immediate local molecular environment. Rapid bond movements can subsequently produce fluctuations as vibration energy. Consequently, related mechanical molecular movements, calculated as energy relationships by forces acting through different bond positions, can help improve on current Mechanomolecular Theory. A previous controversy, reported as a discrepancy in the literature, contends that bacteria could develop resistance to the Triclosan antimicrobial. However, clinical findings, documented carefully in government reports, have not identified a single case of bacterial resistance to Triclosan in over 40 years. As a result, Triclosan is recommended whenever there is a health benefit, consistent with a number of approvals for its use in healthcare devices. Since Triclosan is the most researched antimicrobial, meta-analysis of the literature combined with computational chemistry can describe new molecular conditions that were previously inaccessible to conventional chemistry methods. Triclosan vibrational energy can now explain the molecular disruption of bacterial membranes. Further, Triclosan mechanomolecular movements help illustrate its use in polymer matrix composites as an antimicrobial with two new additive properties: as a toughening agent to improve matrix fracture toughness against microcracking and as a hydrophobic wetting agent to help incorporate strengthening fibers. Interrelated Mechanomolecular Theory, whereby oxygen atom bond rotations or a nitrogen-type pyramidal inversion produce energy at a polar-nonpolar boundary, can better clarify membrane transport of other molecules, cell recognition/signaling/defense and enzyme molecular "mixing" action.

  9. Development of a Compact Eleven Feed Cryostat for the Patriot 12-m Antenna System

    NASA Technical Reports Server (NTRS)

    Beaudoin, Christopher; Kildal, Per-Simon; Yang, Jian; Pantaleev, Miroslav

    2010-01-01

    The Eleven antenna has constant beam width, constant phase center location, and low spillover over a decade bandwidth. Therefore, it can feed a reflector for high aperture efficiency (also called feed efficiency). It is equally important that the feed efficiency and its subefficiencies not be degraded significantly by installing the feed in a cryostat. The MIT Haystack Observatory, with guidance from Onsala Space Observatory and Chalmers University, has been working to integrate the Eleven antenna into a compact cryostat suitable for the Patriot 12-m antenna. Since the analysis of the feed efficiencies in this presentation is purely computational, we first demonstrate the validity of the computed results by comparing them to measurements. Subsequently, we analyze the dependence of the cryostat size on the feed efficiencies, and, lastly, the Patriot 12-m subreflector is incorporated into the computational model to assess the overall broadband efficiency of the antenna system.

  10. Non-numeric computation for high eccentricity orbits. [Earth satellite orbit perturbation

    NASA Technical Reports Server (NTRS)

    Sridharan, R.; Renard, M. L.

    1975-01-01

    Geocentric orbits of large eccentricity (e = 0.9 to 0.95) are significantly perturbed in cislunar space by the sun and moon. The time-history of the height of perigee, subsequent to launch, is particularly critical. The determination of 'launch windows' is mostly concerned with preventing the height of perigee from falling below its low initial value before the mission lifetime has elapsed. Between the extremes of high-accuracy digital integration of the equations of motion and of using an approximate, but very fast, stability criteria method, this paper is concerned with the development of a method of intermediate complexity using non-numeric computation. The computer is used as the theory generator to generalize Lidov's theory using six osculating elements. Symbolic integration is completely automated and the output is a set of condensed formulae well suited for repeated applications in launch window analysis. Examples of applications are given.

  11. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate such analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. According to the results of an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.

  12. Theoretical Modeling of Molecular and Electron Kinetic Processes. Volume I. Theoretical Formulation of Analysis and Description of Computer Program.

    DTIC Science & Technology

    1979-01-01

    synthesis proceeds by ignoring unacceptable syntax or other errors, protection against subsequent execution of a faulty reaction scheme can be...resulting TAPE9. During subroutine synthesis and reaction processing, a search is made (for each secondary electron collision encountered) to...program library, which can be catalogued and saved if any future specialized modifications (beyond the scope of the synthesis capability of LASER

  13. A Combined Multi-Material Euler/LaGrange Computational Analysis of Blast Loading Resulting from Detonation of Buried Landmines

    DTIC Science & Technology

    2008-01-01

    phenomena. The work of Bergeron et al. [7] was subsequently extended by Braid [8] to incorporate different charge sizes, soil types and improved...place, a series of water hoses is placed in the pit bottom to allow the introduction of water into the pit from the bottom. Next, approximately 14.2 m3 of... Braid, 17th International MABS Symposium, Las Vegas, USA, June 2002. [8]. M. P. Braid, Defence R&D Canada, Suffield Special Publication, DRES SSSP

  14. Stochastic Models for Closed Boundary Analysis: Part I. Representation and Reconstruction.

    DTIC Science & Technology

    1980-07-01

    discussed. In a subsequent paper we will consider the classification problem. * School of Electrical Engineering, Purdue University, West Lafayette, IN...1972. 2. T. S. Huang, "Coding of Two Tone Images," TR EE 77-10, School of Elec. Engr., Purdue University, W. Lafayette, IN 47907. 3. A. Oosterlink, A...Jan. 1977. 5. A. Ambler et al., "A Versatile computer controlled assembly system," Third Intl. Conf. on Art. Intel., 1973, pp. 298-303. 6. C. Rosen

  15. Collection of X-ray diffraction data from macromolecular crystals

    PubMed Central

    Dauter, Zbigniew

    2017-01-01

    Diffraction data acquisition is the final experimental stage of the crystal structure analysis. All subsequent steps involve mainly computer calculations. Optimally measured and accurate data make the structure solution and refinement easier and lead to more faithful interpretation of the final models. Here, the important factors in data collection from macromolecular crystals are discussed and strategies appropriate for various applications, such as molecular replacement, anomalous phasing, atomic-resolution refinement etc., are presented. Criteria useful for judging the diffraction data quality are also discussed. PMID:28573573

  16. Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN

    NASA Technical Reports Server (NTRS)

    Sheerer, T. J.

    1986-01-01

    The general purpose Finite Element Program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.

  17. Automated detection of prostate cancer in digitized whole-slide images of H and E-stained biopsy specimens

    NASA Astrophysics Data System (ADS)

    Litjens, G.; Ehteshami Bejnordi, B.; Timofeeva, N.; Swadi, G.; Kovacs, I.; Hulsbergen-van de Kaa, C.; van der Laak, J.

    2015-03-01

    Automated detection of prostate cancer in digitized H and E whole-slide images is an important first step for computer-driven grading. Most automated grading algorithms work on preselected image patches as they are too computationally expensive to calculate on the multi-gigapixel whole-slide images. An automated multi-resolution cancer detection system could reduce the computational workload for subsequent grading and quantification in two ways: by excluding areas of definitely normal tissue within a single specimen or by excluding entire specimens which do not contain any cancer. In this work we present a multi-resolution cancer detection algorithm geared towards the latter. The algorithm methodology is as follows: at a coarse resolution the system uses superpixels, color histograms and local binary patterns in combination with a random forest classifier to assess the likelihood of cancer. The five most suspicious superpixels are identified and at a higher resolution more computationally expensive graph and gland features are added to refine classification for these superpixels. Our methods were evaluated in a data set of 204 digitized whole-slide H and E stained images of MR-guided biopsy specimens from 163 patients. A pathologist exhaustively annotated the specimens for areas containing cancer. The performance of our system was evaluated using ten-fold cross-validation, stratified according to patient. Image-based receiver operating characteristic (ROC) analysis was subsequently performed where a specimen containing cancer was considered positive and specimens without cancer negative. We obtained an area under the ROC curve of 0.96 and a 0.4 specificity at a 1.0 sensitivity.
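
    A hedged sketch of the coarse-resolution triage step: a random forest scores per-superpixel feature vectors (colour-histogram and texture descriptors in the paper), and the five most suspicious superpixels are kept for the more expensive fine-resolution features. The feature dimensionality and the data below are synthetic placeholders, not the paper's descriptors or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# placeholder features per superpixel, e.g. colour-histogram bins plus texture stats
X_train = rng.random((500, 32))
y_train = rng.integers(0, 2, 500)          # 1 = cancer-suspicious, 0 = benign

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# coarse-resolution pass: score all superpixels of a new specimen, then keep
# the five most suspicious for refinement with expensive graph/gland features
X_slide = rng.random((120, 32))
scores = clf.predict_proba(X_slide)[:, 1]
top5 = np.argsort(scores)[-5:][::-1]
print("most suspicious superpixels:", top5, scores[top5])
```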

  18. Numerical Analysis of a Pulse Detonation Cross Flow Heat Load Experiment

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Naples, Andrew; Hoke, John L.; Schauer, Fred

    2011-01-01

    A comparison between experimentally measured and numerically simulated, time-averaged, point heat transfer rates in a pulse detonation engine (PDE) is presented. The comparison includes measurements and calculations for heat transfer to a cylinder in crossflow and to the tube wall itself using a novel spool design. Measurements are obtained at several locations and under several operating conditions. The measured and computed results are shown to be in substantial agreement, thereby validating the modeling approach. The model, which is based on computational fluid dynamics (CFD), is then used to interpret the results. A preheating of the incoming fuel charge is predicted, which results in increased volumetric flow and subsequent overfilling. The effect is validated with additional measurements.

  19. A Simple and Accurate Analysis of Conductivity Loss in Millimeter-Wave Helical Slow-Wave Structures

    NASA Astrophysics Data System (ADS)

    Datta, S. K.; Kumar, Lalit; Basu, B. N.

    2009-04-01

    Electromagnetic field analysis of a helix slow-wave structure was carried out and a closed-form expression was derived for the inductance per unit length of the transmission-line equivalent circuit of the structure, taking into account the actual helix tape dimensions and the surface current on the helix over the actual metallic area of the tape. The expression for the inductance per unit length, thus obtained, was used for estimating the increment in the inductance per unit length caused by penetration of the magnetic flux into the conducting surfaces following Wheeler's incremental inductance rule, which was subsequently interpreted for the attenuation constant of the propagating structure. The analysis is computationally simple and accurate, and it inherits the accuracy of 3D electromagnetic analysis by allowing the use of dispersion characteristics obtainable from any standard electromagnetic modeling. The approach was benchmarked against measurement for two practical structures, and excellent agreement was observed. The analysis was subsequently applied to demonstrate the effects of conductivity on the attenuation constant of a typical broadband millimeter-wave helical slow-wave structure with respect to helix materials and copper plating on the helix, surface finish of the helix, dielectric loading, and high-temperature operation; a comparative study of these aspects is presented.

  20. Virtual pools for interactive analysis and software development through an integrated Cloud environment

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.

    2011-12-01

    WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.

  1. Future trends in computer waste generation in India.

    PubMed

    Dwivedy, Maheshwar; Mittal, R K

    2010-11-01

    The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze their flow at the end of their useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates future projection of computer penetration rate utilizing their first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound case during 2025. Finally, we compare the future obsolete PC generation amount of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
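
    The forecasting step can be illustrated with a three-parameter logistic fit; the penetration series and starting values below are invented for illustration and are not the paper's Indian sales data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# illustrative penetration data (units per 1000 people), not the paper's series
years = np.arange(1995, 2011)
sales = np.array([1, 2, 3, 5, 7, 10, 14, 19, 26, 34, 44, 55, 66, 76, 84, 90], float)

(K, r, t0), _ = curve_fit(logistic, years, sales, p0=[150, 0.3, 2005])
forecast_2020 = logistic(2020, K, r, t0)
print(f"K={K:.0f}, r={r:.2f}, t0={t0:.0f}, projected penetration in 2020: {forecast_2020:.0f}")
```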

  2. A method for the measurement and analysis of ride vibrations of transportation systems

    NASA Technical Reports Server (NTRS)

    Catherines, J. J.; Clevenson, S. A.; Scholl, H. F.

    1972-01-01

    The measurement and recording of ride vibrations which affect passenger comfort in transportation systems and the subsequent data-reduction methods necessary for interpreting the data present exceptional instrumentation requirements and necessitate the use of computers for specialized analysis techniques. A method is presented for both measuring and analyzing ride vibrations of the type encountered in ground and air transportation systems. A portable system for measuring and recording low-frequency, low-amplitude accelerations and specialized data-reduction procedures are described. Sample vibration measurements in the form of statistical parameters representative of typical transportation systems are also presented to demonstrate the utility of the techniques.

  3. Optimization of binary thermodynamic and phase diagram data

    NASA Astrophysics Data System (ADS)

    Bale, Christopher W.; Pelton, A. D.

    1983-03-01

    An optimization technique based upon least squares regression is presented to permit the simultaneous analysis of diverse experimental binary thermodynamic and phase diagram data. Coefficients of polynomial expansions for the enthalpy and excess entropy of binary solutions are obtained which can subsequently be used to calculate the thermodynamic properties or the phase diagram. In an interactive computer-assisted analysis employing this technique, one can critically analyze a large number of diverse data in a binary system rapidly, in a manner which is fully self-consistent thermodynamically. Examples of applications to the Bi-Zn, Cd-Pb, PbCl2-KCl, LiCl-FeCl2, and Au-Ni binary systems are given.
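
    As an illustration of the least-squares step, the sketch below fits a Redlich-Kister-type polynomial to synthetic enthalpy-of-mixing data. Both the expansion form and the data points are assumptions for demonstration, not the paper's formalism or measurements.

```python
import numpy as np

# illustrative enthalpy-of-mixing data (J/mol) vs. mole fraction x, not real system values
x  = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
dH = np.array([-310, -560, -740, -850, -880, -840, -720, -530, -290], float)

# Redlich-Kister-type expansion: dH = x(1-x) * sum_k L_k (1-2x)^k
n_terms = 3
A = np.column_stack([x * (1 - x) * (1 - 2 * x) ** k for k in range(n_terms)])
L, *_ = np.linalg.lstsq(A, dH, rcond=None)
print("fitted coefficients L0..L2:", np.round(L, 1))
print("reconstructed dH at x=0.5:", float(A[4] @ L))
```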

  4. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
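
    For orientation, a serial toy version of the kind of model such simulators run is shown below: a leaky integrate-and-fire network with sparse random coupling. All parameters are illustrative assumptions, and nothing here reflects HRLSim's GPGPU implementation.

```python
import numpy as np

def lif_network(n=100, steps=1000, dt=1e-3, tau=0.02, v_th=1.0, seed=0):
    """Minimal leaky integrate-and-fire network: sparse random weights,
    noisy external drive, hard reset on threshold crossing."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.05, (n, n)) * (rng.random((n, n)) < 0.1)   # sparse coupling
    v = np.zeros(n)
    spikes = np.zeros(n, dtype=bool)
    total_spikes = 0
    for _ in range(steps):
        i_ext = 1.1 + rng.normal(0, 0.5, n)                        # external drive
        v += (-v + i_ext + W @ spikes) / tau * dt                  # leaky integration
        spikes = v >= v_th
        v[spikes] = 0.0                                            # reset after spike
        total_spikes += int(spikes.sum())
    return total_spikes / (n * steps * dt)                         # mean rate (Hz)

print("mean firing rate:", lif_network(), "Hz")
```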

  5. Lattice algebra approach to multispectral analysis of ancient documents.

    PubMed

    Valdiviezo-N, Juan C; Urcid, Gonzalo

    2013-02-01

    This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
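
    The unmixing step can be sketched separately from the lattice-memory endmember extraction: once endmember spectra are available, fractional abundances per pixel follow from a non-negative least-squares solve. The spectra and pixel below are invented placeholders, and this is not the paper's lattice algebra procedure.

```python
import numpy as np
from scipy.optimize import nnls

# illustrative endmember spectra (columns), e.g. parchment, undertext ink, overtext ink
E = np.array([[0.80, 0.20, 0.35],
              [0.75, 0.25, 0.30],
              [0.70, 0.45, 0.25],
              [0.65, 0.60, 0.20]])          # 4 spectral bands x 3 pigments

pixel = np.array([0.55, 0.52, 0.55, 0.55])  # observed mixed spectrum for one pixel
abund, resid = nnls(E, pixel)               # non-negative least-squares unmixing
abund /= abund.sum()                        # normalise to fractional proportions
print("fractional abundances:", np.round(abund, 2))
```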

  6. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and area under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, and all of their recordings had absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.

  7. 3D finite element modelling of sheet metal blanking process

    NASA Astrophysics Data System (ADS)

    Bohdal, Lukasz; Kukielka, Leon; Chodor, Jaroslaw; Kulakowska, Agnieszka; Patyk, Radoslaw; Kaldunski, Pawel

    2018-05-01

    Shearing processes such as the blanking of sheet metals are often used to prepare workpieces for subsequent forming operations. The use of FEM simulation for investigating and optimizing the blanking process is increasing. In the current literature, owing to the limited capability and large computational cost of three-dimensional (3D) analysis, blanking FEM simulations have been largely limited to two-dimensional (2D) plane axisymmetric problems. However, significant progress in modelling, taking into account the influence of the real material (e.g. its microstructure) as well as physical and technological conditions, can be obtained by using 3D numerical analysis methods in this area. The objective of this paper is to present a 3D finite element analysis of ductile fracture, strain distribution and stress in the blanking process under the assumption of geometrical and physical nonlinearities. The physical, mathematical and computer models of the process are elaborated. Dynamic effects, mechanical coupling, a constitutive damage law and contact friction are taken into account. The application in the ANSYS/LS-DYNA program is elaborated. The effect of the main process parameter, the blanking clearance, on the deformation of 1018 steel and the quality of the blank's sheared edge is analyzed. The results of the computer simulations can be used for forecasting the quality of the final parts and for process optimization.

  8. Information technology infusion model for health sector in a developing country: Nigeria as a case.

    PubMed

    Idowu, Bayo; Adagunodo, Rotimi; Adedoyin, Rufus

    2006-01-01

    To date, information technology (IT) has not been widely adopted in the health sector in developing countries. Information technology may improve health care delivery systems and is one of the prime movers of globalization. Information technology infusion is the degree to which different information technology tools are integrated into organizational activities. This study aimed to determine the degree and extent of incorporation of information technology in the Nigerian health sector, to derive IT infusion models for popular IT indicators in use in Nigeria (personal computers, mobile phones and the Internet), and subsequently to investigate their impact on the health care delivery system in Nigerian teaching hospitals. In this study, data were collected through questionnaires; oral interviews were also conducted and the data gathered were subsequently analyzed. The results of the analysis revealed that, of the three IT indicators considered, mobile phones are spreading fastest. They also revealed that computers and mobile phones are in use in all the teaching hospitals. Finally, IT infusion models were developed for the health sector in Nigeria from the data gathered through the questionnaires and oral interviews.

  9. X-ray micro computed tomography for the visualization of an atherosclerotic human coronary artery

    NASA Astrophysics Data System (ADS)

    Matviykiv, Sofiya; Buscema, Marzia; Deyhle, Hans; Pfohl, Thomas; Zumbuehl, Andreas; Saxer, Till; Müller, Bert

    2017-06-01

    Atherosclerosis refers to narrowing or blocking of blood vessels that can lead to a heart attack, chest pain or stroke. Constricted segments of diseased arteries exhibit considerably increased wall shear stress, compared to the healthy ones. One of the possibilities to improve patient’s treatment is the application of nano-therapeutic approaches, based on shear stress sensitive nano-containers. In order to tailor the chemical composition and subsequent physical properties of such liposomes, one has to know precisely the morphology of critically stenosed arteries at micrometre resolution. It is often obtained by means of histology, which has the drawback of offering only two-dimensional information. Additionally, it requires the artery to be decalcified before sectioning, which might lead to deformations within the tissue. Micro computed tomography (μCT) enables the three-dimensional (3D) visualization of soft and hard tissues at micrometre level. μCT allows lumen segmentation that is crucial for subsequent flow simulation analysis. In this communication, tomographic images of a human coronary artery before and after decalcification are qualitatively and quantitatively compared. We analyse the cross section of the diseased human coronary artery before and after decalcification, and calculate the lumen area of both samples.

  10. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. By and large, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D and in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  11. Computer Networking with the Victorian Correspondence School.

    ERIC Educational Resources Information Center

    Conboy, Ian

    During 1985 the Education Department installed two-way radios in 44 remote secondary schools in Victoria, Australia, to improve turn-around time for correspondence assignments. Subsequently, teacher supervisors at Melbourne's Correspondence School sought ways to further augment audio interactivity with computer networking. Computer equipment was…

  12. Quantification of substrate and cellular strains in stretchable 3D cell cultures: an experimental and computational framework.

    PubMed

    González-Avalos, P; Mürnseer, M; Deeg, J; Bachmann, A; Spatz, J; Dooley, S; Eils, R; Gladilin, E

    2017-05-01

    The mechanical cell environment is a key regulator of biological processes. In living tissues, cells are embedded into the 3D extracellular matrix and permanently exposed to mechanical forces. Quantification of the cellular strain state in a 3D matrix is therefore the first step towards understanding how physical cues determine single cell and multicellular behaviour. The majority of cell assays are, however, based on 2D cell cultures that lack many essential features of the in vivo cellular environment. Furthermore, nondestructive measurement of substrate and cellular mechanics requires appropriate computational tools for microscopic image analysis and interpretation. Here, we present an experimental and computational framework for generation and quantification of the cellular strain state in 3D cell cultures using a combination of a 3D substrate stretcher, multichannel microscopic imaging and computational image analysis. The 3D substrate stretcher enables deformation of living cells embedded in bead-labelled 3D collagen hydrogels. Local substrate and cell deformations are determined by tracking displacement of fluorescent beads with subsequent finite element interpolation of cell strains over a tetrahedral tessellation. In this feasibility study, we discuss diverse aspects of deformable 3D culture construction, quantification and evaluation, and present an example of its application for quantitative analysis of a cellular model system based on primary mouse hepatocytes undergoing transforming growth factor (TGF-β)-induced epithelial-to-mesenchymal transition. © 2017 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.

  13. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  14. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation.

    PubMed

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin

    2015-03-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.
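
    A much-simplified sketch of the volumetric measurement: after region labelling of an already-segmented implant mask, the one-voxel shell around the implant is inspected and the fraction of shell voxels occupied by bone is reported as a VBIC-like quantity. The morphological details and the synthetic volumes are illustrative assumptions, not the paper's region-based segmentation.

```python
import numpy as np
from scipy import ndimage

def volumetric_bic(implant_mask, bone_mask, contact_dist=1):
    """Toy sketch: label the implant region, dilate its surface by one voxel
    and report the fraction of that shell occupied by bone (a VBIC-like measure)."""
    labels, n = ndimage.label(implant_mask)                          # region labelling
    sizes = ndimage.sum(implant_mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)                       # keep largest region
    shell = ndimage.binary_dilation(largest, iterations=contact_dist) & ~largest
    return float((shell & bone_mask).sum()) / max(1, shell.sum())

# synthetic 3D volumes standing in for segmented micro-CT data
vol = np.zeros((40, 40, 40), bool)
vol[15:25, 15:25, 15:25] = True                                      # "implant"
bone = np.zeros_like(vol)
bone[10:30, 10:30, 10:30] = True                                     # surrounding "bone"
print("VBIC fraction:", volumetric_bic(vol, bone))
```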

  15. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  16. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  17. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  18. Geomechanical Analysis of Underground Coal Gasification Reactor Cool Down for Subsequent CO2 Storage

    NASA Astrophysics Data System (ADS)

    Sarhosis, Vasilis; Yang, Dongmin; Kempka, Thomas; Sheng, Yong

    2013-04-01

    Underground coal gasification (UCG) is an efficient method for the conversion of conventionally unmineable coal resources into energy and feedstock. If the UCG process is combined with the subsequent storage of process CO2 in the former UCG reactors, a near-zero carbon emission energy source can be realised. This study presents the development of a computational model to simulate the cooling process of UCG reactors after abandonment, in order to decrease the initial high temperature of more than 400 °C to a level where extensive CO2 volume expansion due to temperature changes is significantly reduced during CO2 injection. Furthermore, we predict the cool-down temperature conditions with and without water flushing. A state-of-the-art coupled thermal-mechanical model was developed using the finite element software ABAQUS to predict the cavity growth and the resulting surface subsidence. In addition, the multi-physics computational software COMSOL was employed to simulate the cavity cool-down process, which is of utmost relevance for CO2 storage in the former UCG reactors. For that purpose, we simulated fluid flow, thermal conduction and thermal convection processes between the fluid (water and CO2) and the solid represented by coal and surrounding rocks. Material properties for rocks and coal were obtained from extant literature sources and from geomechanical testing carried out on samples derived from a prospective demonstration site in Bulgaria. The analysis of results showed that the numerical models developed allowed for the determination of the UCG reactor growth, roof spalling, surface subsidence and heat propagation during the UCG process and the subsequent CO2 storage. It is anticipated that the results of this study can support optimisation of the preparation procedure for CO2 storage in former UCG reactors. The proposed scheme has been discussed previously but has not been validated by a coupled numerical analysis; if proved applicable, it could provide a significant optimisation of the UCG process by means of CO2 storage efficiency. The proposed coupled UCG-CCS scheme helps meet EU targets for greenhouse gas emissions and increases the yield from coal that would otherwise be impossible to exploit.

  19. Function library programming to support B89 evaluation of Sheffield Apollo RS50 DCC (Direct Computer Control) CMM (Coordinate Measuring Machine)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, R.N.

    1990-02-28

    The Inspection Shop at Lawrence Livermore Lab recently purchased a Sheffield Apollo RS50 Direct Computer Control Coordinate Measuring Machine. The performance of the machine was specified to conform to the B89 standard, which relies heavily upon using the measuring machine in its intended manner to verify its accuracy (rather than parametric tests). Although it would be possible to use the interactive measurement system to perform these tasks, a more thorough and efficient job can be done by creating Function Library programs for certain tasks which integrate Hewlett-Packard Basic 5.0 language and calls to proprietary analysis and machine control routines. This combination provides efficient use of the measuring machine with a minimum of keyboard input plus an analysis of the data with respect to the B89 Standard rather than a CMM analysis which would require subsequent interpretation. This paper discusses some characteristics of the Sheffield machine control and analysis software and my use of H-P Basic language to create automated measurement programs to support the B89 performance evaluation of the CMM. 1 ref.

  20. Design optimization of hydraulic turbine draft tube based on CFD and DOE method

    NASA Astrophysics Data System (ADS)

    Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin

    2018-03-01

    In order to improve the performance of the hydraulic turbine draft tube during its design process, the draft tube is optimized on a multi-disciplinary collaborative design optimization platform by combining computational fluid dynamics (CFD) and design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective function is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performances are evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are carried out. The optimal values of the geometrical design variables are then determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
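
    As a hedged, minimal sketch of the sampling-plus-surrogate loop described above: the code below uses a plain random Latin hypercube (not the optimal Latin hypercube of the paper) and an analytic stand-in for the CFD evaluation of Cp, since those runs cannot be reproduced from the abstract.

```python
# Sketch: Latin hypercube sampling of two shape parameters + quadratic response surface.
# "cfd_cp" is a hypothetical stand-in for the CFD-computed pressure recovery factor.
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    """A random (not optimal) Latin hypercube design on [0, 1]^d."""
    cells = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (cells + rng.random((n, d))) / n

def cfd_cp(x):  # stand-in objective with a single interior optimum
    a, b = x[:, 0], x[:, 1]
    return 0.8 - (a - 0.6)**2 - 0.5*(b - 0.4)**2 + 0.1*a*b

X = latin_hypercube(20, 2)            # 20 design points, 2 geometric variables
y = cfd_cp(X)

# quadratic response surface: Cp ~ 1, a, b, a^2, b^2, a*b (least squares fit)
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# evaluate the fitted surface on a fine grid and keep its best point
g = np.linspace(0, 1, 101)
aa, bb = np.meshgrid(g, g)
P = np.column_stack([np.ones(aa.size), aa.ravel(), bb.ravel(),
                     aa.ravel()**2, bb.ravel()**2, (aa*bb).ravel()])
best = np.argmax(P @ coef)
print("surrogate optimum (a, b):", aa.ravel()[best], bb.ravel()[best])
```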

  1. The COREL and W12SC3 computer programs for supersonic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Mason, W. H.; Rosen, B. S.

    1983-01-01

    Two computer codes useful in the supersonic aerodynamic design of wings, including the supersonic maneuver case, are described. The COREL code solves the nonlinear full potential equation to analyze a spanwise section of the wing in the crossflow plane, assuming conical flow over the section. A subsequent approximate correction to the solution can be made to account for nonconical effects. In COREL, the flow field is assumed to be irrotational (Mach numbers normal to shock waves less than about 1.3), and the full potential equation is solved to obtain detailed results for the leading edge expansion, supercritical crossflow, and any crossflow shock waves. W12SC3 is a linear-theory panel method which combines and extends elements of several of Woodward's codes, with emphasis on fighter applications. After a brief review of the aerodynamic theory used by each method, the use of the codes is illustrated with several examples, detailed input instructions, and a sample case.

  2. Finite Element Analysis of Flexural Vibrations in Hard Disk Drive Spindle Systems

    NASA Astrophysics Data System (ADS)

    LIM, SEUNGCHUL

    2000-06-01

    This paper is concerned with the flexural vibration analysis of the hard disk drive (HDD) spindle system by means of the finite element method. In contrast to previous research, every system component is analytically modelled here, taking into account its structural flexibility and also the centrifugal effect, particularly on the disk. To prove the effectiveness and accuracy of the formulated models, commercial HDD systems with two and three identical disks are selected as examples. Their major natural modes are then computed with only a small number of element meshes as the shaft rotational speed is varied, and subsequently compared with existing numerical results obtained using other methods and with newly acquired experimental ones. Based on this series of studies, the proposed method can be considered a very promising tool for the design of HDDs and various other high-performance computer disk drives such as floppy disk drives, CD-ROM drives, and their variations having spindle mechanisms similar to those of HDDs.

  3. Imaging and Analysis of Void-defects in Solder Joints Formed in Reduced Gravity using High-Resolution Computed Tomography

    NASA Technical Reports Server (NTRS)

    Easton, John W.; Struk, Peter M.; Rotella, Anthony

    2008-01-01

    As a part of efforts to develop an electronics repair capability for long duration space missions, techniques and materials for soldering components on a circuit board in reduced gravity must be developed. This paper presents results from testing solder joint formation in low gravity on a NASA Reduced Gravity Research Aircraft. The results presented include joints formed using eutectic tin-lead solder and one of the following fluxes: (1) a no-clean flux core, (2) a rosin flux core, and (3) a solid solder wire with external liquid no-clean flux. The solder joints are analyzed with a computed tomography (CT) technique which imaged the interior of the entire solder joint. This replaced an earlier technique that required the solder joint to be destructively ground down revealing a single plane which was subsequently analyzed. The CT analysis technique is described and results presented with implications for future testing as well as implications for the overall electronics repair effort discussed.

  4. Computer-based analysis of microvascular alterations in a mouse model for Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Heinzer, Stefan; Müller, Ralph; Stampanoni, Marco; Abela, Rafael; Meyer, Eric P.; Ulmann-Schuler, Alexandra; Krucker, Thomas

    2007-03-01

    Vascular factors associated with Alzheimer's disease (AD) have recently gained increased attention. To investigate changes in vascular, particularly microvascular architecture, we developed a hierarchical imaging framework to obtain large-volume, high-resolution 3D images from brains of transgenic mice modeling AD. In this paper, we present imaging and data analysis methods which allow compiling unique characteristics from several hundred gigabytes of image data. Image acquisition is based on desktop micro-computed tomography (µCT) and local synchrotron-radiation µCT (SRµCT) scanning with a nominal voxel size of 16 µm and 1.4 µm, respectively. Two visualization approaches were implemented: stacks of Z-buffer projections for fast data browsing, and progressive-mesh based surface rendering for detailed 3D visualization of the large datasets. In a first step, image data was assessed visually via a Java client connected to a central database. Identified characteristics of interest were subsequently quantified using global morphometry software. To obtain even deeper insight into microvascular alterations, tree analysis software was developed providing local morphometric parameters such as number of vessel segments or vessel tortuosity. In the context of ever increasing image resolution and large datasets, computer-aided analysis has proven both powerful and indispensable. The hierarchical approach maintains the context of local phenomena, while proper visualization and morphometry provide the basis for detailed analysis of the pathology related to structure. Beyond analysis of microvascular changes in AD this framework will have significant impact considering that vascular changes are involved in other neurodegenerative diseases as well as in cancer, cardiovascular disease, asthma, and arthritis.

  5. Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.

    2001-01-01

    Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.

  6. Using the Computer in Evolution Studies

    ERIC Educational Resources Information Center

    Mariner, James L.

    1973-01-01

    Describes a high school biology exercise in which a computer greatly reduces time spent on calculations. Genetic equilibrium demonstrated by the Hardy-Weinberg principle and the subsequent effects of violating any of its premises are more readily understood when frequencies of alleles through many generations are calculated by the computer. (JR)
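
    The calculation the exercise hands to the computer is a simple recurrence on allele frequencies from one generation to the next. A minimal sketch of that iteration, where the selection coefficient against one homozygote is an arbitrary example of violating a Hardy-Weinberg premise:

```python
# Sketch: allele frequency of A over generations, with selection against the aa
# homozygote (selection coefficient s = 0.3 is an assumed example, s = 0 gives equilibrium).
def next_p(p, s=0.0):
    """Frequency of allele A in the next generation, fitnesses AA=1, Aa=1, aa=1-s."""
    q = 1.0 - p
    mean_fitness = p*p + 2*p*q + (1.0 - s)*q*q
    return (p*p + p*q) / mean_fitness

p = 0.5
for gen in range(1, 21):
    p = next_p(p, s=0.3)
    if gen % 5 == 0:
        print(f"generation {gen:2d}: freq(A) = {p:.3f}, genotypes "
              f"AA={p*p:.3f} Aa={2*p*(1-p):.3f} aa={(1-p)**2:.3f}")
```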

  7. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, A.

    1986-01-01

    The Finite Element-Transfer Matrix (FETM) method was developed to reduce the computations involved in the analysis of structures. This widely accepted method, however, has certain limitations, and does not directly produce reduced models for control design. To overcome these shortcomings, a modification of the FETM method has been developed. The modified FETM method easily produces reduced models that are tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open-loop frequencies and mode shapes with fewer computations, (2) overcome limitations of the original FETM method, and (3) simplify the procedures for output feedback, constrained compensation, and decentralized control. This semiannual report presents the development of the modified FETM and, through an example, illustrates its applicability to an output feedback and a decentralized control design.

  8. Manned systems utilization analysis (study 2.1). Volume 5: Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The LOVES computer code developed to investigate the concept of space servicing operational satellites as an alternative to replacing expendable satellites or returning satellites to earth for ground refurbishment is presented. In addition to having the capability to simulate the expendable satellite operation and the ground refurbished satellite operation, the program is designed to simulate the logistics of space servicing satellites using an upper stage vehicle and/or the earth to orbit shuttle. The program not only provides for the initial deployment of the satellite but also simulates the random failure and subsequent replacement of various equipment modules comprising the satellite. The program has been used primarily to conduct trade studies and/or parametric studies of various space program operational philosophies.

  9. Extension of a nonlinear systems theory to general-frequency unsteady transonic aerodynamic responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.
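
    The prediction step described above amounts to convolving the identified kernels with the input: the first-order kernel by ordinary convolution and the second-order kernel by a double sum over pairs of past inputs. A toy sketch of that arithmetic follows; the kernels here are invented placeholders, not the CAP-TSD impulse responses:

```python
# Sketch: predicting a response from first- and second-order Volterra kernels.
import numpy as np

dt = 0.01
n = 200
t = np.arange(n) * dt
h1 = np.exp(-3*t) * np.sin(10*t)                  # assumed first-order kernel
h2 = 0.2 * np.outer(np.exp(-5*t), np.exp(-5*t))   # assumed second-order kernel

u = np.sin(2*np.pi*1.5*t)                         # arbitrary input history

# first-order (linear) term: y1[m] = dt * sum_k h1[k] * u[m-k]
y1 = dt * np.convolve(u, h1)[:n]

# second-order term: y2[m] = dt^2 * sum_j sum_k h2[j,k] * u[m-j] * u[m-k]
U = np.zeros((n, n))                              # U[k, m] = u[m-k], zero for m < k
for k in range(n):
    U[k, k:] = u[:n-k]
y2 = dt**2 * np.einsum('jk,jm,km->m', h2, U, U)

y = y1 + y2                                       # linear term + leading nonlinear correction
print(f"peak linear term {np.abs(y1).max():.4f}, peak nonlinear term {np.abs(y2).max():.4f}")
```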

  10. An evaluation of three methods of saying "no" to avoid an escalating response class hierarchy.

    PubMed

    Mace, F Charles; Pratt, Jamie L; Prager, Kevin L; Pritchard, Duncan

    2011-01-01

    We evaluated the effects of three different methods of denying access to requested high-preference activities on escalating problem behavior. Functional analysis and response class hierarchy (RCH) assessment results indicated that 4 topographies of problem behaviors displayed by a 13-year-old boy with high-functioning autism constituted an RCH maintained by positive (tangible) reinforcement. Identification of the RCH comprised the baseline phase, during which computer access was denied by saying "no" and providing an explanation for the restriction. Two alternative methods of saying "no" were then evaluated. These methods included (a) denying computer access while providing an opportunity to engage in an alternative preferred activity and (b) denying immediate computer access by arranging a contingency between completion of a low-preference task and subsequent computer access. Results indicated that a hierarchy of problem behavior may be identified in the context of denying access to a preferred activity and that it may be possible to prevent occurrences of escalating problem behavior by either presenting alternative options or arranging contingencies when saying "no" to a child's requests.

  11. On the precision of aero-thermal simulations for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Thompson, Hugh

    2016-08-01

    Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.

  12. Brain-computer interfaces in neurological rehabilitation.

    PubMed

    Daly, Janis J; Wolpaw, Jonathan R

    2008-11-01

    Recent advances in analysis of brain signals, training patients to control these signals, and improved computing capabilities have enabled people with severe motor disabilities to use their brain signals for communication and control of objects in their environment, thereby bypassing their impaired neuromuscular system. Non-invasive, electroencephalogram (EEG)-based brain-computer interface (BCI) technologies can be used to control a computer cursor or a limb orthosis, for word processing and accessing the internet, and for other functions such as environmental control or entertainment. By re-establishing some independence, BCI technologies can substantially improve the lives of people with devastating neurological disorders such as advanced amyotrophic lateral sclerosis. BCI technology might also restore more effective motor control to people after stroke or other traumatic brain disorders by helping to guide activity-dependent brain plasticity by use of EEG brain signals to indicate to the patient the current state of brain activity and to enable the user to subsequently lower abnormal activity. Alternatively, by use of brain signals to supplement impaired muscle control, BCIs might increase the efficacy of a rehabilitation protocol and thus improve muscle control for the patient.

  13. De novo self-assembling collagen heterotrimers using explicit positive and negative design.

    PubMed

    Xu, Fei; Zhang, Lei; Koder, Ronald L; Nanda, Vikas

    2010-03-23

    We sought to computationally design model collagen peptides that specifically associate as heterotrimers. Computational design has been successfully applied to the creation of new protein folds and functions. Despite the high abundance of collagen and its key role in numerous biological processes, fibrous proteins have received little attention as computational design targets. Collagens are composed of three polypeptide chains that wind into triple helices. We developed a discrete computational model to design heterotrimer-forming collagen-like peptides. Stability and specificity of oligomerization were concurrently targeted using a combined positive and negative design approach. The sequences of three 30-residue peptides, A, B, and C, were optimized to favor charge-pair interactions in an ABC heterotrimer, while disfavoring the 26 competing oligomers (e.g., AAA, ABB, BCA). Peptides were synthesized and characterized for thermal stability and triple-helical structure by circular dichroism and NMR. A unique A:B:C-type species was not achieved. Negative design was partially successful, with only the competing A + B and B + C mixtures formed. Analysis of computed versus experimental stabilities helps to clarify the role of electrostatics and secondary-structure propensities in determining collagen stability and provides important insight into how subsequent designs can be improved.

  14. Biomarkers in Computational Toxicology

    EPA Science Inventory

    Biomarkers are a means to evaluate chemical exposure and/or the subsequent impacts on toxicity pathways that lead to adverse health outcomes. Computational toxicology can integrate biomarker data with knowledge of exposure, chemistry, biology, pharmacokinetics, toxicology, and e...

  15. Application of Semantic Tagging to Generate Superimposed Information on a Digital Encyclopedia

    NASA Astrophysics Data System (ADS)

    Garrido, Piedad; Tramullas, Jesus; Martinez, Francisco J.

    Several works in the literature address the automatic or semi-automatic processing of textual documents containing historical information using free software technologies. However, more research is needed to integrate analysis of the context and to cover the peculiarities of the Spanish language from a semantic point of view. This research work proposes a novel knowledge-based strategy that combines subject-centric computing, a topic-oriented approach, and superimposed information. Its subsequent combination with artificial intelligence techniques led to an automatic analysis, after implementing a made-to-measure interpreted algorithm which, in turn, produced a good number of associations and events with 90% reliability.

  16. The Analysis of Fluorescence Decay by a Method of Moments

    PubMed Central

    Isenberg, Irvin; Dyson, Robert D.

    1969-01-01

    The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
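
    For a finite sum of exponentials, the moments of the decay are tied to the amplitudes and lifetimes by simple closed-form relations, which is what the method exploits. The sketch below recovers a two-component decay from its first four moments on noise-free simulated data; it omits the exponential depression and cutoff corrections of the full method, and all parameter values are assumed for illustration:

```python
# Sketch: recover f(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2) from its moments
# mu_k = integral t^k f(t) dt = k! * G_{k+1}, where G_s = a1*tau1^s + a2*tau2^s.
import numpy as np
from math import factorial

a1, tau1, a2, tau2 = 1.0, 2.0, 0.5, 9.0               # "true" parameters (assumed example)
t = np.arange(0.0, 200.0, 0.01)                       # long window so truncation is negligible
f = a1*np.exp(-t/tau1) + a2*np.exp(-t/tau2)

dt = t[1] - t[0]
mu = [np.sum(t**k * f) * dt for k in range(4)]        # mu_0 .. mu_3 (rectangle rule)
G = [mu[k] / factorial(k) for k in range(4)]          # G[k] holds G_{k+1}

# tau1, tau2 are the roots of x^2 - p*x + q, where G_{s+2} = p*G_{s+1} - q*G_s
A = np.array([[G[1], -G[0]],
              [G[2], -G[1]]])
p, q = np.linalg.solve(A, np.array([G[2], G[3]]))
taus = np.roots([1.0, -p, q])

# amplitudes from G_1 = a1*tau1 + a2*tau2 and G_2 = a1*tau1^2 + a2*tau2^2
amps = np.linalg.solve(np.array([taus, taus**2]), np.array([G[0], G[1]]))
print("recovered lifetimes:", np.round(taus, 3), "amplitudes:", np.round(amps, 3))
```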

  17. Emerging role of multi-detector computed tomography in the diagnosis of hematuria following percutaneous nephrolithotomy: A case scenario.

    PubMed

    Sivanandam, S E; Mathew, Georgie; Bhat, Sanjay H

    2009-07-01

    Persistent hematuria is one of the most dreaded complications following percutanous nephrolithotomy (PCNL). Although invasive, a catheter-based angiogram is usually used to localize the bleeding vessel and subsequently embolize it. Advances in imaging technology have now made it possible to use a non invasive multi-detector computed tomography (MDCT) angiogram with 3-D reconstruction to establish the diagnosis. We report a case of post-PCNL hemorrhage due to a pseudo aneurysm that was missed by a conventional angiogram and subsequently detected on MDCT angiogram.

  18. Satellite applications to a coastal inlet study, Clearwater Beach, Florida

    NASA Technical Reports Server (NTRS)

    Wang, Y. H.; Smutz, M.; Ruth, B. E.; Brooks, H. K.

    1977-01-01

    Two sets of LANDSAT magnetic tapes were obtained and displayed on the screen of an IMAGE 100 computer. Spectral analysis was performed to produce various signatures, their extent and location. Subsequent ground truth observations and measurements were gathered by means of hydrographic surveys and low-altitude aerial photography for interpretation and calibration of the LANDSAT data. Finally, a coastal engineering assessment based on the LANDSAT data was made. Recommendations to the City of Clearwater regarding the navigational channel alignment and dredging practice are presented in the light of the inlet stability.

  19. New atmospheric sensor analysis study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1989-01-01

    The functional capabilities of the ESAD Research Computing Facility are discussed. The system is used in processing atmospheric measurements which are used in the evaluation of sensor performance, conducting design-concept simulation studies, and also in modeling the physical and dynamical nature of atmospheric processes. The results may then be evaluated to furnish inputs into the final design specifications for new space sensors intended for future Spacelab, Space Station, and free-flying missions. In addition, data gathered from these missions may subsequently be analyzed to provide better understanding of requirements for numerical modeling of atmospheric phenomena.

  20. Design of three-dimensional scramjet inlets for hypersonic propulsion

    NASA Technical Reports Server (NTRS)

    Simmons, J. M.; Weidner, E. H.

    1986-01-01

    The paper outlines an approach to the design of three-dimensional inlets for scramjet engines. The basis of the techniques used is the method of streamline tracing through an inviscid axisymmetric flow field. A technique is described for making a smooth change of cross-section shape from rectangular to circular. A feature is the considerable use of computer-graphics to provide a 'user-oriented' procedure which can produce promising design configurations for subsequent analysis with CFD codes. An example is given to demonstrate the capabilities of the design techniques.

  1. Proceedings of a Workshop on Calibration and Application of Hydrologic Models Held in Gulf Shores, Alabama on October 18-20, 1988

    DTIC Science & Technology

    1988-12-01

    of a frequency analysis, some form of hydraulic model should be used to verify the conveyance capacity of the floodway. Negative skewness of the...minimum of 40 years of record was used to compute the 100-year to 2-year ratios which were subsequently used to develop the isopluvial maps. A partial ...applying the 100-year TP-40 precipitation adjusted to reproduce the expected probability and partial duration adjustments to the HEC-1 model. Once the

  2. Ecology and exploration of the rare biosphere.

    PubMed

    Lynch, Michael D J; Neufeld, Josh D

    2015-04-01

    The profound influence of microorganisms on human life and global biogeochemical cycles underlines the value of studying the biogeography of microorganisms, exploring microbial genomes and expanding our understanding of most microbial species on Earth: that is, those present at low relative abundance. The detection and subsequent analysis of low-abundance microbial populations—the 'rare biosphere'—have demonstrated the persistence, population dynamics, dispersion and predation of these microbial species. We discuss the ecology of rare microbial populations, and highlight molecular and computational methods for targeting taxonomic 'blind spots' within the rare biosphere of complex microbial communities.

  3. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
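
    Stripped to its concurrency pattern, the claim describes an advance loop that parks its thread when no events are pending for its context and is awakened when a subsequent event arrives. A loose, hedged analogue of that pattern using a generic condition variable (not the PAMI implementation):

```python
# Sketch of the wait/awaken pattern: an "advance" loop sleeps on a condition
# variable while its event queue is empty and is woken when an event arrives.
import threading, queue, time

events = queue.Queue()
cond = threading.Condition()
done = False

def advance():
    while True:
        with cond:
            while events.empty() and not done:
                cond.wait()                 # place the thread into a wait state
        if done and events.empty():
            break
        ev = events.get()
        print("processing subsequent event:", ev)

t = threading.Thread(target=advance)
t.start()

for i in range(3):
    time.sleep(0.1)
    events.put(f"msg-{i}")
    with cond:
        cond.notify()                       # awaken the waiting advance thread

with cond:
    done = True
    cond.notify()
t.join()
```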

  4. A subsequent closed-form description of propagated signaling phenomena in the membrane of an axon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melendy, Robert F., E-mail: rfmelendy@liberty.edu

    2016-05-15

    I recently introduced a closed-form description of propagated signaling phenomena in the membrane of an axon [R.F. Melendy, Journal of Applied Physics 118, 244701 (2015)]. Those results demonstrate how intracellular conductance, the thermodynamics of magnetization, and current modulation function together in generating an action potential in a unified, closed-form description. At present, I report on a subsequent closed-form model that unifies intracellular conductance and the thermodynamics of magnetization with the membrane electric field, E_m. It is anticipated that this work will compel researchers in biophysics, physical biology, and the computational neurosciences to probe deeper into the classical and quantum features of membrane magnetization and signaling, informed by the computational features of this subsequent model.

  5. Dyadic Instruction for Middle School Students: Liking Promotes Learning

    PubMed Central

    Hartl, Amy C.; DeLay, Dawn; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy

    2015-01-01

    This study examines whether friendship facilitates or hinders learning in a dyadic instructional setting. Working in 80 same-sex pairs, 160 (60 girls, 100 boys) middle school students (M = 12.13 years old) were taught a new computer programming language and programmed a game. Students spent 14 to 30 (M = 22.7) hours in a programming class. At the beginning and the end of the project, each participant separately completed (a) computer programming knowledge assessments and (b) questionnaires rating their affinity for their partner. Results support the proposition that liking promotes learning: Greater partner affinity predicted greater subsequent increases in computer programming knowledge for both partners. One partner’s initial programming knowledge also positively predicted the other partner’s subsequent partner affinity. PMID:26688658

  6. Cost analysis for computer supported multiple-choice paper examinations

    PubMed Central

    Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank

    2011-01-01

    Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For the past three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were conducted with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. Results: The effort for formatting and subsequent analysis (including adjustments of the analysis) of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in the winter semester 2009/2010, about 2 hours in SS 2010, and about 1.5 hours in the winter semester 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours for WS 10/11. Discussion: For conventional multiple-choice exams, the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams, and compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam. PMID:22205913

  7. Cost analysis for computer supported multiple-choice paper examinations.

    PubMed

    Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank

    2011-01-01

    Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. For the past three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were conducted with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. The effort for formatting and subsequent analysis (including adjustments of the analysis) of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in the winter semester 2009/2010, about 2 hours in SS 2010, and about 1.5 hours in the winter semester 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours for WS 10/11. For conventional multiple-choice exams, the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams, and compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam.

  8. Computational challenges in modeling gene regulatory events.

    PubMed

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  9. 20 CFR 404.252 - Subsequent entitlement to benefits 12 months or more after entitlement to disability benefits ended.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing... situation, we compute your second-entitlement primary insurance amount by selecting the higher of the following: (a) New primary insurance amount. The primary insurance amount computed as of the time of your...

  10. 20 CFR 404.251 - Subsequent entitlement to benefits less than 12 months after entitlement to disability benefits...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing...) Disability before 1979; second entitlement after 1978. In this situation, we compute your second-entitlement... primary insurance amount computed for you as of the time of your second entitlement under any method for...

  11. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two cases of use: First, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated the successful incorporation of it with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high throughput computation.
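
    The ranking step described above boils down to scoring each candidate vector's match values on the labelled positive and negative regions with an ROC AUC and keeping the best scorer. A simplified sketch of that selection loop, with random numbers standing in for the SIVQ ring-vector match maps:

```python
# Sketch: rank candidate vectors by ROC AUC computed from their match scores on
# user-labelled positive and negative "ground truth" pixels.
import numpy as np

rng = np.random.default_rng(1)

def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg), ties ignored."""
    all_scores = np.concatenate([pos_scores, neg_scores])
    ranks = all_scores.argsort().argsort() + 1        # 1-based ranks
    r_pos = ranks[:len(pos_scores)].sum()
    u = r_pos - len(pos_scores) * (len(pos_scores) + 1) / 2
    return u / (len(pos_scores) * len(neg_scores))

n_candidates = 50
best = None
for v in range(n_candidates):
    # hypothetical match scores a candidate ring vector would produce on each region
    quality = rng.random()                            # latent "goodness" of this vector
    pos = rng.normal(loc=quality, scale=0.5, size=200)
    neg = rng.normal(loc=0.0, scale=0.5, size=200)
    a = auc(pos, neg)
    if best is None or a > best[1]:
        best = (v, a)

print(f"selected vector #{best[0]} with AUC = {best[1]:.3f}")
```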

  12. 'Tagger' - a Mac OS X Interactive Graphical Application for Data Inference and Analysis of N-Dimensional Datasets in the Natural Physical Sciences.

    NASA Astrophysics Data System (ADS)

    Morse, P. E.; Reading, A. M.; Lueg, C.

    2014-12-01

    Pattern-recognition in scientific data is not only a computational problem but a human-observer problem as well. Human observation of - and interaction with - data visualization software can augment, select, interrupt and modify computational routines and facilitate processes of pattern and significant feature recognition for subsequent human analysis, machine learning, expert and artificial intelligence systems. 'Tagger' is a Mac OS X interactive data visualisation tool that facilitates Human-Computer interaction for the recognition of patterns and significant structures. It is a graphical application developed using the Quartz Composer framework. 'Tagger' follows a Model-View-Controller (MVC) software architecture: the application problem domain (the model) is to facilitate novel ways of abstractly representing data to a human interlocutor, presenting these via different viewer modalities (e.g. chart representations, particle systems, parametric geometry) to the user (View) and enabling interaction with the data (Controller) via a variety of Human Interface Devices (HID). The software enables the user to create an arbitrary array of tags that may be appended to the visualised data, which are then saved into output files as forms of semantic metadata. Three fundamental problems that are not strongly supported by conventional scientific visualisation software are addressed: 1] How to visually animate data over time, 2] How to rapidly deploy unconventional parametrically driven data visualisations, 3] How to construct and explore novel interaction models that capture the activity of the end-user as semantic metadata that can be used to computationally enhance subsequent interrogation. Saved tagged data files may be loaded into Tagger, so that tags may be tagged, if desired. Recursion opens up the possibility of refining or overlapping different types of tags, tagging a variety of different POIs or types of events, and of capturing different types of specialist observations of important or noticeable events. Other visualisations and modes of interaction will also be demonstrated, with the aim of discovering knowledge in large datasets in the natural, physical sciences. (Fig. 1 caption: Wave height data from an oceanographic Wave Rider Buoy; colors/radii are driven by wave height data.)

  13. Image-Based Macro-Micro Finite Element Models of a Canine Femur with Implant Design Implications

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath; Krishnan, Ganapathi; Dyce, Jonathan

    2006-06-01

    In this paper, a comprehensive model of a bone-cement-implant assembly is developed for a canine cemented femoral prosthesis system. Various steps in this development entail profiling the canine femur contours by computed tomography (CT) scanning, computer aided design (CAD) reconstruction of the canine femur from CT images, CAD modeling of the implant from implant blue prints and CAD modeling of the interface cement. Finite element analysis of the macroscopic assembly is conducted for stress analysis in individual components of the system, accounting for variation in density and material properties in the porous bone material. A sensitivity analysis is conducted with the macroscopic model to investigate the effect of implant design variables on the stress distribution in the assembly. Subsequently, rigorous microstructural analysis of the bone incorporating the morphological intricacies is conducted. Various steps in this development include acquisition of the bone microstructural data from histological serial sectioning, stacking of sections to obtain 3D renderings of void distributions, microstructural characterization and determination of properties and, finally, microstructural stress analysis using a 3D Voronoi cell finite element method. Generation of the simulated microstructure and analysis by the 3D Voronoi cell finite element model provides a new way of modeling complex microstructures and correlating to morphological characteristics. An inverse calculation of the material parameters of bone by combining macroscopic experiments with microstructural characterization and analysis provides a new approach to evaluating properties without having to do experiments at this scale. Finally, the microstructural stresses in the femur are computed using the 3D VCFEM to study the stress distribution at the scale of the bone porosity. Significant difference is observed between the macroscopic stresses and the peak microscopic stresses at different locations.

  14. Computationally assisted screening and design of cell-interactive peptides by a cell-based assay using peptide arrays and a fuzzy neural network algorithm.

    PubMed

    Kaga, Chiaki; Okochi, Mina; Tomita, Yasuyuki; Kato, Ryuji; Honda, Hiroyuki

    2008-03-01

    We developed a method of effective peptide screening that combines experiments and computational analysis. The method is based on the concept that screening efficiency can be enhanced from even limited data by use of a model derived from computational analysis that serves as a guide to screening and combining the model with subsequent repeated experiments. Here we focus on cell-adhesion peptides as a model application of this peptide-screening strategy. Cell-adhesion peptides were screened by use of a cell-based assay of a peptide array. Starting with the screening data obtained from a limited, random 5-mer library (643 sequences), a rule regarding structural characteristics of cell-adhesion peptides was extracted by fuzzy neural network (FNN) analysis. According to this rule, peptides with unfavored residues in certain positions that led to inefficient binding were eliminated from the random sequences. In the restricted, second random library (273 sequences), the yield of cell-adhesion peptides having an adhesion rate more than 1.5-fold to that of the basal array support was significantly high (31%) compared with the unrestricted random library (20%). In the restricted third library (50 sequences), the yield of cell-adhesion peptides increased to 84%. We conclude that a repeated cycle of experiments screening limited numbers of peptides can be assisted by the rule-extracting feature of FNN.
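
    The restriction step described above, in which sequences carrying unfavoured residues at certain positions are removed before the next assay round, can be illustrated with a small filter; the position rule below is a made-up placeholder, not the rule actually extracted by the FNN:

```python
# Sketch of restricting the next random peptide library using an extracted rule.
# The "unfavoured residue at position" rule is a hypothetical placeholder.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
UNFAVOURED = {0: set("DE"), 4: set("P")}   # e.g. no D/E at position 1, no P at position 5

def random_peptides(n, length=5):
    return ["".join(random.choice(AMINO_ACIDS) for _ in range(length)) for _ in range(n)]

def passes_rule(peptide):
    return all(peptide[pos] not in bad for pos, bad in UNFAVOURED.items())

random.seed(0)
library = random_peptides(643)                                    # unrestricted library
restricted = [p for p in random_peptides(2000) if passes_rule(p)][:273]
print(len(library), "unrestricted;", len(restricted), "restricted candidates for the next assay")
```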

  15. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
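
    The core of the data structure is the split between literal words, which carry bits verbatim, and fill words, which encode long runs of identical bit groups in a single word. A simplified encoder sketch for 32-bit words (31 payload bits per group; decoding, logical operations, and run-length overflow handling are omitted):

```python
# Simplified sketch of Word-Aligned Hybrid (WAH) style encoding with 32-bit words.
def wah_encode(bits):
    """bits: iterable of 0/1. Returns a list of 32-bit code words."""
    bits = list(bits)
    bits += [0] * (-len(bits) % 31)              # pad to a multiple of 31 (simplification)
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]

    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == 0 for b in g) or all(b == 1 for b in g):
            fill_bit, run = g[0], 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            # fill word: MSB=1, next bit = fill value, low 30 bits = run length in groups
            words.append((1 << 31) | (fill_bit << 30) | run)
            i += run
        else:
            # literal word: MSB=0, low 31 bits carry the group verbatim
            value = 0
            for b in g:
                value = (value << 1) | b
            words.append(value)
            i += 1
    return words

# tiny usage example: a sparse bitmap with a long run of zeros compresses to 3 words
bitmap = [0] * 200 + [1, 0, 1, 1] + [0] * 50
print([hex(w) for w in wah_encode(bitmap)])
```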

  16. Flight-Time Identification of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani

    1998-01-01

    This paper describes a flight test demonstration of a system for identification of the stability and handling qualities parameters of a helicopter-slung load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency domain analysis in the frequency range to 2 Hz, and focussed on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on the control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.

  17. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering

    PubMed Central

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate computing analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. As per results of the evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads. PMID:27482905

  18. Cloud based intelligent system for delivering health care as a service.

    PubMed

    Kaur, Pankaj Deep; Chana, Inderveer

    2014-01-01

    The promising potential of cloud computing and its convergence with technologies such as mobile computing, wireless networks, sensor technologies allows for creation and delivery of newer type of cloud services. In this paper, we advocate the use of cloud computing for the creation and management of cloud based health care services. As a representative case study, we design a Cloud Based Intelligent Health Care Service (CBIHCS) that performs real time monitoring of user health data for diagnosis of chronic illness such as diabetes. Advance body sensor components are utilized to gather user specific health data and store in cloud based storage repositories for subsequent analysis and classification. In addition, infrastructure level mechanisms are proposed to provide dynamic resource elasticity for CBIHCS. Experimental results demonstrate that classification accuracy of 92.59% is achieved with our prototype system and the predicted patterns of CPU usage offer better opportunities for adaptive resource elasticity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Numerical analysis of the effect of surface roughness on mechanical fields in polycrystalline aggregates

    NASA Astrophysics Data System (ADS)

    Guilhem, Yoann; Basseville, Stéphanie; Curtit, François; Stéphan, Jean-Michel; Cailletaud, Georges

    2018-06-01

    This paper is dedicated to the study of the influence of surface roughness on local stress and strain fields in polycrystalline aggregates. Finite element computations are performed with a crystal plasticity model on a 316L stainless steel polycrystalline material element with different roughness states on its free surface. The subsequent analysis of the plastic strain localization patterns shows that surface roughness strongly affects the plastic strain localization induced by crystallography. Nevertheless, this effect mainly takes place at the surface and vanishes under the first layer of grains, which implies the existence of a critical perturbed depth. A statistical analysis based on the plastic strain distribution obtained for different roughness levels provides a simple rule to define the size of the affected zone depending on the rough surface parameters.

  20. Gravity flow of powder in a lunar environment. Part 2: Analysis of flow initiation

    NASA Technical Reports Server (NTRS)

    Pariseau, W. G.

    1971-01-01

    A small displacement-small strain finite element technique utilizing the constant strain triangle and incremental constitutive equations for elasticplastic (media nonhardening and obeying a Coulomb yield condition) was applied to the analysis of gravity flow initiation. This was done in a V-shaped hopper containing a powder under lunar environmental conditions. Three methods of loading were examined. Of the three, the method of computing the initial state of stress in a filled hopper prior to drawdown, by adding material to the hopper layer by layer, was the best. Results of the analysis of a typical hopper problem show that the initial state of stress, the elastic moduli, and the strength parameters have an important influence on material response subsequent to the opening of the hopper outlet.

  1. Real-time development of data acquisition and analysis software for hands-on physiology education in neuroscience: G-PRIME.

    PubMed

    Lott, Gus K; Johnson, Bruce R; Bonow, Robert H; Land, Bruce R; Hoy, Ronald R

    2009-01-01

    We report on the real-time creation of an application for hands-on neurophysiology in an advanced undergraduate teaching laboratory. Enabled by the rapid software development tools included in the Matlab technical computing environment (The Mathworks, Natick, MA), a team, consisting of a neurophysiology educator and a biophysicist trained as an electrical engineer, interfaced to a course of approximately 15 students from engineering and biology backgrounds. The result is the powerful freeware data acquisition and analysis environment, "g-PRIME." The software was developed from week to week in response to curriculum demands, and student feedback. The program evolved from a simple software oscilloscope, enabling RC circuit analysis, to a suite of tools supporting analysis of neuronal excitability and synaptic transmission analysis in invertebrate model systems. The program has subsequently expanded in application to university courses, research, and high school projects in the US and abroad as free courseware.

  2. Thermal-Acoustic Analysis of a Metallic Integrated Thermal Protection System Structure

    NASA Technical Reports Server (NTRS)

    Behnke, Marlana N.; Sharma, Anurag; Przekop, Adam; Rizzi, Stephen A.

    2010-01-01

    A study is undertaken to investigate the response of a representative integrated thermal protection system structure under combined thermal, aerodynamic pressure, and acoustic loadings. A two-step procedure is offered and consists of a heat transfer analysis followed by a nonlinear dynamic analysis under a combined loading environment. Both analyses are carried out in physical degrees-of-freedom using implicit and explicit solution techniques available in the Abaqus commercial finite-element code. The initial study is conducted on a reduced-size structure to keep the computational effort contained while validating the procedure and exploring the effects of individual loadings. An analysis of a full size integrated thermal protection system structure, which is of ultimate interest, is subsequently presented. The procedure is demonstrated to be a viable approach for analysis of spacecraft and hypersonic vehicle structures under a typical mission cycle with combined loadings characterized by largely different time-scales.

  3. Computed tomography identifies patients at high risk for stroke after transient ischemic attack/nondisabling stroke: prospective, multicenter cohort study.

    PubMed

    Wasserman, Jason K; Perry, Jeffrey J; Sivilotti, Marco L A; Sutherland, Jane; Worster, Andrew; Émond, Marcel; Jin, Albert Y; Oczkowski, Wieslaw J; Sahlas, Demetrios J; Murray, Heather; MacKey, Ariane; Verreault, Steve; Wells, George A; Dowlatshahi, Dar; Stotts, Grant; Stiell, Ian G; Sharma, Mukul

    2015-01-01

    Ischemia on computed tomography (CT) is associated with subsequent stroke after transient ischemic attack. This study assessed CT findings of acute ischemia, chronic ischemia, or microangiopathy for predicting subsequent stroke after transient ischemic attack. This prospective cohort study enrolled patients with transient ischemic attack or nondisabling stroke who had CT scanning within 24 hours. Primary outcome was subsequent stroke within 90 days. Secondary outcomes were stroke at ≤2 or >2 days. CT findings were classified as ischemia present or absent and acute or chronic or microangiopathy. Analysis used Fisher exact test and multivariate logistic regression. A total of 2028 patients were included; 814 had ischemic changes on CT. Subsequent stroke rate was 3.4% at 90 days and 1.5% at ≤2 days. Stroke risk was greater if baseline CT showed acute ischemia alone (10.6%; P=0.002), acute+chronic ischemia (17.4%; P=0.007), acute ischemia+microangiopathy (17.6%; P=0.019), or acute+chronic ischemia+microangiopathy (25.0%; P=0.029). Logistic regression found acute ischemia alone (odds ratio [OR], 2.61; 95% confidence interval [CI], 1.22-5.57), acute+chronic ischemia (OR, 5.35; 95% CI, 1.71-16.70), acute ischemia+microangiopathy (OR, 4.90; 95% CI, 1.33-18.07), or acute+chronic ischemia+microangiopathy (OR, 8.04; 95% CI, 1.52-42.63) was associated with a greater risk at 90 days, whereas acute+chronic ischemia (OR, 10.78; 95% CI, 2.93-36.68), acute ischemia+microangiopathy (OR, 8.90; 95% CI, 1.90-41.60), and acute+chronic ischemia+microangiopathy (OR, 23.66; 95% CI, 4.34-129.03) had greater risk at ≤2 days. Only acute ischemia (OR, 2.70; 95% CI, 1.01-7.18; P=0.047) was associated with a greater risk at >2 days. In patients with transient ischemic attack/nondisabling stroke, CT evidence of acute ischemia alone or acute ischemia with chronic ischemia or microangiopathy was associated with increased subsequent stroke risk within 90 days. © 2014 American Heart Association, Inc.

  4. Deep Learning in Gastrointestinal Endoscopy.

    PubMed

    Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V

    2016-01-01

    Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including, (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator-the endoscopist-is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly, deep learning for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.

  5. Enhancing Next-Generation Sequencing-Guided Cancer Care Through Cognitive Computing.

    PubMed

    Patel, Nirali M; Michelini, Vanessa V; Snell, Jeff M; Balu, Saianand; Hoyle, Alan P; Parker, Joel S; Hayward, Michele C; Eberhard, David A; Salazar, Ashley H; McNeillie, Patrick; Xu, Jia; Huettner, Claudia S; Koyama, Takahiko; Utro, Filippo; Rhrissorrakrai, Kahn; Norel, Raquel; Bilal, Erhan; Royyuru, Ajay; Parida, Laxmi; Earp, H Shelton; Grilley-Olson, Juneko E; Hayes, D Neil; Harvey, Stephen J; Sharpless, Norman E; Kim, William Y

    2018-02-01

    Using next-generation sequencing (NGS) to guide cancer therapy has created challenges in analyzing and reporting large volumes of genomic data to patients and caregivers. Specifically, providing current, accurate information on newly approved therapies and open clinical trials requires considerable manual curation performed mainly by human "molecular tumor boards" (MTBs). The purpose of this study was to determine the utility of cognitive computing as performed by Watson for Genomics (WfG) compared with a human MTB. One thousand eighteen patient cases that previously underwent targeted exon sequencing at the University of North Carolina (UNC) and subsequent analysis by the UNCseq informatics pipeline and the UNC MTB between November 7, 2011, and May 12, 2015, were analyzed with WfG, a cognitive computing technology for genomic analysis. Using a WfG-curated actionable gene list, we identified additional genomic events of potential significance (not discovered by traditional MTB curation) in 323 (32%) patients. The majority of these additional genomic events were considered actionable based upon their ability to qualify patients for biomarker-selected clinical trials. Indeed, the opening of a relevant clinical trial within 1 month prior to WfG analysis provided the rationale for identification of a new actionable event in nearly a quarter of the 323 patients. This automated analysis took <3 minutes per case. These results demonstrate that the interpretation and actionability of somatic NGS results are evolving too rapidly to rely solely on human curation. Molecular tumor boards empowered by cognitive computing could potentially improve patient care by providing a rapid, comprehensive approach for data analysis and consideration of up-to-date availability of clinical trials. The results of this study demonstrate that the interpretation and actionability of somatic next-generation sequencing results are evolving too rapidly to rely solely on human curation. Molecular tumor boards empowered by cognitive computing can significantly improve patient care by providing a fast, cost-effective, and comprehensive approach for data analysis in the delivery of precision medicine. Patients and physicians who are considering enrollment in clinical trials may benefit from the support of such tools applied to genomic data. © AlphaMed Press 2017.

  6. An interactive web-based application for Comprehensive Analysis of RNAi-screen Data.

    PubMed

    Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B; Germain, Ronald N; Smith, Jennifer A; Simpson, Kaylene J; Martin, Scott E; Buehler, Eugen; Fraser, Iain D C

    2016-02-23

    RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight into microRNA (miRNA) activity based on siRNA seed enrichment.
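
    CARD performs these steps on its server, but the flavour of the workflow can be conveyed with a short, generic sketch: per-plate robust z-score normalization followed by a simple hit-selection threshold. The file and column names are hypothetical, and this is not CARD's implementation.

    ```python
    # Generic sketch of one step in a typical RNAi-screen workflow of the kind
    # CARD automates: per-plate robust z-score normalization and a simple hit
    # threshold. Not CARD's code; input columns are assumed.
    import numpy as np
    import pandas as pd

    def robust_z(scores: pd.Series) -> pd.Series:
        """Median/MAD-based z-score, less sensitive to outliers than mean/SD."""
        med = scores.median()
        mad = 1.4826 * (scores - med).abs().median()   # scaled MAD ~ SD for normal data
        return (scores - med) / mad

    screen = pd.read_csv("sirna_screen.csv")           # hypothetical: plate, gene, readout
    screen["z"] = screen.groupby("plate")["readout"].transform(robust_z)

    hits = screen[screen["z"] <= -2.0]                 # e.g. knockdowns that lower the signal
    print(hits.sort_values("z").head())
    ```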

  7. An interactive web-based application for Comprehensive Analysis of RNAi-screen Data

    PubMed Central

    Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B.; Germain, Ronald N.; Smith, Jennifer A.; Simpson, Kaylene J.; Martin, Scott E.; Beuhler, Eugen; Fraser, Iain D. C.

    2016-01-01

    RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight into microRNA (miRNA) activity based on siRNA seed enrichment. PMID:26902267

  8. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, Ajmal

    1987-01-01

    The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computations involved in analysis of structures. This widely accepted method, however, has certain limitations, and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method easily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with less computations, (2) overcome limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.

  9. Computational challenges in modeling gene regulatory events

    PubMed Central

    Pataskar, Abhijeet; Tiwari, Vijay K.

    2016-01-01

    ABSTRACT Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating “omics” data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology. PMID:27390891

  10. Computer Center CDC Libraries/NSRD (Subprograms).

    DTIC Science & Technology

    1984-06-01

    Fragmentary excerpt from the subprogram listing: VALUES Y - ARRAY OF CORRESPONDING Y-VALUES; N - NUMBER OF VALUES; CM REQUIRED: IOOB; ERROR MESSAGE 'L=XXXXX, X=X.XXXXXXX E+YY, X NOT MONOTONE STOP'; SELF ...PARAMETERS (SUBSEQUENT REPORTS MAY BE UNSOLICITED); PCRTP1 - REQUEST TERMINAL PARAMETERS (SUBSEQUENT REPORTS ONLY IN RESPONSE TO HOST REQUEST); DA - REQUEST

  11. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases demonstrate the effectiveness of the proposed TRPDA algorithm.

  12. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    PubMed Central

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe

    2015-01-01

    Purpose We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). Materials and Methods The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. Results VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). Conclusion It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method. PMID:25793178
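
    As a hedged sketch of what region labeling followed by morphological operations can look like on a binarized micro-CT volume, the snippet below keeps the largest connected component as the implant, builds a one-voxel shell around it, and counts bone voxels in that shell. The intensity thresholds, voxel size, and file name are illustrative assumptions, not the values used in the study.

    ```python
    # Generic region-labeling plus morphology sketch for a micro-CT volume.
    # Thresholds, voxel size and the input file are assumed for illustration.
    import numpy as np
    from scipy import ndimage

    volume = np.load("microct_volume.npy")               # hypothetical 3-D grey-level array
    implant_mask = volume > 180                           # rough intensity split (assumed)

    # keep the largest connected component as the implant region
    labels, n = ndimage.label(implant_mask)
    sizes = ndimage.sum(implant_mask, labels, index=range(1, n + 1))
    implant = labels == (np.argmax(sizes) + 1)

    # morphological closing to smooth the region, then a one-voxel shell
    implant = ndimage.binary_closing(implant, iterations=2)
    shell = ndimage.binary_dilation(implant) & ~implant

    bone_mask = (volume > 90) & (volume <= 180)           # assumed bone intensity band
    voxel_mm3 = 0.012 ** 3                                # assumed isotropic voxel size
    vbic_mm3 = np.logical_and(shell, bone_mask).sum() * voxel_mm3
    print(f"volumetric bone-implant contact ~ {vbic_mm3:.3f} mm^3")
    ```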

  13. Detailed Analysis of the Binding Mode of Vanilloids to Transient Receptor Potential Vanilloid Type I (TRPV1) by a Mutational and Computational Study

    PubMed Central

    Mori, Yoshikazu; Ogawa, Kazuo; Warabi, Eiji; Yamamoto, Masahiro; Hirokawa, Takatsugu

    2016-01-01

    Transient receptor potential vanilloid type 1 (TRPV1) is a non-selective cation channel and a multimodal sensor protein. Since the precise structure of TRPV1 was obtained by electron cryo-microscopy, the binding mode of representative agonists such as capsaicin and resiniferatoxin (RTX) has been extensively characterized; however, detailed information on the binding mode of other vanilloids remains lacking. In this study, mutational analysis of human TRPV1 was performed, and four agonists (capsaicin, RTX, [6]-shogaol and [6]-gingerol) were used to identify amino acid residues involved in ligand binding and/or modulation of proton sensitivity. The detailed binding mode of each ligand was then simulated by computational analysis. As a result, three amino acids (L518, F591 and L670) were newly identified as being involved in ligand binding and/or modulation of proton sensitivity. In addition, in silico docking simulation and a subsequent mutational study suggested that [6]-gingerol might bind to and activate TRPV1 in a unique manner. These results provide novel insights into the binding mode of various vanilloids to the channel and will be helpful in developing a TRPV1 modulator. PMID:27606946

  14. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
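
    A minimal sketch of the filter-then-threshold extrema encoding described above is given below, using a plain moving-average filter in place of the paper's optimized, eigenvalue-derived filter; the window length and prominence threshold are arbitrary illustrative choices.

    ```python
    # Sketch of extrema-feature extraction: smooth the series, find local
    # extrema, keep only those that deviate enough from their neighbourhood.
    # The filter and thresholds are simple stand-ins, not the optimized filter.
    import numpy as np
    from scipy.signal import argrelextrema

    def extrema_features(x: np.ndarray, win: int = 9, min_prom: float = 0.1):
        """Return (indices, values) of extrema that survive a prominence test."""
        kernel = np.ones(win) / win
        xs = np.convolve(x, kernel, mode="same")          # smoothing filter
        cand = np.sort(np.concatenate([argrelextrema(xs, np.greater)[0],
                                       argrelextrema(xs, np.less)[0]]))
        keep = [i for i in cand
                if abs(xs[i] - xs[max(0, i - win):i + win].mean()) > min_prom]
        idx = np.array(keep, dtype=int)
        return idx, xs[idx]

    t = np.linspace(0, 10, 1000)
    series = np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)
    idx, vals = extrema_features(series)
    print(len(idx), "robust extrema retained")
    ```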

  15. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Technical Reports Server (NTRS)

    Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew

    2017-01-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS-national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
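
    The core of fully constrained least squares unmixing is solving, per pixel, a least-squares problem whose abundances are non-negative and sum to one. A minimal sketch using the common trick of appending a heavily weighted sum-to-one row before a non-negative least-squares solve is shown below; the three-band endmember spectra are made-up placeholders, not WELD values.

    ```python
    # Sketch of fully constrained least squares (FCLS) unmixing for one pixel:
    # abundances are non-negative and sum to one. Endmember spectra are
    # placeholders for illustration only.
    import numpy as np
    from scipy.optimize import nnls

    def fcls(pixel: np.ndarray, endmembers: np.ndarray, delta: float = 1e3) -> np.ndarray:
        """endmembers: (bands, k) matrix; returns k abundances (>=0, summing ~1)."""
        bands, k = endmembers.shape
        # append a heavily weighted row enforcing the sum-to-one constraint
        A = np.vstack([endmembers, delta * np.ones((1, k))])
        b = np.append(pixel, delta)
        abundances, _ = nnls(A, b)
        return abundances

    E = np.array([[0.30, 0.05, 0.02],      # band 1 reflectance of S, V, D (placeholder)
                  [0.35, 0.45, 0.03],      # band 2
                  [0.40, 0.30, 0.04]])     # band 3
    pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
    print(fcls(pixel, E))                  # ~ [0.6, 0.3, 0.1]
    ```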

  16. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.

  17. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System.

    PubMed

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-06-27

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
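
    As a hedged illustration of the three-sample DFT refinement step described above (the paper's exact interpolation formula and its RobustICA and adaptive-linear-neuron stages are not reproduced), the sketch below estimates a tone's frequency from the DFT bins adjacent to the spectral peak using standard parabolic interpolation.

    ```python
    # Sketch: refine a frequency estimate from three DFT samples around the
    # spectral peak (parabolic interpolation). Illustrative only; not the
    # paper's specific estimator.
    import numpy as np

    def refine_frequency(signal: np.ndarray, fs: float) -> float:
        n = signal.size
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
        k = int(np.argmax(spectrum[1:-1])) + 1               # coarse peak bin
        a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
        delta = 0.5 * (a - c) / (a - 2 * b + c)              # fractional-bin offset
        return (k + delta) * fs / n

    fs = 5000.0
    t = np.arange(0, 0.2, 1 / fs)
    x = np.sin(2 * np.pi * 151.3 * t) + 0.3 * np.sin(2 * np.pi * 250.0 * t)
    print(refine_frequency(x, fs))                           # close to 151.3 Hz
    ```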

  18. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System

    PubMed Central

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-01-01

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946

  19. Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy

    NASA Astrophysics Data System (ADS)

    Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.

    2017-10-01

    Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.

  20. Error Estimate of the Ares I Vehicle Longitudinal Aerodynamic Characteristics Based on Turbulent Navier-Stokes Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2011-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project with the emphasis on the vehicle's last design cycle designated as the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory without including any propulsion effects. Subsequently, the established procedures were applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between prediction accuracy and grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.

  1. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    PubMed

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring complex and emergent behaviour of biological systems and yield accurate results, but at the cost of substantial computational power and time for subsequent analysis. On the other hand, equation-based models can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers.

  2. Using Student Writing and Lexical Analysis to Reveal Student Thinking about the Role of Stop Codons in the Central Dogma

    PubMed Central

    Prevost, Luanna B.; Smith, Michelle K.; Knight, Jennifer K.

    2016-01-01

    Previous work has shown that students have persistent difficulties in understanding how central dogma processes can be affected by a stop codon mutation. To explore these difficulties, we modified two multiple-choice questions from the Genetics Concept Assessment into three open-ended questions that asked students to write about how a stop codon mutation potentially impacts replication, transcription, and translation. We then used computer-assisted lexical analysis combined with human scoring to categorize student responses. The lexical analysis models showed high agreement with human scoring, demonstrating that this approach can be successfully used to analyze large numbers of student written responses. The results of this analysis show that students’ ideas about one process in the central dogma can affect their thinking about subsequent and previous processes, leading to mixed models of conceptual understanding. PMID:27909016

  3. Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Arena, Andrew S., Jr.

    1999-01-01

    This document in large part is based on the Masters Thesis of Cole Stephens. The document encompasses a variety of technical and practical issues involved when using the STARS codes for Aeroservoelastic analysis of vehicles. The document covers in great detail a number of technical issues and step-by-step details involved in the simulation of a system where aerodynamics, structures and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One of the significant advantages of the methodology detailed is that as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy, and subsequent high-speed simulations.

  4. CMSC-130 Introductory Computer Science, Lecture Notes

    DTIC Science & Technology

    1993-07-01

    Introductory Computer Science lecture notes are used in the classroom for teaching CMSC 130, an introductory computer science course, using the ... Fragmentary excerpt from the notes: unit testing; the syntax of subunits will be studied in the subsequent course (CMSC130, Lecture 11); top-down testing; data processor procedure. Reference used in the preparation of these lecture notes: Reference Manual for the Ada Programming Language, ANSI/MIL-STD

  5. Resetting Educational Technology Coursework for Pre-Service Teachers: A Computational Thinking Approach to the Development of Technological Pedagogical Content Knowledge (TPACK)

    ERIC Educational Resources Information Center

    Mouza, Chrystalla; Yang, Hui; Pan, Yi-Cheng; Ozden, Sule Yilmaz; Pollock, Lori

    2017-01-01

    This study presents the design of an educational technology course for pre-service teachers specific to incorporating computational thinking in K-8 classroom settings. Subsequently, it examines how participation in the course influences pre-service teachers' dispositions and knowledge of computational thinking concepts and the ways in which such…

  6. Computed microtomography visualization and quantification of mouse ischemic brain lesion by nonionic radio contrast agents

    PubMed Central

    Dobrivojević, Marina; Bohaček, Ivan; Erjavec, Igor; Gorup, Dunja; Gajović, Srećko

    2013-01-01

    Aim To explore the possibility of brain imaging by microcomputed tomography (microCT) using x-ray contrasting methods to visualize mouse brain ischemic lesions after middle cerebral artery occlusion (MCAO). Methods Isolated brains were immersed in ionic or nonionic radio contrast agent (RCA) for 5 days and subsequently scanned using microCT scanner. To verify whether ex-vivo microCT brain images can be used to characterize ischemic lesions, they were compared to Nissl stained serial histological sections of the same brains. To verify if brains immersed in RCA may be used afterwards for other methods, subsequent immunofluorescent labeling with anti-NeuN was performed. Results Nonionic RCA showed better gray to white matter contrast in the brain, and therefore was selected for further studies. MicroCT measurement of ischemic lesion size and cerebral edema significantly correlated with the values determined by Nissl staining (ischemic lesion size: P=0.0005; cerebral edema: P=0.0002). Brain immersion in nonionic RCA did not affect subsequent immunofluorescent analysis and NeuN immunoreactivity. Conclusion MicroCT method was proven to be suitable for delineation of the ischemic lesion from the non-infarcted tissue, and quantification of lesion volume and cerebral edema. PMID:23444240

  7. Computed microtomography visualization and quantification of mouse ischemic brain lesion by nonionic radio contrast agents.

    PubMed

    Dobrivojević, Marina; Bohaček, Ivan; Erjavec, Igor; Gorup, Dunja; Gajović, Srećko

    2013-02-01

    To explore the possibility of brain imaging by microcomputed tomography (microCT) using x-ray contrasting methods to visualize mouse brain ischemic lesions after middle cerebral artery occlusion (MCAO). Isolated brains were immersed in ionic or nonionic radio contrast agent (RCA) for 5 days and subsequently scanned using microCT scanner. To verify whether ex-vivo microCT brain images can be used to characterize ischemic lesions, they were compared to Nissl stained serial histological sections of the same brains. To verify if brains immersed in RCA may be used afterwards for other methods, subsequent immunofluorescent labeling with anti-NeuN was performed. Nonionic RCA showed better gray to white matter contrast in the brain, and therefore was selected for further studies. MicroCT measurement of ischemic lesion size and cerebral edema significantly correlated with the values determined by Nissl staining (ischemic lesion size: P=0.0005; cerebral edema: P=0.0002). Brain immersion in nonionic RCA did not affect subsequent immunofluorescent analysis and NeuN immunoreactivity. MicroCT method was proven to be suitable for delineation of the ischemic lesion from the non-infarcted tissue, and quantification of lesion volume and cerebral edema.

  8. The effects of home computer access and social capital on mathematics and science achievement among Asian-American high school students in the NELS:88 data set

    NASA Astrophysics Data System (ADS)

    Quigley, Mark Declan

    The purpose of this research was to examine specific environmental, educational, and demographic factors and their influence on mathematics and science achievement. In particular, the researcher ascertained the interconnections of home computer access and social capital among Asian American students and their effect on mathematics and science achievement. Coleman's theory on social capital and parental influence was used as a basis for the analysis of the data. Subjects for this study were the base-year students from the National Education Longitudinal Study of 1988 (NELS:88) and the subsequent follow-up survey data from 1990, 1992, and 1994. The approximate sample size for this study was 640 ethnic Asian students from the NELS:88 database. The analysis was a longitudinal study based on the Student and Parent Base Year responses and the Second Follow-up survey of 1992, when the subjects were in 12th grade. Achievement test results from the NELS:88 data were used to measure achievement in mathematics and science. The NELS:88 test battery was developed to measure both individual status and a student's growth in a number of achievement areas. The subjects' responses were analyzed by principal components factor analysis, weights, effect sizes, hierarchical regression analysis, and PLSPath Analysis. The results of this study indicated that prior ability in mathematics and science is a major influence on students' educational achievement. Findings from the study support the view that home computer access has a negative direct effect on mathematics and science achievement for both Asian American males and females. None of the social capital factors in the study had either a negative or positive direct effect on mathematics and science achievement, although some indirect effects were found. Suggestions were made toward increasing parental involvement in their children's academic endeavors. Computer access in the home should be treated much like television viewing and should be closely monitored by parents to promote educational uses.

  9. ATLAS, an integrated structural analysis and design system. Volume 1: ATLAS user's guide

    NASA Technical Reports Server (NTRS)

    Dreisbach, R. L. (Editor)

    1979-01-01

    Some of the many analytical capabilities provided by the ATLAS Version 4.0 System are described in the logical sequence in which model-definition data are prepared and the subsequent computer job is executed. The example data presented and the fundamental technical considerations that are highlighted can be used as guides during the problem-solving process. This guide does not describe the details of the ATLAS capabilities, but introduces the new user of ATLAS to the level at which the complete array of capabilities described in the ATLAS User's Manual can be exploited fully.

  10. Evaluation of standard radiation atmosphere aerosol models for a coastal environment

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Suttles, J. T.; Sebacher, D. I.; Fuller, W. H.; Lecroy, S. R.

    1986-01-01

    Calculations are compared with data from an experiment to evaluate the utility of standard radiation atmosphere (SRA) models for defining aerosol properties in atmospheric radiation computations. Initial calculations with only SRA aerosols in a four-layer atmospheric column simulation allowed a sensitivity study and the detection of spectral trends in optical depth, which differed from measurements. Subsequently, a more detailed analysis provided a revision in the stratospheric layer, which brought calculations in line with both optical depth and skylight radiance data. The simulation procedure allows determination of which atmospheric layers influence both downwelling and upwelling radiation spectra.

  11. Finite element formulation with embedded weak discontinuities for strain localization under dynamic conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.

    Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. We also used a material stability analysis to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.

  12. Finite element formulation with embedded weak discontinuities for strain localization under dynamic conditions

    DOE PAGES

    Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.; ...

    2017-09-13

    Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. We also used a material stability analysis to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.

  13. Structural tailoring of counter rotation propfans

    NASA Technical Reports Server (NTRS)

    Brown, Kenneth W.; Hopkins, D. A.

    1989-01-01

    The STAT program was designed for the optimization of single rotation, tractor propfan designs. New propfan designs, however, generally consist of two counter rotating propfan rotors. STAT is constructed to contain two levels of analysis. An interior loop, consisting of accurate, efficient approximate analyses, is used to perform the primary propfan optimization. Once an optimum design has been obtained, a series of refined analyses is conducted. These analyses, while too computationally expensive for the optimization loop, are of sufficient accuracy to validate the optimized design. Should the design prove to be unacceptable, provisions are made for recalibration of the approximate analyses and subsequent reoptimization.

  14. Modeling of Commercial Turbofan Engine With Ice Crystal Ingestion: Follow-On

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Coennen, Ryan

    2014-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. In a previous study, analysis of select PSL test data points helped to calibrate the engine icing computational tool to assess the risk of ice accretion. This current study is a continuation of that data analysis effort. The study focused on tracking the variations in wet bulb temperature and ice particle melt ratio through the engine core flow path. The results from this study have identified trends, while also identifying gaps in understanding as to how the local wet bulb temperature and melt ratio affects the risk of ice accretion and subsequent engine behavior.

  15. Modeling of Commercial Turbofan Engine with Ice Crystal Ingestion; Follow-On

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Coennen, Ryan

    2014-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was degraded engine performance, and one or more of the following: loss of thrust control (roll back), compressor surge or stall, and flameout of the combustor. As ice crystals are ingested into the fan and low pressure compression system, the increase in air temperature causes a portion of the ice crystals to melt. It is hypothesized that this allows the ice-water mixture to cover the metal surfaces of the compressor stationary components which leads to ice accretion through evaporative cooling. Ice accretion causes a blockage which subsequently results in the deterioration in performance of the compressor and engine. The focus of this research is to apply an engine icing computational tool to simulate the flow through a turbofan engine and assess the risk of ice accretion. The tool is comprised of an engine system thermodynamic cycle code, a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor flow path, without modeling the actual ice accretion. A commercial turbofan engine which has previously experienced icing events during operation in a high altitude ice crystal environment has been tested in the Propulsion Systems Laboratory (PSL) altitude test facility at NASA Glenn Research Center. The PSL has the capability to produce a continuous ice cloud which is ingested by the engine during operation over a range of altitude conditions. The PSL test results confirmed that there was ice accretion in the engine due to ice crystal ingestion, at the same simulated altitude operating conditions as experienced previously in flight. The computational tool was utilized to help guide a portion of the PSL testing, and was used to predict ice accretion could also occur at significantly lower altitudes. The predictions were qualitatively verified by subsequent testing of the engine in the PSL. In a previous study, analysis of select PSL test data points helped to calibrate the engine icing computational tool to assess the risk of ice accretion. This current study is a continuation of that data analysis effort. The study focused on tracking the variations in wet bulb temperature and ice particle melt ratio through the engine core flow path. The results from this study have identified trends, while also identifying gaps in understanding as to how the local wet bulb temperature and melt ratio affects the risk of ice accretion and subsequent engine behavior.

  16. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of FE numerical substructure. In this way, the task execution time (TET) decreases such that the scale of the numerical substructure model increases. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influences of time delay on the displacement response become obvious with the mass ratio increasing, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under the large time-step and large time delay.
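
    Of the explicit schemes compared above, the central difference method is the simplest to state. The sketch below is a generic textbook implementation for a small M x'' + C x' + K x = f(t) system with illustrative matrices; it is not the study's RTHS code and omits the sparse-storage and delay-compensation aspects.

    ```python
    # Generic central difference method (CDM) time stepping for a small
    # multi-degree-of-freedom system; matrices and load are toy values.
    import numpy as np

    def cdm(M, C, K, f, x0, v0, dt, steps):
        """Explicit central-difference stepping; returns the displacement history."""
        Minv_eff = np.linalg.inv(M / dt**2 + C / (2 * dt))
        a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ x0)
        x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0          # fictitious step x_{-1}
        x = x0.copy()
        history = [x0.copy()]
        for n in range(1, steps + 1):
            rhs = (f((n - 1) * dt)
                   - (K - 2 * M / dt**2) @ x
                   - (M / dt**2 - C / (2 * dt)) @ x_prev)
            x_prev, x = x, Minv_eff @ rhs
            history.append(x.copy())
        return np.array(history)

    M = np.diag([2.0, 1.0])
    K = np.array([[400.0, -200.0], [-200.0, 200.0]])
    C = 0.02 * K                                           # stiffness-proportional damping
    load = lambda t: np.array([0.0, 10.0 * np.sin(20 * t)])
    u = cdm(M, C, K, load, np.zeros(2), np.zeros(2), dt=1e-3, steps=2000)
    print(u.shape)                                         # (2001, 2)
    ```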

  17. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    PubMed

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
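
    A minimal sketch of the FANS idea follows: estimate class-conditional marginal densities per feature, use the log density ratios as augmented features, and feed them to a penalized logistic regression. The bandwidth and penalty are illustrative choices, the data are synthetic, and the paper's sample-splitting and ensembling details are omitted.

    ```python
    # Minimal FANS-style sketch: per-feature log density ratios as augmented
    # features, then L1-penalized logistic regression. Simplified; not the
    # authors' implementation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KernelDensity

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def density_ratio_features(X_fit, y_fit, X_eval, bandwidth=0.5):
        """Per-feature log f1(x)/f0(x) estimated with 1-D kernel densities."""
        out = np.empty_like(X_eval)
        for j in range(X_fit.shape[1]):
            kd1 = KernelDensity(bandwidth=bandwidth).fit(X_fit[y_fit == 1, j:j + 1])
            kd0 = KernelDensity(bandwidth=bandwidth).fit(X_fit[y_fit == 0, j:j + 1])
            out[:, j] = (kd1.score_samples(X_eval[:, j:j + 1])
                         - kd0.score_samples(X_eval[:, j:j + 1]))
        return out

    Z_tr = density_ratio_features(X_tr, y_tr, X_tr)
    Z_te = density_ratio_features(X_tr, y_tr, X_te)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(Z_tr, y_tr)
    print("test accuracy:", clf.score(Z_te, y_te))
    ```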

  18. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification

    PubMed Central

    Feng, Yang; Jiang, Jiancheng; Tong, Xin

    2015-01-01

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing. PMID:27185970

  19. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation while using less computer time (by two orders of magnitude), regardless of the probability distributions assumed for the uncertain model parameters.
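
    For orientation, the sketch below applies the conventional first-order method (linearization at central values, the baseline the advanced method improves on) to the Streeter-Phelps critical dissolved-oxygen deficit; the parameter means and coefficients of variation are illustrative, not those of the cited examples.

    ```python
    # Standard (central-value) first-order uncertainty propagation for the
    # Streeter-Phelps critical DO deficit. Parameter values are illustrative.
    import numpy as np

    def critical_deficit(kd, ka, L0, D0):
        """Streeter-Phelps critical dissolved-oxygen deficit (mg/L)."""
        tc = (1.0 / (ka - kd)) * np.log((ka / kd) * (1 - D0 * (ka - kd) / (kd * L0)))
        return (kd / ka) * L0 * np.exp(-kd * tc)

    theta = np.array([0.35, 0.80, 12.0, 1.0])       # kd, ka (1/day), L0, D0 (mg/L), assumed
    cv = np.array([0.20, 0.15, 0.10, 0.25])         # assumed coefficients of variation
    sigma = cv * theta

    # finite-difference gradient about the mean values (the linearization point)
    grad = np.empty_like(theta)
    for i in range(theta.size):
        h = 1e-5 * theta[i]
        up, dn = theta.copy(), theta.copy()
        up[i] += h
        dn[i] -= h
        grad[i] = (critical_deficit(*up) - critical_deficit(*dn)) / (2 * h)

    var_dc = np.sum((grad * sigma) ** 2)            # independent parameters assumed
    print("Dc mean ~", round(critical_deficit(*theta), 3),
          "mg/L, first-order SD ~", round(np.sqrt(var_dc), 3))
    ```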

  20. Active life expectancy from annual follow-up data with missing responses.

    PubMed

    Izmirlian, G; Brock, D; Ferrucci, L; Phillips, C

    2000-03-01

    Active life expectancy (ALE) at a given age is defined as the expected remaining years free of disability. In this study, three categories of health status are defined according to the ability to perform activities of daily living independently. Several studies have used increment-decrement life tables to estimate ALE, without error analysis, from only a baseline and one follow-up interview. The present work conducts an individual-level covariate analysis using a three-state Markov chain model for multiple follow-up data. Using a logistic link, the model estimates single-year transition probabilities among states of health, accounting for missing interviews. This approach has the advantages of smoothing subsequent estimates and increased power by using all follow-ups. We compute ALE and total life expectancy from these estimated single-year transition probabilities. Variance estimates are computed using the delta method. Data from the Iowa Established Population for the Epidemiologic Study of the Elderly are used to test the effects of smoking on ALE on all 5-year age groups past 65 years, controlling for sex and education.
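
    A hedged sketch of the accounting step is shown below: once single-year transition probabilities among the three states (active, disabled, dead) are available, ALE follows by propagating the state distribution forward and accumulating the expected years spent active. The transition matrices here are invented placeholders, not estimates from the Iowa EPESE data, and the covariate (logistic-link) modelling and delta-method variances are omitted.

    ```python
    # Toy three-state Markov chain: accumulate expected active and total years
    # from age-specific single-year transition matrices (made-up numbers).
    import numpy as np

    def ale_from_transitions(P_by_age, start_state=0, start_age=65, max_age=110):
        """Sum, over future years, the probability of being active (state 0)."""
        dist = np.zeros(3)
        dist[start_state] = 1.0                 # start active at start_age
        active_years = total_years = 0.0
        for age in range(start_age, max_age):
            P = P_by_age(age)                   # 3x3 row-stochastic matrix for this age
            active_years += dist[0]             # expected fraction of the year active
            total_years += dist[0] + dist[1]    # alive in either state
            dist = dist @ P
        return active_years, total_years

    def toy_transitions(age):
        q = min(0.002 * np.exp(0.09 * (age - 65)), 0.4)      # rising mortality (assumed)
        d = min(0.03 + 0.002 * (age - 65), 0.5)              # rising disability (assumed)
        return np.array([[1 - d - q, d,            q    ],   # active -> ...
                         [0.10,      0.90 - 2 * q, 2 * q],   # disabled -> ...
                         [0.0,       0.0,          1.0  ]])  # dead is absorbing

    ale, le = ale_from_transitions(toy_transitions)
    print(f"ALE ~ {ale:.1f} years, total LE ~ {le:.1f} years at age 65")
    ```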

  1. Simulating Vibrations in a Complex Loaded Structure

    NASA Technical Reports Server (NTRS)

    Cao, Tim T.

    2005-01-01

    The Dynamic Response Computation (DIRECT) computer program simulates vibrations induced in a complex structure by applied dynamic loads. Developed to enable rapid analysis of launch- and landing- induced vibrations and stresses in a space shuttle, DIRECT also can be used to analyze dynamic responses of other structures - for example, the response of a building to an earthquake, or the response of an oil-drilling platform and attached tanks to large ocean waves. For a space-shuttle simulation, the required input to DIRECT includes mathematical models of the space shuttle and its payloads, and a set of forcing functions that simulates launch and landing loads. DIRECT can accommodate multiple levels of payload attachment and substructure as well as nonlinear dynamic responses of structural interfaces. DIRECT combines the shuttle and payload models into a single structural model, to which the forcing functions are then applied. The resulting equations of motion are reduced to an optimum set and decoupled into a unique format for simulating dynamics. During the simulation, maximum vibrations, loads, and stresses are monitored and recorded for subsequent analysis to identify structural deficiencies in the shuttle and/or payloads.

  2. Thermal analysis of the intact mandibular premolar: a finite element analysis.

    PubMed

    Oskui, I Z; Ashtiani, M N; Hashemi, A; Jafarzadeh, H

    2013-09-01

    To obtain temperature distribution data through human teeth focusing on the pulp-dentine junction (PDJ). A three-dimensional tooth model was reconstructed using computer-aided design software from computed tomographic images. Subsequently, temperature distribution was numerically determined through the tooth for three different heat loads. Loading type I was equivalent to a 60° C mouth temperature for 1 s. Loading type II started with a 60° C mouth temperature, decreasing linearly to 37° C over 10 s. Loading type III repeated the pattern of type II in three consecutive cycles, with a 5 s resting time between cycles. The maximum temperatures of the pulp were 37.9° C, 39.0° C and 41.2° C for loading types I, II, and III, respectively. The largest temperature rise occurred with the cyclic loading, that is, type III. For the heat loads considered, the predicted peak temperatures at the PDJ were less than the reported temperature thresholds of irreversible pulpal damage. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  3. Anomaly Detection in Large Sets of High-Dimensional Symbol Sequences

    NASA Technical Reports Server (NTRS)

    Budalakoti, Suratna; Srivastava, Ashok N.; Akella, Ram; Turkov, Eugene

    2006-01-01

    This paper addresses the problem of detecting and describing anomalies in large sets of high-dimensional symbol sequences. The approach taken uses unsupervised clustering of sequences using the normalized longest common subsequence (LCS) as a similarity measure, followed by detailed analysis of outliers to detect anomalies. As the LCS measure is expensive to compute, the first part of the paper discusses existing algorithms, such as the Hunt-Szymanski algorithm, that have low time-complexity. We then discuss why these algorithms often do not work well in practice and present a new hybrid algorithm for computing the LCS that, in our tests, outperforms the Hunt-Szymanski algorithm by a factor of five. The second part of the paper presents new algorithms for outlier analysis that provide comprehensible indicators as to why a particular sequence was deemed to be an outlier. The algorithms provide a coherent description to an analyst of the anomalies in the sequence, compared to more normal sequences. The algorithms we present are general and domain-independent, so we discuss applications in related areas such as anomaly detection.
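
    The similarity measure at the heart of the clustering step is the normalized LCS. A plain dynamic-programming implementation (the O(mn) baseline, not the faster hybrid algorithm the paper develops) is sketched below.

    ```python
    # Baseline LCS dynamic program and the normalized LCS similarity used for
    # sequence clustering; not the paper's optimized hybrid algorithm.
    import numpy as np

    def lcs_length(a, b):
        """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
        dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i, j] = dp[i - 1, j - 1] + 1 if x == y else max(dp[i - 1, j], dp[i, j - 1])
        return int(dp[len(a), len(b)])

    def normalized_lcs(a, b):
        """Similarity in [0, 1]; larger means the sequences share a longer subsequence."""
        return lcs_length(a, b) / np.sqrt(len(a) * len(b))

    s1 = list("ABCFBDGAC")
    s2 = list("ABFCBDAGC")
    print(lcs_length(s1, s2), round(normalized_lcs(s1, s2), 3))
    ```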

  4. Science Support for Space-Based Droplet Combustion: Drop Tower Experiments and Detailed Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Marchese, Anthony J.; Dryer, Frederick L.

    1997-01-01

    This program supports the engineering design, data analysis, and data interpretation requirements for the study of initially single component, spherically symmetric, isolated droplet combustion studies. Experimental emphasis is on the study of simple alcohols (methanol, ethanol) and alkanes (n-heptane, n-decane) as fuels with time dependent measurements of drop size, flame-stand-off, liquid-phase composition, and finally, extinction. Experiments have included bench-scale studies at Princeton, studies in the 2.2 and 5.18 drop towers at NASA-LeRC, and both the Fiber Supported Droplet Combustion (FSDC-1, FSDC-2) and the free Droplet Combustion Experiment (DCE) studies aboard the shuttle. Test matrix and data interpretation are performed through spherically-symmetric, time-dependent numerical computations which embody detailed sub-models for physical and chemical processes. The computed burning rate, flame stand-off, and extinction diameter are compared with the respective measurements for each individual experiment. In particular, the data from FSDC-1 and subsequent space-based experiments provide the opportunity to compare all three types of data simultaneously with the computed parameters. Recent numerical efforts are extending the computational tools to consider time dependent, axisymmetric 2-dimensional reactive flow situations.

  5. Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions

    PubMed Central

    Kanjirathumkal, Cibile K.; Mohammed, Sameer S.

    2014-01-01

    Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely, coefficient of variation and amount of fade, are derived using the computed moments. These metrics quantify the possible variations in the channel gain and signal to noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models for scattering density variations of the environment, experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels having distinct distribution in each hop are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the impact of the accuracy of the approaches. PMID:24701175
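
    As a rough illustration of the cascaded-Weibull moment computation described above (using the closed-form Weibull moments directly rather than the paper's Mellin-transform derivation), the sketch below multiplies per-hop moments and forms the coefficient of variation; the scale and shape values are placeholders, not the paper's parameters.

    ```python
    from math import gamma, sqrt

    def weibull_moment(n, scale, shape):
        """n-th moment of a Weibull(scale, shape) envelope: scale**n * Gamma(1 + n/shape)."""
        return scale ** n * gamma(1.0 + n / shape)

    def cascade_moment(n, scales, shapes):
        """n-th moment of the product of independent per-hop Weibull envelopes."""
        m = 1.0
        for lam, k in zip(scales, shapes):
            m *= weibull_moment(n, lam, k)
        return m

    def coefficient_of_variation(scales, shapes):
        m1 = cascade_moment(1, scales, shapes)
        m2 = cascade_moment(2, scales, shapes)
        return sqrt(m2 - m1 ** 2) / m1

    # Two-hop heterogeneous example: distinct fading severity (shape) in each hop.
    print(coefficient_of_variation(scales=[1.0, 1.0], shapes=[2.0, 3.5]))
    ```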

  6. Transient Three-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2004-01-01

    Three-dimensional numerical investigations on the nozzle start-up side load physics were performed. The objective of this study is to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation and a simulated inlet condition based on a system calculation. Finite-rate chemistry was used throughout the study so that the combustion effect is always included, and the effect of wall cooling on side load physics is studied. The side load physics captured include the afterburning wave, transition from free-shock to restricted-shock separation, and lip Lambda shock oscillation. With the adiabatic nozzle, free-shock separation reappears after the transition from free-shock separation to restricted-shock separation, and the subsequent flow pattern of the simultaneous free-shock and restricted-shock separations creates a very asymmetric Mach disk flow. With the cooled nozzle, the more symmetric restricted-shock separation persisted throughout the start-up transient after the transition, leading to an overall lower side load than that of the adiabatic nozzle. The tepee structures corresponding to the maximum side load were addressed.

  7. Outcomes of non-invasive diagnostic modalities for the detection of coronary artery disease: network meta-analysis of diagnostic randomised controlled trials

    PubMed Central

    Siontis, George CM; Mavridis, Dimitris; Greenwood, John P; Coles, Bernadette; Nikolakopoulou, Adriani; Jüni, Peter; Salanti, Georgia

    2018-01-01

    Abstract Objective To evaluate differences in downstream testing, coronary revascularisation, and clinical outcomes following non-invasive diagnostic modalities used to detect coronary artery disease. Design Systematic review and network meta-analysis. Data sources Medline, Medline in process, Embase, Cochrane Library for clinical trials, PubMed, Web of Science, SCOPUS, WHO International Clinical Trials Registry Platform, and Clinicaltrials.gov. Eligibility criteria for selecting studies Diagnostic randomised controlled trials comparing non-invasive diagnostic modalities in patients presenting with symptoms suggestive of low risk acute coronary syndrome or stable coronary artery disease. Data synthesis A random effects network meta-analysis synthesised available evidence from trials evaluating the effect of non-invasive diagnostic modalities on downstream testing and patient oriented outcomes in patients with suspected coronary artery disease. Modalities included exercise electrocardiograms, stress echocardiography, single photon emission computed tomography-myocardial perfusion imaging, real time myocardial contrast echocardiography, coronary computed tomographic angiography, and cardiovascular magnetic resonance. Unpublished outcome data were obtained from 11 trials. Results 18 trials of patients with low risk acute coronary syndrome (n=11 329) and 12 trials of those with suspected stable coronary artery disease (n=22 062) were included. Among patients with low risk acute coronary syndrome, stress echocardiography, cardiovascular magnetic resonance, and exercise electrocardiograms resulted in fewer invasive referrals for coronary angiography than coronary computed tomographic angiography (odds ratio 0.28 (95% confidence interval 0.14 to 0.57), 0.32 (0.15 to 0.71), and 0.53 (0.28 to 1.00), respectively). There was no effect on the subsequent risk of myocardial infarction, but estimates were imprecise. Heterogeneity and inconsistency were low. In patients with suspected stable coronary artery disease, an initial diagnostic strategy of stress echocardiography or single photon emission computed tomography-myocardial perfusion imaging resulted in fewer downstream tests than coronary computed tomographic angiography (0.24 (0.08 to 0.74) and 0.57 (0.37 to 0.87), respectively). However, exercise electrocardiograms yielded the highest downstream testing rate. Estimates for death and myocardial infarction were imprecise without clear discrimination between strategies. Conclusions For patients with low risk acute coronary syndrome, an initial diagnostic strategy of stress echocardiography or cardiovascular magnetic resonance is associated with fewer referrals for invasive coronary angiography and revascularisation procedures than non-invasive anatomical testing, without apparent impact on the future risk of myocardial infarction. For suspected stable coronary artery disease, there was no clear discrimination between diagnostic strategies regarding the subsequent need for invasive coronary angiography, and differences in the risk of myocardial infarction cannot be ruled out. Systematic review registration PROSPERO registry no CRD42016049442. PMID:29467161

  8. Magnetomotive laser speckle imaging

    PubMed Central

    Kim, Jeehyun; Oh, Junghwan; Choi, Bernard

    2010-01-01

    Laser speckle imaging (LSI) involves analysis of reflectance images collected during coherent optical excitation of an object to compute wide-field maps of tissue blood flow. An intrinsic limitation of LSI for resolving microvascular architecture is that its signal depends on relative motion of interrogated red blood cells. Hence, with LSI, small-diameter arterioles, venules, and capillaries are difficult to resolve due to the slow flow speeds associated with such vasculature. Furthermore, LSI characterization of subsurface blood flow is subject to blurring due to scattering, further limiting the ability of LSI to resolve or quantify blood flow in small vessels. Here, we show that magnetic activation of superparamagnetic iron oxide (SPIO) nanoparticles modulates the speckle flow index (SFI) values estimated from speckle contrast analysis of collected images. With application of an ac magnetic field to a solution of stagnant SPIO particles, an apparent increase in SFI is induced. Furthermore, with application of a focused dc magnetic field, a focal decrease in SFI values is induced. Magnetomotive LSI may enable wide-field mapping of suspicious tissue regions, enabling subsequent high-resolution optical interrogation of these regions. Similarly, subsequent photoactivation of intravascular SPIO nanoparticles could then be performed to induce selective photothermal destruction of unwanted vasculature. PMID:20210436

  9. Automatic control system for uniformly paving iron ore pellets

    NASA Astrophysics Data System (ADS)

    Wang, Bowen; Qian, Xiaolong

    2014-05-01

    In the iron and steelmaking industry, iron ore pellet quality is crucial to end-product properties, manufacturing costs and waste emissions. Uniform pellet pavements on the grate machine are a fundamental prerequisite for even heat transfer and pellet induration, which in turn influence the performance of the subsequent metallurgical processes. This article presents an automatic control system for uniformly paving green pellets on the grate, via a mechanism consisting mainly of a mechanical linkage, a swinging belt, a conveyance belt and a grate. Mechanism analysis illustrates that uniform pellet pavements demand that the frontend of the swinging belt oscillate at a constant angular velocity. Subsequently, kinetic models are formulated to relate oscillatory movements of the swinging belt's frontend to rotations of a crank link driven by a motor. On the basis of kinetic analysis of the pellet feeding mechanism, a cubic B-spline model is built for numerically computing the discrete frequencies to be modulated during a motor rotation. Subsequently, the pellet feeding control system is presented in terms of its constituent hardware and software components and their functional relationships. Finally, pellet feeding experiments are carried out to demonstrate that the control system is effective, reliable and superior to conventional methods.
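
    The following sketch only illustrates the flavor of the B-spline step: a cubic spline is fitted to a hypothetical crank-angle-to-frequency profile and sampled to obtain discrete set-points for the motor drive. The profile, sample counts and units are assumptions, not the paper's kinetic model.

    ```python
    import numpy as np
    from scipy.interpolate import splrep, splev

    # Hypothetical required motor frequency at a few crank angles (rad) over one rotation.
    crank_angle = np.linspace(0.0, 2.0 * np.pi, 9)
    required_freq_hz = 25.0 + 5.0 * np.sin(crank_angle)      # placeholder profile

    tck = splrep(crank_angle, required_freq_hz, k=3)         # cubic B-spline fit
    setpoint_angles = np.linspace(0.0, 2.0 * np.pi, 64)
    discrete_setpoints = splev(setpoint_angles, tck)         # frequencies to modulate

    print(np.round(discrete_setpoints[:4], 2))
    ```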

  10. Two dimensional kinetic analysis of electrostatic harmonic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca-Pongutá, E. C.; Ziebell, L. F.; Gaelzer, R.

    2016-06-15

    Electrostatic harmonic Langmuir waves are virtual modes excited in weakly turbulent plasmas, first observed in early laboratory beam-plasma experiments as well as in rocket-borne active experiments in space. However, their unequivocal presence was confirmed through computer simulated experiments and subsequently theoretically explained. The peculiarity of harmonic Langmuir waves is that while their existence requires a nonlinear response, their excitation mechanism and subsequent early time evolution are governed by an essentially linear process. One of the unresolved theoretical issues regards the role of the nonlinear wave-particle interaction process over longer evolution time periods. Another outstanding issue is that existing theories for these modes are limited to one-dimensional space. The present paper carries out a two-dimensional theoretical analysis of fundamental and (first) harmonic Langmuir waves for the first time. The result shows that the harmonic Langmuir wave is essentially governed by a (quasi)linear process and that nonlinear wave-particle interaction plays no significant role in the time evolution of the wave spectrum. The numerical solutions of the two-dimensional wave spectra for fundamental and harmonic Langmuir waves are also found to be consistent with those obtained by the direct particle-in-cell simulation method reported in the literature.

  11. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial.

    PubMed

    Kron, Frederick W; Fetters, Michael D; Scerbo, Mark W; White, Casey B; Lypson, Monica L; Padilla, Miguel A; Gliva-McConvey, Gayle A; Belfore, Lee A; West, Temple; Wallace, Amelia M; Guetterman, Timothy C; Schleicher, Lauren S; Kennedy, Rebecca A; Mangrulkar, Rajesh S; Cleary, James F; Marsella, Stacy C; Becker, Daniel M

    2017-04-01

    To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group's experiences and learning preferences. A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR's intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. MPathic-VR's virtual human simulation offers an effective and engaging means of advanced communication training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial

    PubMed Central

    Kron, Frederick W.; Fetters, Michael D.; Scerbo, Mark W.; White, Casey B.; Lypson, Monica L.; Padilla, Miguel A.; Gliva-McConvey, Gayle A.; Belfore, Lee A.; West, Temple; Wallace, Amelia M.; Guetterman, Timothy C.; Schleicher, Lauren S.; Kennedy, Rebecca A.; Mangrulkar, Rajesh S.; Cleary, James F.; Marsella, Stacy C.; Becker, Daniel M.

    2016-01-01

    Objectives To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group’s experiences and learning preferences. Methods A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR’s intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. Results MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. Conclusions MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. Practice Implications MPathic-VR’s virtual human simulation offers an effective and engaging means of advanced communication training. PMID:27939846

  13. Summary and Comparison of Multiphase Streambed Scour Analysis at Selected Bridge Sites in Alaska

    USGS Publications Warehouse

    Conaway, Jeffrey S.

    2004-01-01

    The U.S. Geological Survey and the Alaska Department of Transportation and Public Facilities undertook a cooperative multiphase study of streambed scour at selected bridges in Alaska beginning in 1994. Of the 325 bridges analyzed for susceptibility to scour in the preliminary phase, 54 bridges were selected for a more intensive analysis that included site investigations. Cross-section geometry and hydraulic properties for each site in this study were determined from field surveys and bridge plans. Water-surface profiles were calculated for the 100- and 500-year floods using the Hydrologic Engineering Center's River Analysis System and scour depths were calculated using methods recommended by the Federal Highway Administration. Computed contraction-scour depths for the 100- and 500-year recurrence-interval discharges exceeded 5 feet at six bridges, and pier-scour depths exceeded 10 feet at 24 bridges. Complex pier-scour computations were made at 10 locations where the computed contraction-scour depths would expose pier footings. Pressure scour was evaluated at three bridges where the modeled flood water-surface elevations intersected the bridge structure. Site investigation at the 54 scour-critical bridges was used to evaluate the effectiveness of the preliminary scour analysis. Values for channel-flow angle of attack and approach-channel width were estimated from bridge survey plans for the preliminary study and were measured during a site investigation for this study. These two variables account for changes in scour depths between the preliminary analysis and subsequent reanalysis for most sites. Site investigation is needed for best estimates of scour at bridges with survey plans that indicate a channel-flow angle of attack and for locations where survey plans did not include sufficient channel geometry upstream of the bridge.

  14. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    1996-01-01

    We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant coefficient ODE's which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same. There are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute in an implicit form of the tangent curve. Normally the solution of a system of ODE's is given in parametric form, and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
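
    A small sketch of the closed-form idea described above (not the authors' code): with linear variation of the field over a cell, v(x) = A x + b, the tangent curve through a seed point can be evaluated by exponentiating an augmented matrix; the rotational field used here is only a test case.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def tangent_curve_point(A, b, x0, t):
        """Point at parameter t on the tangent curve of v(x) = A x + b starting at x0."""
        n = len(x0)
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = A                     # augmented system handles the affine term b
        M[:n, n] = b
        return (expm(M * t) @ np.append(x0, 1.0))[:n]

    A = np.array([[0.0, -1.0], [1.0, 0.0]])   # purely imaginary eigenvalues: circular orbits
    b = np.array([0.0, 0.0])
    print(tangent_curve_point(A, b, np.array([1.0, 0.0]), np.pi / 2))  # ≈ [0, 1]
    ```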

  15. Applicability of initial optimal maternal and fetal electrocardiogram combination vectors to subsequent recordings

    NASA Astrophysics Data System (ADS)

    Yan, Hua-Wen; Huang, Xiao-Lin; Zhao, Ying; Si, Jun-Feng; Liu, Tie-Bing; Liu, Hong-Xing

    2014-11-01

    A series of experiments are conducted to confirm whether the vectors calculated for an early section of a continuous non-invasive fetal electrocardiogram (fECG) recording can be directly applied to subsequent sections in order to reduce the computation required for real-time monitoring. Our results suggest that it is generally feasible to apply the initial optimal maternal and fetal ECG combination vectors to extract the fECG and maternal ECG in subsequent recorded sections.
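
    A minimal sketch of the reuse idea being tested: a combination (weight) vector obtained on an initial segment of the multichannel abdominal recording is applied unchanged to a later segment. The weights and random signals below are placeholders; the paper's optimisation of the vectors is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    initial_segment = rng.standard_normal((4, 2000))   # 4 abdominal channels, early section
    later_segment = rng.standard_normal((4, 2000))     # channels recorded later

    # Stand-in for the optimal fetal combination vector computed on the initial section.
    w_fetal = np.array([0.8, -0.4, 0.3, -0.2])
    w_fetal /= np.linalg.norm(w_fetal)

    fecg_initial = w_fetal @ initial_segment           # extraction on the early data
    fecg_later = w_fetal @ later_segment               # same weights reused, no re-optimisation
    print(fecg_initial.shape, fecg_later.shape)
    ```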

  16. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

  17. Creating CAD designs and performing their subsequent analysis using opensource solutions in Python

    NASA Astrophysics Data System (ADS)

    Iakushkin, Oleg O.; Sedova, Olga S.

    2018-01-01

    The paper discusses the concept of a system that encapsulates the transition from geometry building to strength tests. The solution we propose views the engineer as a programmer who is capable of coding the procedure for working with the model, i.e., to outline the necessary transformations and create cases for boundary conditions. We propose a prototype of such a system. In our work, we used: the Python programming language to create the program; the Jupyter framework to create a single workspace visualization; the pythonOCC library to implement CAD; the FeniCS library to implement FEM; and the GMSH and VTK utilities. The prototype is launched on a platform which is a dynamically expandable multi-tenant cloud service providing users with all computing resources on demand. However, the system may be deployed locally for prototyping or work that does not involve resource-intensive computing. To make it possible, we used containerization, isolating the system in a Docker container.

  18. The role of mobile computed tomography in mass fatality incidents.

    PubMed

    Rutty, Guy N; Robinson, Claire E; BouHaidar, Ralph; Jeffery, Amanda J; Morgan, Bruno

    2007-11-01

    Mobile multi-detector computed tomography (MDCT) scanners are potentially available to temporary mortuaries and can be operational within 20 min of arrival. We describe, to our knowledge, the first use of mobile MDCT for a mass fatality incident. A mobile MDCT scanner attended the disaster mortuary after a five vehicle road traffic incident. Five out of six bodies were successfully imaged by MDCT in c. 15 min per body. Subsequent full radiological analysis took c. 1 h per case. The results were compared to the autopsy examinations. We discuss the advantages and disadvantages of imaging with mobile MDCT in relation to mass fatality work, illustrating the body pathway process, and its role in the identification of the pathology, personal effects, and health and safety hazards. We propose that the adoption of a single modality of mobile MDCT could replace the current use of multiple radiological sources within a mass fatality mortuary.

  19. SEAODV: A Security Enhanced AODV Routing Protocol for Wireless Mesh Networks

    NASA Astrophysics Data System (ADS)

    Li, Celia; Wang, Zhuang; Yang, Cungang

    In this paper, we propose a Security Enhanced AODV routing protocol (SEAODV) for wireless mesh networks (WMN). SEAODV employs Blom's key pre-distribution scheme to compute the pairwise transient key (PTK) through the flooding of enhanced HELLO message and subsequently uses the established PTK to distribute the group transient key (GTK). PTK and GTK authenticate unicast and broadcast routing messages respectively. In WMN, a unique PTK is shared by each pair of nodes, while GTK is shared secretly between the node and all its one-hop neighbours. A message authentication code (MAC) is attached as the extension to the original AODV routing message to guarantee the message's authenticity and integrity in a hop-by-hop fashion. Security analysis and performance evaluation show that SEAODV is more effective in preventing identified routing attacks and outperforms ARAN and SAODV in terms of computation cost and route acquisition latency.
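
    A hedged illustration of the hop-by-hop authentication idea only: an HMAC over the routing message is appended as the extension and verified with the shared key. Blom's key pre-distribution and the AODV message formats are not reproduced; the key below is a stand-in for the established PTK (or GTK for broadcast).

    ```python
    import hmac, hashlib, os

    ptk = os.urandom(32)                       # placeholder for the Blom-derived pairwise key

    def protect(route_msg: bytes, key: bytes) -> bytes:
        """Append a MAC extension so the next hop can check authenticity and integrity."""
        tag = hmac.new(key, route_msg, hashlib.sha256).digest()
        return route_msg + tag

    def verify(protected: bytes, key: bytes) -> bool:
        msg, tag = protected[:-32], protected[-32:]
        return hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

    rreq = b"RREQ|src=10.0.0.1|dst=10.0.0.9|seq=42"   # illustrative message content
    assert verify(protect(rreq, ptk), ptk)
    ```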

  20. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  1. 32 CFR Appendix A to Part 292 - Uniform Agency Fees for Search and Duplication Under the Freedom of Information Act (as Amended)

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... a. above) for the computer/operator/programmer determining how to conduct and subsequently executing the search will be recorded as part of the computer search. c. Actual time spent travelling to a...

  2. Epileptic Seizure Forewarning by Nonlinear Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, L.M.

    2002-04-19

    This report describes work that was performed under a Cooperative Research and Development Agreement (CRADA) between UT-Battelle, LLC (Contractor) and a commercial participant, VIASYS Healthcare Inc. (formerly Nicolet Biomedical, Inc.). The Contractor has patented technology that forewarns of impending epileptic events via scalp electroencephalograph (EEG) data and successfully demonstrated this technology on 20 datasets from the Participant under pre-CRADA effort. This CRADA sought to bridge the gap between the Contractor's existing research-class software and a prototype medical device for subsequent commercialization by the Participant. The objectives of this CRADA were (1) development of a combination of existing computer hardware and Contractor-patented software into a clinical process for warning of impending epileptic events in human patients, and (2) validation of the epilepsy warning methodology. This work modified the ORNL research-class FORTRAN for forewarning to run under a graphical user interface (GUI). The GUI-FORTRAN software subsequently was installed on desktop computers at five epilepsy monitoring units. The forewarning prototypes have run for more than one year without any hardware or software failures. This work also reported extensive analysis of model and EEG datasets to demonstrate the usefulness of the methodology. However, the Participant recently chose to stop work on the CRADA, due to a change in business priorities. Much work remains to convert the technology into a commercial clinical or ambulatory device for patient use, as discussed in App. H.

  3. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  4. Computer-assisted sperm analysis of fresh epididymal cat spermatozoa and the impact of cool storage (4 degrees C) on sperm quality.

    PubMed

    Filliers, M; Rijsselaere, T; Bossaert, P; De Causmaecker, V; Dewulf, J; Pope, C E; Van Soom, A

    2008-12-01

    Epididymal cat sperm is commonly used for in vitro fertilization. Because of the high variability in preparation protocols and methods of evaluation, sperm quality may vary considerably between experiments and laboratories. The aims of the present study were (1) to describe an epididymal sperm preparation protocol to produce clean, highly motile samples using density gradient centrifugation, (2) to provide reference values of computer-assisted semen analysis (CASA) parameters of fresh epididymal cat sperm after density gradient centrifugation and (3) to investigate the effect of cool storage on various spermatozoa characteristics. After slicing the epididymides, viable and motile sperm cells were isolated using Percoll centrifugation. Sperm motility parameters were subsequently assessed using CASA in experiment 1. In experiment 2, fresh (day 0) sperm samples were evaluated for motility parameters (HTR) and stained for assessment of acrosomal status (FITC-PSA), morphology (eosin/nigrosin (E/N)), membrane integrity (E/N and SYBR®14-PI) and DNA fragmentation (TUNEL). After addition of a Tris-glucose-citrate diluent containing 20% egg yolk, samples were cooled to 4 degrees C and reassessed on d1, d3, d5, d7 and d10. Cool storage impaired most motility and velocity parameters: MOT, PMOT, VAP, VSL, VCL, BCF, RAPID and the percentage of normal spermatozoa showed a decrease over time (P<0.05) as compared to fresh samples. In contrast, STR, ALH, membrane integrity, DNA fragmentation and the percentage of acrosome intact spermatozoa were not affected by cool storage. However, the influence of cool storage of cat spermatozoa on subsequent in vitro embryo development and quality after IVF requires further investigation.

  5. The multimedia computer for low-literacy patient education: a pilot project of cancer risk perceptions.

    PubMed

    Wofford, J L; Currin, D; Michielutte, R; Wofford, M M

    2001-04-20

    Inadequate reading literacy is a major barrier to better educating patients. Despite its high prevalence, practical solutions for detecting and overcoming low literacy in a busy clinical setting remain elusive. In exploring the potential role for the multimedia computer in improving office-based patient education, we compared the accuracy of information captured from audio-computer interviewing of patients with that obtained from subsequent verbal questioning. The setting was an adult medicine clinic in an urban community health center, and participants were a convenience sample of patients awaiting clinic appointments (n = 59). Exclusion criteria included obvious psychoneurologic impairment or primary language other than English. The intervention was a multimedia computer presentation that used audio-computer interviewing with localized imagery and voices to elicit responses to 4 questions on prior computer use and cancer risk perceptions. Three patients refused or were unable to interact with the computer at all, and 3 patients required restarting the presentation from the beginning but ultimately completed the computerized survey. Of the 51 evaluable patients (72.5% African-American, 66.7% female, mean age 47.5 [+/- 18.1]), the mean time in the computer presentation was significantly longer with older age and with no prior computer use but did not differ by gender or race. Despite a high proportion of no prior computer use (60.8%), there was a high rate of agreement (88.7% overall) between audio-computer interviewing and subsequent verbal questioning. Audio-computer interviewing is feasible in this urban community health center. The computer offers a partial solution for overcoming literacy barriers inherent in written patient education materials and provides an efficient means of data collection that can be used to better target patients' educational needs.

  6. Automation of Precise Time Reference Stations (PTRS)

    NASA Astrophysics Data System (ADS)

    Wheeler, P. J.

    1985-04-01

    The U.S. Naval Observatory is presently engaged in a program of automating precise time stations (PTS) and precise time reference stations (PTRS) by using a versatile mini-computer controlled data acquisition system (DAS). The data acquisition system is configured to monitor locally available PTTI signals such as LORAN-C, OMEGA, and/or the Global Positioning System. In addition, the DAS performs local standard intercomparison. Computer telephone communications provide automatic data transfer to the Naval Observatory. Subsequently, after analysis of the data, results and information can be sent back to the precise time reference station to provide automatic control of remote station timing. The DAS configuration is designed around state-of-the-art standard industrial high-reliability modules. The system integration and software are standardized but allow considerable flexibility to satisfy special local requirements such as stability measurements, performance evaluation and printing of messages and certificates. The DAS operates completely independently and may be queried or controlled at any time with a computer or terminal device (control is protected for use by authorized personnel only). Such DAS-equipped PTS are operational in Hawaii, California, Texas and Florida.

  7. Trainable multiscript orientation detection

    NASA Astrophysics Data System (ADS)

    Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.

    2010-01-01

    Detecting the correct orientation of document images is an important step in large scale digitization processes, as most subsequent document analysis and optical character recognition methods assume upright position of the document page. Many methods have been proposed to solve the problem, most of which are based on ascender-to-descender ratio computation. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. Therefore, we present a trainable method using character similarity to compute the correct orientation. A connected component based distance measure is computed to compare the characters of the document image to characters whose orientation is known. The orientation for which the distance is lowest is then taken as the correct orientation. Training is easily achieved by exchanging the reference characters for characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender ratio based orientation detection shows a significant improvement.

  8. Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    This document provides the information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model. The Users Guide defines the basic elements assembled to form the model for servicing satellites in orbit. As the program is a simulation, the method of attack is to disassemble the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.

  9. A Comparative Analysis of Computational Approaches to Relative Protein Quantification Using Peptide Peak Intensities in Label-free LC-MS Proteomics Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Melissa M.; Brown, Joseph N.; Gritsenko, Marina A.

    2013-02-01

    Liquid chromatography coupled with mass spectrometry (LC-MS) is widely used to identify and quantify peptides in complex biological samples. In particular, label-free shotgun proteomics is highly effective for the identification of peptides and subsequently obtaining a global protein profile of a sample. As a result, this approach is widely used for discovery studies. Typically, the objective of these discovery studies is to identify proteins that are affected by some condition of interest (e.g. disease, exposure). However, for complex biological samples, label-free LC-MS proteomics experiments measure peptides and do not directly yield protein quantities. Thus, protein quantification must be inferred from one or more measured peptides. In recent years, many computational approaches to relative protein quantification of label-free LC-MS data have been published. In this review, we examine the most commonly employed quantification approaches to relative protein abundance from peak intensity values, evaluate their individual merits, and discuss challenges in the use of the various computational approaches.
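
    As a concrete (and deliberately simple) example of the kind of roll-up such approaches perform, the sketch below estimates relative protein abundance as the mean log2 peptide peak intensity per protein and reports a between-sample log2 ratio; the data frame and column names are invented for illustration, and this is only one of the reviewed strategies.

    ```python
    import numpy as np
    import pandas as pd

    peptides = pd.DataFrame({
        "protein": ["P1", "P1", "P1", "P2", "P2"],
        "sampleA": [1.2e6, 8.0e5, 9.5e5, 3.1e5, 2.8e5],   # peptide peak intensities
        "sampleB": [2.3e6, 1.6e6, 2.0e6, 3.0e5, 2.6e5],
    })

    log2 = peptides.copy()
    log2[["sampleA", "sampleB"]] = np.log2(log2[["sampleA", "sampleB"]])
    protein = log2.groupby("protein").mean(numeric_only=True)   # mean log2 intensity per protein
    print(protein["sampleB"] - protein["sampleA"])              # relative abundance, B vs. A (log2)
    ```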

  10. A combined three-dimensional in vitro–in silico approach to modelling bubble dynamics in decompression sickness

    PubMed Central

    Stride, E.; Cheema, U.

    2017-01-01

    The growth of bubbles within the body is widely believed to be the cause of decompression sickness (DCS). Dive computer algorithms that aim to prevent DCS by mathematically modelling bubble dynamics and tissue gas kinetics are challenging to validate. This is due to lack of understanding regarding the mechanism(s) leading from bubble formation to DCS. In this work, a biomimetic in vitro tissue phantom and a three-dimensional computational model, comprising a hyperelastic strain-energy density function to model tissue elasticity, were combined to investigate key areas of bubble dynamics. A sensitivity analysis indicated that the diffusion coefficient was the most influential material parameter. Comparison of computational and experimental data revealed the bubble surface's diffusion coefficient to be 30 times smaller than that in the bulk tissue and dependent on the bubble's surface area. The initial size, size distribution and proximity of bubbles within the tissue phantom were also shown to influence their subsequent dynamics highlighting the importance of modelling bubble nucleation and bubble–bubble interactions in order to develop more accurate dive algorithms. PMID:29263127

  11. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the life time for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
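
    To make the kind of quantity being predicted concrete, the sketch below evaluates a two-parameter Weibull fast-fracture failure probability and a power-law slow-crack-growth lifetime with made-up material parameters; it is not CARES/LIFE and ignores the volume/area integration over the FEA stress field that a real analysis performs.

    ```python
    from math import exp

    def weibull_failure_probability(stress_mpa, m, sigma_theta_mpa):
        """P_f = 1 - exp(-(sigma / sigma_theta)**m) for a uniformly stressed element."""
        return 1.0 - exp(-((stress_mpa / sigma_theta_mpa) ** m))

    def scg_time_to_failure(stress_mpa, n, a_const):
        """Power-law slow-crack-growth estimate t_f = A * sigma**(-n); units set by A."""
        return a_const * stress_mpa ** (-n)

    # Illustrative numbers only: Weibull modulus m, characteristic strength, SCG exponent n.
    print(weibull_failure_probability(250.0, m=10.0, sigma_theta_mpa=400.0))
    print(scg_time_to_failure(250.0, n=20.0, a_const=1.0e55))
    ```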

  12. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  13. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  14. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  15. The Relationship between Emotional Intelligence and Attitudes toward Computer-Based Instruction of Postsecondary Hospitality Students

    ERIC Educational Resources Information Center

    Behnke, Carl; Greenan, James P.

    2011-01-01

    This study examined the relationship between postsecondary students' emotional-social intelligence and attitudes toward computer-based instructional materials. Research indicated that emotions and emotional intelligence directly impact motivation, while instructional design has been shown to impact student attitudes and subsequent engagement with…

  16. Factors Influencing Cloud-Computing Technology Adoption in Developing Countries

    ERIC Educational Resources Information Center

    Hailu, Alemayehu

    2012-01-01

    Adoption of new technology has complicating components both from the selection, as well as decision-making criteria and process. Although new technology such as cloud computing provides great benefits especially to the developing countries, it has challenges that may complicate the selection decision and subsequent adoption process. This study…

  17. Counterfactual Thinking and Anticipated Emotions Enhance Performance in Computer Skills Training

    ERIC Educational Resources Information Center

    Chan, Amy Y. C.; Caputi, Peter; Jayasuriya, Rohan; Browne, Jessica L.

    2013-01-01

    The present study examined the relationship between novice learners' counterfactual thinking (i.e. generating "what if" and "if only" thoughts) about their initial training experience with a computer application and subsequent improvement in task performance. The role of anticipated emotions towards goal attainment in task…

  18. Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors

    DTIC Science & Technology

    2015-03-26

    methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... 1.2.1 Past Methods of Experimental Evaluation; 1.2.2 Modeling Efforts; ... Other Considerations; 2.4 Monte Carlo Methods

  19. Designing a Network and Systems Computing Curriculum: The Stakeholders and the Issues

    ERIC Educational Resources Information Center

    Tan, Grace; Venables, Anne

    2010-01-01

    Since 2001, there has been a dramatic decline in Information Technology and Computer Science student enrolments worldwide. As a consequence, many institutions have evaluated their offerings and revamped their programs to include units designed to capture students' interests and increase subsequent enrolment. Likewise, at Victoria University the…

  20. E-Assessment Adaptation at a Military Vocational College: Student Perceptions

    ERIC Educational Resources Information Center

    Cigdem, Harun; Oncu, Semiral

    2015-01-01

    This survey study examines an assessment methodology through e-quizzes administered at a military vocational college and subsequent student perceptions in spring 2013 at the "Computer Networks" course. A total of 30 Computer Technologies and 261 Electronic and Communication Technologies students took three e-quizzes. Data were gathered…

  1. Development of Hybrid Computer Programs for AAFSS/COBRA/COIN Weapons Effectiveness Studies. Volume I. Simulating Aircraft Maneuvers and Weapon Firing Runs.

    DTIC Science & Technology

    for the game. Subsequent duels, flown with single armed escorts, calculated reduction in losses and damage states. For the study, hybrid computer...6) a duel between a ground weapon, armed escort, and formation of lift aircraft. (Author)

  2. SU-E-I-63: Quantitative Evaluation of the Effects of Orthopedic Metal Artifact Reduction (OMAR) Software On CT Images for Radiotherapy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jani, S

    Purpose: CT simulation for patients with metal implants can often be challenging due to artifacts that obscure tumor/target delineation and normal organ definition. Our objective was to evaluate the effectiveness of Orthopedic Metal Artifact Reduction (OMAR), commercially available software, in reducing metal-induced artifacts and its effect on computed dose during treatment planning. Methods: CT images of water surrounding metallic cylindrical rods made of aluminum, copper and iron were studied in terms of Hounsfield Units (HU) spread. Metal-induced artifacts were characterized in terms of HU/Volume Histogram (HVH) using the Pinnacle treatment planning system. Effects of OMAR on enhancing our ability to delineate organs on CT and subsequent dose computation were examined in nine (9) patients with hip implants and two (2) patients with breast tissue expanders. Results: Our study characterized water at 1000 HU with a standard deviation (SD) of about 20 HU. The HVHs allowed us to evaluate how the presence of metal changed the HU spread. For example, introducing a 2.54 cm diameter copper rod in water increased the SD in HU of the surrounding water from 20 to 209, representing an increase in artifacts. Subsequent use of OMAR brought the SD down to 78. Aluminum produced the least artifacts, whereas iron showed the largest amount of artifacts. In general, an increase in kVp and mA during CT scanning showed better effectiveness of OMAR in reducing artifacts. Our dose analysis showed that some isodose contours shifted by several mm with OMAR, but such shifts were infrequent and not significant in the planning process. Computed volumes of various dose levels showed <2% change. Conclusions: In our experience, OMAR software greatly reduced the metal-induced CT artifacts for the majority of patients with implants, thereby improving our ability to delineate tumor and surrounding organs. OMAR had a clinically negligible effect on computed dose within tissues. Partially funded by an unrestricted educational grant from Philips.

  3. A general diagrammatic algorithm for contraction and subsequent simplification of second-quantized expressions.

    PubMed

    Bochevarov, Arteum D; Sherrill, C David

    2004-08-22

    We present a general computer algorithm to contract an arbitrary number of second-quantized expressions and simplify the obtained analytical result. The functions that perform these operations are a part of the program Nostromo which facilitates the handling and analysis of the complicated mathematical formulas which are often encountered in modern quantum-chemical models. In contrast to existing codes of this kind, Nostromo is based solely on the Goldstone-diagrammatic representation of algebraic expressions in Fock space and has capabilities to work with operators as well as scalars. Each Goldstone diagram is internally represented by a line of text which is easy to interpret and transform. The calculation of matrix elements does not exploit Wick's theorem in a direct way, but uses diagrammatic techniques to produce only nonzero terms. The identification of equivalent expressions and their subsequent factorization in the final result is performed easily by analyzing the topological structure of the diagrammatic expressions. (c) 2004 American Institute of Physics

  4. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.

    PubMed

    Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen

    2015-01-01

    Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that are unable to reveal all OPSMs for this NP-complete problem. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSMs are missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were carried out on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method.
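
    A toy illustration of the "all common subsequences" notion used above: enumerate the distinct subsequences of two short row sequences and intersect them. This brute force is exponential and is only meant to make the concept concrete; the paper's prefix-tree storage and Apriori-based mining are not reproduced.

    ```python
    from itertools import combinations

    def distinct_subsequences(seq):
        """Set of all distinct subsequences of seq (including the empty one)."""
        subs = set()
        for r in range(len(seq) + 1):
            for idx in combinations(range(len(seq)), r):
                subs.add(tuple(seq[i] for i in idx))
        return subs

    def all_common_subsequences(a, b):
        return distinct_subsequences(a) & distinct_subsequences(b)

    acs = all_common_subsequences("GADV", "GAV")
    print(len(acs), sorted("".join(s) for s in acs))
    # 8 shared subsequences: '', 'A', 'AV', 'G', 'GA', 'GAV', 'GV', 'V'
    ```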

  5. Granular Rayleigh-Taylor instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinningland, Jan Ludvig; Johnsen, Oistein; Flekkoey, Eirik G.

    2009-06-18

    A granular instability driven by gravity is studied experimentally and numerically. The instability arises as grains fall in a closed Hele-Shaw cell where a layer of dense granular material is positioned above a layer of air. The initially flat front defined by the grains subsequently develops into a pattern of falling granular fingers separated by rising bubbles of air. A transient coarsening of the front is observed right from the start by a finger merging process. The coarsening is later stabilized by new fingers growing from the center of the rising bubbles. The structures are quantified by means of Fourier analysis and quantitative agreement between experiment and computation is shown. This analysis also reveals scale invariance of the flow structures under overall change of spatial scale.

  6. From damage to discovery via virtual unwrapping: Reading the scroll from En-Gedi

    PubMed Central

    Seales, William Brent; Parker, Clifford Seth; Segal, Michael; Tov, Emanuel; Shor, Pnina; Porath, Yosef

    2016-01-01

    Computer imaging techniques are commonly used to preserve and share readable manuscripts, but capturing writing locked away in ancient, deteriorated documents poses an entirely different challenge. This software pipeline—referred to as “virtual unwrapping”—allows textual artifacts to be read completely and noninvasively. The systematic digital analysis of the extremely fragile En-Gedi scroll (the oldest Pentateuchal scroll in Hebrew outside of the Dead Sea Scrolls) reveals the writing hidden on its untouchable, disintegrating sheets. Our approach for recovering substantial ink-based text from a damaged object results in readable columns at such high quality that serious critical textual analysis can occur. Hence, this work creates a new pathway for subsequent textual discoveries buried within the confines of damaged materials. PMID:27679821

  7. Automated analysis of clonal cancer cells by intravital imaging

    PubMed Central

    Coffey, Sarah Earley; Giedt, Randy J; Weissleder, Ralph

    2013-01-01

    Longitudinal analyses of single cell lineages over prolonged periods have been challenging particularly in processes characterized by high cell turn-over such as inflammation, proliferation, or cancer. RGB marking has emerged as an elegant approach for enabling such investigations. However, methods for automated image analysis continue to be lacking. Here, to address this, we created a number of different multicolored poly- and monoclonal cancer cell lines for in vitro and in vivo use. To classify these cells in large scale data sets, we subsequently developed and tested an automated algorithm based on hue selection. Our results showed that this method allows accurate analyses at a fraction of the computational time required by more complex color classification methods. Moreover, the methodology should be broadly applicable to both in vitro and in vivo analyses. PMID:24349895
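
    A minimal sketch of hue-based classification in the spirit of this approach: measured RGB values of marked cells are converted to HSV and assigned to clones by hue interval. The gate boundaries and colour values are illustrative assumptions, not the authors' calibration.

    ```python
    import colorsys

    # Hypothetical clone gates as (low, high) ranges on the 0-1 hue circle.
    CLONE_GATES = {
        "red_clone": (0.95, 0.05),     # wraps around the hue origin
        "green_clone": (0.25, 0.45),
        "blue_clone": (0.55, 0.75),
    }

    def classify_cell(r, g, b):
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        for clone, (lo, hi) in CLONE_GATES.items():
            in_gate = (lo <= h <= hi) if lo < hi else (h >= lo or h <= hi)
            if in_gate:
                return clone
        return "unclassified"

    print(classify_cell(220, 40, 30))   # -> red_clone
    print(classify_cell(40, 200, 60))   # -> green_clone
    ```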

  8. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
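
    A hedged sketch of the two candidate models described above: an ordinary-least-squares fit of suspended-sediment concentration (SSC) on turbidity alone, and a multiple regression adding streamflow. The synthetic data, log10 transforms and RMSE comparison are illustrative only and do not reproduce the report's MSPE criterion or bias-correction steps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    turb = rng.uniform(5, 500, 60)                          # turbidity sensor readings
    flow = rng.uniform(1, 100, 60)                          # streamflow
    ssc = 2.0 * turb**0.9 * flow**0.1 * rng.lognormal(0.0, 0.1, 60)   # synthetic SSC samples

    y = np.log10(ssc)
    X_simple = np.column_stack([np.ones_like(turb), np.log10(turb)])
    X_multi = np.column_stack([np.ones_like(turb), np.log10(turb), np.log10(flow)])

    for name, X in [("turbidity only", X_simple), ("turbidity + streamflow", X_multi)]:
        coef, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
        rmse = float(np.sqrt(residuals[0] / len(y)))        # residual spread in log10 units
        print(f"{name}: coefficients {np.round(coef, 3)}, log10 RMSE {rmse:.4f}")
    ```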

  9. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010), and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable, transferable, published sensitivity results.
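
    For reference, the screening method named above (Morris method / Elementary Effects) can be sketched in a few lines. The toy model, parameter ranges, step size, and trajectory count are illustrative assumptions, and the sketch uses a simplified radial one-at-a-time design; it does not implement the MVA convergence test itself.

```python
# Minimal radial one-at-a-time elementary-effects screening (Morris-style mu*).
# The model function and sampling budget are illustrative placeholders.
import numpy as np

def toy_model(x):                      # toy 3-parameter model
    return x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]

def morris_mu_star(f, n_params, n_traj=50, delta=0.1, seed=1):
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)   # base point in the unit cube
        f0 = f(x)
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta                    # perturb one parameter at a time
            ee[t, i] = (f(x_step) - f0) / delta   # elementary effect of parameter i
    return np.abs(ee).mean(axis=0)                # mu*: mean absolute elementary effect

print(morris_mu_star(toy_model, n_params=3))
```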

  10. New horizons in forensic radiology: the 60-second digital autopsy-full-body examination of a gunshot victim by multislice computed tomography.

    PubMed

    Thali, Michael J; Schweitzer, Wolf; Yen, Kathrin; Vock, Peter; Ozdoba, Christoph; Spielvogel, Elke; Dirnhofer, Richard

    2003-03-01

    The goal of this study was the full-body documentation of a gunshot wound victim with multislice helical computed tomography for subsequent comparison with the findings of the standard forensic autopsy. Complete volume data of the head, neck, and trunk were acquired by use of two acquisitions of less than 1 minute of total scanning time. Subsequent two-dimensional multiplanar reformations and three-dimensional shaded surface display reconstructions helped document the gunshot-created skull fractures and brain injuries, including the wound track, and the intracerebral bone fragments. Computed tomography also demonstrated intracardiac air embolism and pulmonary aspiration of blood resulting from bullet wound-related trauma. The "digital autopsy," even when postprocessing time was added, was more rapid than the classic forensic autopsy and, based on the nondestructive approach, offered certain advantages in comparison with the forensic autopsy.

  11. ANALYSIS AND MODELING OF TWO FLARE LOOPS OBSERVED BY AIA AND EIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y.; Ding, M. D.; Qiu, J.

    2012-10-10

    We analyze and model an M1.0 flare observed by SDO/AIA and Hinode/EIS to investigate how flare loops are heated and evolve subsequently. The flare is composed of two distinctive loop systems observed in extreme ultraviolet (EUV) images. The UV 1600 A emission at the feet of these loops exhibits a rapid rise, followed by enhanced emission in different EUV channels observed by the Atmospheric Imaging Assembly (AIA) and the EUV Imaging Spectrometer (EIS). Such behavior is indicative of impulsive energy deposition and the subsequent response in overlying coronal loops that evolve through different temperatures. Using the method we recently developed, we infer empirical heating functions from the rapid rise of the UV light curves for the two loop systems, respectively, treating them as two big loops with cross-sectional area of 5'' by 5'', and compute the plasma evolution in the loops using the EBTEL model. We compute the synthetic EUV light curves, which, within the limitations of the model, reasonably agree with the observed light curves obtained in multiple AIA channels and EIS lines: they show the same evolution trend and their magnitudes are comparable to within a factor of two. Furthermore, we also compare the computed mean enthalpy flow velocity with the Doppler shift measurements by EIS during the decay phase of the two loops. Our results suggest that the two different loops with different heating functions as inferred from their footpoint UV emission, combined with their different lengths as measured from imaging observations, give rise to different coronal plasma evolution patterns captured both in the model and in observations.

  12. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.

  13. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    PubMed

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. To date, multiple challenges in protein MS data analysis remain: management of large-scale, complex data sets; MS peak identification and indexing; and high-dimensional peak differential analysis with control of the false discovery rate (FDR) across concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution that provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports online uploading and analysis of large-scale MS data through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
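
    The peak-screening step described above, differential testing with false-discovery-rate control, might look roughly like the following sketch. The intensity matrix, group sizes, and the use of a Mann-Whitney test with Benjamini-Hochberg adjustment are assumptions for illustration, not the portal's actual pipeline.

```python
# Sketch of per-peak differential testing with Benjamini-Hochberg FDR control.
# The intensity matrix and group labels are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_peaks, n_a, n_b = 500, 20, 20
intens = rng.lognormal(mean=2.0, sigma=0.5, size=(n_peaks, n_a + n_b))
intens[:25, n_a:] *= 1.8                      # 25 truly differential peaks

pvals = np.array([stats.mannwhitneyu(row[:n_a], row[n_a:]).pvalue for row in intens])

def benjamini_hochberg(p, q=0.05):
    """Step-up BH procedure: returns a boolean mask of rejected hypotheses."""
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    passed = scaled <= q
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    keep = np.zeros(len(p), dtype=bool)
    keep[order[:k]] = True
    return keep

significant = benjamini_hochberg(pvals, q=0.05)
print(f"{significant.sum()} peaks pass FDR 0.05")
```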

  14. Multiaxial Cyclic Thermoplasticity Analysis with Besseling's Subvolume Method

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1983-01-01

    A modification to Besseling's Subvolume Method was formulated to allow it to use temperature-dependent multilinear stress-strain curves in cyclic thermoplasticity analyses. This method automatically reproduces certain aspects of real material behavior important in the analysis of Aircraft Gas Turbine Engine (AGTE) components. These include the Bauschinger effect, cross-hardening, and memory. This constitutive equation was implemented in a finite element computer program called CYANIDE. Subsequently, classical time-dependent plasticity (creep) was added to the program. Since its inception, this program has been assessed against laboratory and component testing and engine experience. The ability of this program to simulate AGTE material response characteristics was verified by this experience, and its utility in providing data for life analyses was demonstrated. In this area of life analysis, the multiaxial thermoplasticity capabilities of the method have proved a match for the actual AGTE life experience.

  15. Comparison of magnetic resonance imaging and computed tomography in suspected lesions in the posterior cranial fossa.

    PubMed Central

    Teasdale, G. M.; Hadley, D. M.; Lawrence, A.; Bone, I.; Burton, H.; Grant, R.; Condon, B.; Macpherson, P.; Rowan, J.

    1989-01-01

    OBJECTIVE--To compare computed tomography and magnetic resonance imaging in investigating patients suspected of having a lesion in the posterior cranial fossa. DESIGN--Randomised allocation of newly referred patients to undergo either computed tomography or magnetic resonance imaging; the alternative investigation was performed subsequently only in response to a request from the referring doctor. SETTING--A regional neuroscience centre serving a population of 2.7 million. PATIENTS--1020 patients recruited between April 1986 and December 1987, all suspected by neurologists, neurosurgeons, or other specialists of having a lesion in the posterior fossa and referred for neuroradiology. The groups allocated to undergo computed tomography or magnetic resonance imaging were well matched in distributions of age, sex, specialty of referring doctor, investigation as an inpatient or an outpatient, suspected site of lesion, and presumed disease process; the referring doctor's confidence in the initial clinical diagnosis was also similar. INTERVENTIONS--After the patients had been imaged by either computed tomography or magnetic resonance (using a resistive magnet of 0.15 T) doctors were given the radiologist's report and a form asking if they considered that imaging with the alternative technique was necessary and, if so, why; it also asked for their current diagnoses and their confidence in them. MAIN OUTCOME MEASURES--Number of requests for the alternative method of investigation. Assessment of characteristics of patients for whom further imaging was requested and lesions that were suspected initially and how the results of the second imaging affected clinicians' and radiologists' opinions. RESULTS--Ninety three of the 501 patients who initially underwent computed tomography were referred subsequently for magnetic resonance imaging whereas only 28 of the 493 patients who initially underwent magnetic resonance imaging were referred subsequently for computed tomography. Over the study the number of patients referred for magnetic resonance imaging after computed tomography increased but requests for computed tomography after magnetic resonance imaging decreased. The reason that clinicians gave most commonly for requesting further imaging by magnetic resonance was that the results of the initial computed tomography failed to exclude their suspected diagnosis (64 patients). This was less common in patients investigated initially by magnetic resonance imaging (eight patients). Management of 28 patients (6%) imaged initially with computed tomography and 12 patients (2%) imaged initially with magnetic resonance was changed on the basis of the results of the alternative imaging. CONCLUSIONS--Magnetic resonance imaging provided doctors with the information required to manage patients suspected of having a lesion in the posterior fossa more commonly than computed tomography, but computed tomography alone was satisfactory in 80% of cases... PMID:2506965

  16. Assessment of protein set coherence using functional annotations

    PubMed Central

    Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto

    2008-01-01

    Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoehler, M; McCallen, D; Noble, C

    The analysis, and subsequent retrofit, of concrete arch bridges during recent years has relied heavily on the use of computational simulation. For seismic analysis in particular, computer simulation, typically utilizing linear approximations of structural behavior, has become standard practice. This report presents the results of a comprehensive study of the significance of model sophistication (i.e. linear vs. nonlinear) and pertinent modeling assumptions on the dynamic response of concrete arch bridges. The study uses the Bixby Creek Bridge, located in California, as a case study. In addition to presenting general recommendations for analysis of this class of structures, this report provides an independent evaluation of the proposed seismic retrofit for the Bixby Creek Bridge. Results from the study clearly illustrate a reduction of displacement drifts and redistribution of member forces brought on by the inclusion of material nonlinearity. The analyses demonstrate that accurate modeling of expansion joints, for the Bixby Creek Bridge in particular, is critical to achieve representative modal and transient behavior. The inclusion of near-field displacement pulses in ground motion records was shown to significantly increase demand on the relatively softer, longer-period Bixby Creek Bridge arch. Stiffer, shorter-period arches, however, are more likely to be susceptible to variable support motions arising from the canyon topography typical for this class of bridges.

  18. Nearfield Summary and Statistical Analysis of the Second AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Nemec, Marian

    2017-01-01

    A summary is provided for the Second AIAA Sonic Boom Workshop held 8-9 January 2017 in conjunction with AIAA SciTech 2017. The workshop used three required models of increasing complexity: an axisymmetric body, a wing body, and a complete configuration with flow-through nacelle. An optional complete configuration with propulsion boundary conditions is also provided. These models are designed with similar nearfield signatures to isolate geometry and shock/expansion interaction effects. Eleven international participant groups submitted nearfield signatures with forces, pitching moment, and iterative convergence norms. Statistics and grid convergence of these nearfield signatures are presented. These submissions are propagated to the ground, and noise levels are computed. This allows the grid convergence and the statistical distribution of a noise level to be computed. While progress since the first workshop is documented, improvements to the analysis methods for a possible subsequent workshop are suggested. The complete configuration with flow-through nacelle showed the most dramatic improvement between the two workshops. The current workshop cases are more relevant to vehicles with lower loudness and have the potential for lower annoyance than the first workshop cases. The models for this workshop, with quieter ground noise levels than those of the first workshop, exposed weaknesses in the analysis methods, particularly in convective discretization.

  19. Microphone array measurement system for analysis of directional and spatial variations of sound fields.

    PubMed

    Gover, Bradford N; Ryan, James G; Stinson, Michael R

    2002-11-01

    A measurement system has been developed that is capable of analyzing the directional and spatial variations in a reverberant sound field. A spherical, 32-element array of microphones is used to generate a narrow beam that is steered in 60 directions. Using an omnidirectional loudspeaker as excitation, the sound pressure arriving from each steering direction is measured as a function of time, in the form of pressure impulse responses. By subsequent analysis of these responses, the variation of arriving energy with direction is studied. The directional diffusion and directivity index of the arriving sound can be computed, as can the energy decay rate in each direction. An analysis of the 32 microphone responses themselves allows computation of the point-to-point variation of reverberation time and of sound pressure level, as well as the spatial cross-correlation coefficient, over the extent of the array. The system has been validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively from the measures and qualitatively from plots of the arriving energy versus direction. It is anticipated that the system will be of value in evaluating the directional distribution of arriving energy and the degree of diffuseness of sound fields in rooms.
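
    A minimal sketch of the beam-steering idea described above, assuming a delay-and-sum beamformer with integer-sample delays; the array radius, sample rate, and signals are synthetic stand-ins rather than the actual 32-element system.

```python
# Sketch of delay-and-sum beam steering for a spherical microphone array,
# using integer-sample delays for simplicity. Geometry and signals are synthetic.
import numpy as np

FS = 48000          # sample rate, Hz (assumed)
C = 343.0           # speed of sound, m/s
R = 0.15            # array radius, m (assumed)

rng = np.random.default_rng(0)
mic_dirs = rng.normal(size=(32, 3))
mic_pos = R * mic_dirs / np.linalg.norm(mic_dirs, axis=1, keepdims=True)  # 32 mics on a sphere

def steer(signals, steering_dir):
    """Delay-and-sum output for one steering direction (unit vector toward the source)."""
    d = np.asarray(steering_dir, float)
    d /= np.linalg.norm(d)
    delays = mic_pos @ d / C                    # relative arrival-time advance per mic
    out = np.zeros(signals.shape[1])
    for m, sig in enumerate(signals):
        shift = int(round(delays[m] * FS))      # compensate by shifting samples (wraps at edges)
        out += np.roll(sig, shift)
    return out / len(signals)

# 32 channels of noise stand in for measured impulse responses
signals = rng.normal(size=(32, FS // 10))
beam = steer(signals, steering_dir=[1.0, 0.0, 0.0])
```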

  20. Accuracy and precision of computer-assisted analysis of bone density via conventional and digital radiography in relation to dual-energy x-ray absorptiometry.

    PubMed

    Vaccaro, Calogero; Busetto, Roberto; Bernardini, Daniele; Anselmi, Carlo; Zotti, Alessandro

    2012-03-01

    The objective was to evaluate the precision and accuracy of assessing bone mineral density (BMD) by use of mean gray value (MGV) on digitalized and digital images of conventional and digital radiographs, respectively, of ex vivo bovine and equine bone specimens in relation to the gold-standard technique of dual-energy x-ray absorptiometry (DEXA). The sample comprised left and right metatarsal bones from 11 beef cattle and right femurs from 2 horses. Bovine specimens were imaged by use of conventional radiography, whereas equine specimens were imaged by use of computed radiography (digital radiography). Each specimen was subsequently scanned by use of the same DEXA equipment. The BMD values resulting from each DEXA scan were paired with the MGVs obtained by use of software on the corresponding digitalized or digital radiographic image. The MGV analysis of digitalized and digital x-ray images was a precise (coefficient of variation, 0.1 and 0.09, respectively) and highly accurate method for assessing BMD, compared with DEXA (correlation coefficient, 0.910 and 0.937 for conventional and digital radiography, respectively). The high correlation between MGV and BMD indicated that MGV analysis may be a reliable alternative to DEXA for assessing radiographic bone density. This may provide a new, inexpensive, and readily available estimate of BMD.

  1. Mobile Diagnostics Based on Motion? A Close Look at Motility Patterns in the Schistosome Life Cycle

    PubMed Central

    Linder, Ewert; Varjo, Sami; Thors, Cecilia

    2016-01-01

    Imaging at high resolution and subsequent image analysis with modified mobile phones have the potential to solve problems related to microscopy-based diagnostics of parasitic infections in many endemic regions. Diagnostics using the computing power of “smartphones” is not restricted by limited expertise or by limitations set by the visual perception of a microscopist. Thus, diagnostics currently almost exclusively dependent on recognition of morphological features of pathogenic organisms could be based on additional properties, such as motility characteristics recognizable by computer vision. Of special interest are infectious larval stages and “micro swimmers” of, e.g., the schistosome life cycle, which infect the intermediate and definitive hosts, respectively. The ciliated miracidium emerges from the excreted egg upon contact with water. This means that for diagnostics, recognition of a swimming miracidium is equivalent to recognition of an egg. The motility pattern of miracidia could be defined by computer vision and used as a diagnostic criterion. To develop motility pattern-based diagnostics of schistosomiasis using simple imaging devices, we analyzed Paramecium as a model for the schistosome miracidium. As a model for invasive nematodes, such as strongyloids and filaria, we examined a different type of motility in the apathogenic nematode Turbatrix, the “vinegar eel.” The results of motion time and frequency analysis suggest that target motility may be expressed as specific spectrograms serving as “diagnostic fingerprints.” PMID:27322330

  2. Incremental Lexical Learning in Speech Production: A Computational Model and Empirical Evaluation

    ERIC Educational Resources Information Center

    Oppenheim, Gary Michael

    2011-01-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…

  3. Adventitious sounds identification and extraction using temporal-spectral dominance-based features.

    PubMed

    Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook

    2011-11-01

    Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on the analysis of the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals compared with conventional linear TF analysis methods, while preserving low computational complexity compared with quadratic TF analysis methods. The phase information discarded in the conventional spectrogram is used for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram is subsequently constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features is also proposed to quantify the shapes of the obtained TF contours, and therefore strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.

  4. Aberrant functional connectivity for diagnosis of major depressive disorder: a discriminant analysis.

    PubMed

    Cao, Longlong; Guo, Shuixia; Xue, Zhimin; Hu, Yong; Liu, Haihong; Mwansisya, Tumbwene E; Pu, Weidan; Yang, Bo; Liu, Chang; Feng, Jianfeng; Chen, Eric Y H; Liu, Zhening

    2014-02-01

    Aberrant brain functional connectivity patterns have been reported in major depressive disorder (MDD). It is unknown whether they can be used in discriminant analysis for the diagnosis of MDD. In the present study we examined the efficiency of discriminant analysis of MDD for individualized computer-assisted diagnosis. Based on resting-state functional magnetic resonance imaging data, a new approach was adopted to investigate functional connectivity changes in 39 MDD patients and 37 well-matched healthy controls. Using the proposed feature selection method, we identified significantly altered functional connections in patients. They were subsequently applied to our analysis as discriminant features using a support vector machine classification method. Furthermore, the relative contribution of each functional connection was estimated. After subset selection of the high-dimensional features, the support vector machine classifier reached an accuracy of approximately 84% with leave-one-out training during the discrimination process. By summarizing the classification contribution of the functional connectivities, we obtained four distinct contributing modules: an inferior orbitofrontal module, a supramarginal gyrus module, an inferior parietal lobule-posterior cingulate gyrus module, and a middle temporal gyrus-inferior temporal gyrus module. The experimental results demonstrated that the proposed method is effective in discriminating MDD patients from healthy controls. Functional connectivities might be useful as new biomarkers to assist clinicians in the computer-aided diagnosis of MDD. © 2013 The Authors. Psychiatry and Clinical Neurosciences © 2013 Japanese Society of Psychiatry and Neurology.
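
    The classification workflow described above can be approximated with a short sketch: univariate selection over connectivity features followed by a linear support vector machine evaluated with leave-one-out cross-validation. The data, feature counts, and selection method are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: feature selection + linear SVM under leave-one-out cross-validation.
# Connectivity matrices and group labels are synthetic stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_connections = 76, 4005          # e.g. upper triangle of a 90x90 matrix
X = rng.normal(size=(n_subjects, n_connections))
y = np.array([0] * 37 + [1] * 39)             # controls vs. patients (synthetic labels)
X[y == 1, :50] += 0.6                         # inject group differences in 50 connections

# Keeping feature selection inside the pipeline avoids leaking test data into the LOO estimate.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),
                    SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```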

  5. Green's function solution to heat transfer of a transparent gas through a tube

    NASA Technical Reports Server (NTRS)

    Frankel, J. I.

    1989-01-01

    A heat transfer analysis of a transparent gas flowing through a circular tube of finite thickness is presented. This study includes the effects of wall conduction, internal radiative exchange, and convective heat transfer. The natural mathematical formulation produces a nonlinear, integrodifferential equation governing the wall temperature and an ordinary differential equation describing the gas temperature. This investigation proposes to convert the original system of equations into an equivalent system of integral equations. The Green's function method permits the conversion of an integrodifferential equation into a pure integral equation. The proposed integral formulation and subsequent computational procedure are shown to be stable and accurate.
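
    As a generic illustration of the conversion step described above (not the paper's specific formulation), a Green's function for the linear part of the operator, with homogeneous boundary conditions, turns the boundary-value problem into a pure integral equation:

```latex
% Generic sketch: G(x,\xi) is the Green's function of the linear operator L
% with homogeneous boundary conditions; f may depend nonlinearly on T.
\begin{align}
  L\,T(x) &= f\bigl(x, T(x)\bigr), & L\,G(x,\xi) &= \delta(x-\xi), \\
  T(x) &= \int_{0}^{1} G(x,\xi)\, f\bigl(\xi, T(\xi)\bigr)\, d\xi .
\end{align}
```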

  6. A Simulation Based Approach for Contingency Planning for Aircraft Turnaround Operation System Activities in Airline Hubs

    NASA Technical Reports Server (NTRS)

    Adeleye, Sanya; Chung, Christopher

    2006-01-01

    Commercial aircraft undergo a significant number of maintenance and logistical activities during the turnaround operation at the departure gate. By analyzing the sequencing of these activities, more effective turnaround contingency plans may be developed for logistical and maintenance disruptions. Turnaround contingency plans are particularly important as any kind of delay in a hub based system may cascade into further delays with subsequent connections. The contingency sequencing of the maintenance and logistical turnaround activities were analyzed using a combined network and computer simulation modeling approach. Experimental analysis of both current and alternative policies provides a framework to aid in more effective tactical decision making.

  7. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.

  8. Three-Dimensional Viscous Alternating Direction Implicit Algorithm and Strategies for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4 with coarse- and fine-grid based computations performed with Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes generated for an identical initial problem formulation but with different optimization path options (coarse or fine grid, Euler or TLNS) have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions have a significant bearing on the optimization results. Also, the results demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize the scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, initially be employed to improve the optimization problem definition, the design space, and the initial shape. Optimized shapes should subsequently be analyzed using a high-fidelity (viscous with fine-grid resolution) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.

  9. Computational Model of the Fathead Minnow Hypothalamic-Pituitary-Gonadal Axis: Incorporating Protein Synthesis in Improving Predictability of Responses to Endocrine Active Chemicals

    EPA Science Inventory

    There is international concern about chemicals that alter endocrine system function in humans and/or wildlife and subsequently cause adverse effects. We previously developed a mechanistic computational model of the hypothalamic-pituitary-gonadal (HPG) axis in female fathead minno...

  10. Computing Robust, Bootstrap-Adjusted Fit Indices for Use with Nonnormal Data

    ERIC Educational Resources Information Center

    Walker, David A.; Smith, Thomas J.

    2017-01-01

    Nonnormality of data presents unique challenges for researchers who wish to carry out structural equation modeling. The subsequent SPSS syntax program computes bootstrap-adjusted fit indices (comparative fit index, Tucker-Lewis index, incremental fit index, and root mean square error of approximation) that adjust for nonnormality, along with the…

  11. Critical Success Factors for E-Learning and Institutional Change--Some Organisational Perspectives on Campus-Wide E-Learning

    ERIC Educational Resources Information Center

    White, Su

    2007-01-01

    Computer technology has been harnessed for education in UK universities ever since the first computers for research were installed at 10 selected sites in 1957. Subsequently, real costs have fallen dramatically. Processing power has increased; network and communications infrastructure has proliferated, and information has become unimaginably…

  12. Critical Analysis of Cluster Models and Exchange-Correlation Functionals for Calculating Magnetic Shielding in Molecular Solids.

    PubMed

    Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil

    2015-11-10

    Calculations of the principal components of magnetic-shielding tensors in crystalline solids require the inclusion of the effects of lattice structure on the local electronic environment to obtain significant agreement with experimental NMR measurements. We assess periodic (GIPAW) and GIAO/symmetry-adapted cluster (SAC) models for computing magnetic-shielding tensors by calculations on a test set containing 72 insulating molecular solids, with a total of 393 principal components of chemical-shift tensors from 13C, 15N, 19F, and 31P sites. When clusters are carefully designed to represent the local solid-state environment and when periodic calculations include sufficient variability, both methods predict magnetic-shielding tensors that agree well with experimental chemical-shift values, demonstrating the correspondence of the two computational techniques. At the basis-set limit, we find that the small differences in the computed values have no statistical significance for three of the four nuclides considered. Subsequently, we explore the effects of additional DFT methods available only with the GIAO/cluster approach, particularly the use of hybrid-GGA functionals, meta-GGA functionals, and hybrid meta-GGA functionals that demonstrate improved agreement in calculations on symmetry-adapted clusters. We demonstrate that meta-GGA functionals improve computed NMR parameters over those obtained by GGA functionals in all cases, and that hybrid functionals improve computed results over the respective pure DFT functional for all nuclides except 15N.

  13. Investigation of Patient-Specific Cerebral Aneurysm using Volumetric PIV, CFD, and In Vitro PC-MRI

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Dickerhoff, Ben; Saloner, David; Rayz, Vitaliy; Vlachos, Pavlos

    2017-11-01

    4D PC-MRI is a modality capable of providing time-resolved velocity fields in cerebral aneurysms in vivo. The MRI-measured velocities and subsequent hemodynamic parameters such as wall shear stress, and oscillatory shear index, can help neurosurgeons decide a course of treatment for a patient, e.g. whether to treat or monitor the aneurysm. However, low spatiotemporal resolution, limited velocity dynamic range, and inherent noise of PC-MRI velocity fields can have a notable effect on subsequent calculations, and should be investigated. In this work, we compare velocity fields obtained with 4D PC-MRI, computational fluid dynamics (CFD) and volumetric particle image velocimetry (PIV), using a patient-specific model of a basilar tip aneurysm. The same in vitro model is used for all three modalities and flow input parameters are controlled. In vivo, PC-MRI data was also acquired for this patient and used for comparison. Specifically, we investigate differences in the resulting velocity fields and biases in subsequent calculations. Further, we explore the effect these errors may have on assessment of the aneurysm progression and seek to develop corrective algorithms and other methodologies that can be used to improve the accuracy of hemodynamic analysis in clinical setting.

  14. Concept mapping as an approach for expert-guided model building: The example of health literacy.

    PubMed

    Soellner, Renate; Lenartz, Norbert; Rudinger, Georg

    2017-02-01

    Concept mapping served as the starting point for capturing the comprehensive structure of the construct of 'health literacy.' Ideas about health literacy were generated by 99 experts and resulted in 105 statements that were subsequently organized by 27 experts in an unstructured card sorting. Multidimensional scaling was applied to the sorting data and two- and three-dimensional solutions were computed. The three-dimensional solution was used in a subsequent cluster analysis and resulted in a concept map of nine "clusters": (1) self-regulation, (2) self-perception, (3) proactive approach to health, (4) basic literacy and numeracy skills, (5) information appraisal, (6) information search, (7) health care system knowledge and acting, (8) communication and cooperation, and (9) beneficial personality traits. Subsequently, this concept map served as a starting point for developing a "qualitative" structural model of health literacy and a questionnaire for the measurement of health literacy. On the basis of questionnaire data, a "quantitative" structural model was created by first applying exploratory factor analyses (EFA) and then cross-validating the model with confirmatory factor analyses (CFA). Concept mapping proved to be a highly valuable tool for the process of model building up to translational research in the "real world". Copyright © 2016 Elsevier Ltd. All rights reserved.
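
    The scaling-and-clustering sequence described above can be sketched as follows, assuming metric multidimensional scaling on a card-sort dissimilarity matrix followed by hierarchical clustering; the synthetic sorting data and the Ward linkage choice are illustrative assumptions, not the authors' exact analysis.

```python
# Sketch: MDS on a dissimilarity matrix derived from card-sorting co-occurrence,
# then hierarchical clustering of the item coordinates. Data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_statements, n_sorters = 105, 27
# Random pile assignments stand in for each expert's card sort.
sorts = rng.integers(0, 9, size=(n_sorters, n_statements))
co = np.zeros((n_statements, n_statements))
for s in sorts:                                  # co-occurrence: sorted into the same pile
    co += (s[:, None] == s[None, :]).astype(float)
dissimilarity = 1.0 - co / n_sorters

coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
clusters = fcluster(linkage(coords, method="ward"), t=9, criterion="maxclust")
print(np.bincount(clusters)[1:])                 # statements per cluster
```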

  15. Fluid-Structure Interaction Modeling of the Reefed Stages of the Orion Spacecraft Main Parachutes

    NASA Astrophysics Data System (ADS)

    Boswell, Cody W.

    Spacecraft parachutes are typically used in multiple stages, starting with a "reefed" stage where a cable along the parachute skirt constrains the diameter to be less than the diameter in the subsequent stage. After a certain period of time during the descent, the cable is cut and the parachute "disreefs" (i.e. expands) to the next stage. Computing the parachute shape at the reefed stage and fluid-structure interaction (FSI) modeling during the disreefing involve computational challenges beyond those we have in FSI modeling of fully-open spacecraft parachutes. These additional challenges are created by the increased geometric complexities and by the rapid changes in the parachute geometry. The computational challenges are further increased because of the added geometric porosity of the latest design, where the "windows" created by the removal of panels and the wider gaps created by the removal of sails compound the geometric and flow complexity. Orion spacecraft main parachutes will have three stages, with computation of the Stage 1 shape and FSI modeling of disreefing from Stage 1 to Stage 2 being the most challenging. We present the special modeling techniques we devised to address the computational challenges and the results from the computations carried out. We also present the methods we devised to calculate for a parachute gore the radius of curvature in the circumferential direction. The curvature values are intended for quick and simple engineering analysis in estimating the structural stresses.

  16. Early classification of pathological heartbeats on wireless body sensor nodes.

    PubMed

    Braojos, Rubén; Beretta, Ivan; Ansaloni, Giovanni; Atienza, David

    2014-11-27

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify in the early stages which parts of an ECG are critical for the diagnosis and, only in these cases, activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with respect to other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections.

  17. Early Classification of Pathological Heartbeats on Wireless Body Sensor Nodes

    PubMed Central

    Braojos, Rubén; Beretta, Ivan; Ansaloni, Giovanni; Atienza, David

    2014-01-01

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify in the early stages which parts of an ECG are critical for the diagnosis and, only in these cases, activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with respect to other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections. PMID:25436654
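
    The dimensionality-reduction idea discussed in the two preceding records can be sketched as follows, with a random projection of heartbeat feature vectors followed by a k-nearest-neighbor classifier standing in for the neuro-fuzzy stage; the data are synthetic, not MIT-BIH recordings.

```python
# Sketch: project heartbeat vectors onto a low-dimensional random basis before
# classification. A k-NN classifier stands in for the neuro-fuzzy stage.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
n_beats, n_samples_per_beat = 2000, 180          # e.g. 0.5 s windows at 360 Hz (assumed)
X = rng.normal(size=(n_beats, n_samples_per_beat))
y = rng.integers(0, 2, size=n_beats)             # 0 = normal, 1 = abnormal (synthetic)
X[y == 1, 60:80] += 1.0                          # crude morphological difference

clf = make_pipeline(GaussianRandomProjection(n_components=16, random_state=0),
                    KNeighborsClassifier(n_neighbors=5))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print(f"held-out accuracy: {clf.fit(X_tr, y_tr).score(X_te, y_te):.2f}")
```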

  18. Automatic scanning and measuring using POLLY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, T.

    1993-07-01

    The HPD and PEPR automatic measuring systems, which have been described by B. Powell and I. Pless at this conference, were developed in the 1960s to be used for what would now be called "batch processing." That is, an entire reel of bubble chamber film containing interesting events whose tracks had been rough-digitized would be processed in an extended run by a dedicated computer/precision digitizer hardware system, with no human intervention. Then, at a later time, events for which the precision measurement did not appear to be successful would be handled with some type of "fixup" station or process. By contrast, the POLLY system included from the start not only a computer and a precision CRT measuring device, but also a human operator who could have convenient two-way interactions with the computer and could also view the picture directly. Inclusion of a human as a key part of the system had some important beneficial effects, as has been described in the original papers. In this note the author summarizes those effects, and also points out connections between the POLLY system philosophy and subsequent developments in both high energy physics data analysis and computing systems.

  19. An in silico method to identify computer-based protocols worthy of clinical study: An insulin infusion protocol use case

    PubMed Central

    Wong, Anthony F; Pielmeier, Ulrike; Haug, Peter J; Andreassen, Steen

    2016-01-01

    Objective Develop an efficient non-clinical method for identifying promising computer-based protocols for clinical study. An in silico comparison can provide information that informs the decision to proceed to a clinical trial. The authors compared two existing computer-based insulin infusion protocols: eProtocol-insulin from Utah, USA, and Glucosafe from Denmark. Materials and Methods The authors used eProtocol-insulin to manage intensive care unit (ICU) hyperglycemia with intravenous (IV) insulin from 2004 to 2010. Recommendations accepted by the bedside clinicians directly link the subsequent blood glucose values to eProtocol-insulin recommendations and provide a unique clinical database. The authors retrospectively compared in silico 18 984 eProtocol-insulin continuous IV insulin infusion rate recommendations from 408 ICU patients with those of Glucosafe, the candidate computer-based protocol. The subsequent blood glucose measurement value (low, on target, high) was used to identify whether the insulin recommendation was too high, on target, or too low. Results Glucosafe consistently provided more favorable continuous IV insulin infusion rate recommendations than eProtocol-insulin for on target (64% of comparisons), low (80% of comparisons), or high (70% of comparisons) blood glucose. Aggregated eProtocol-insulin and Glucosafe continuous IV insulin infusion rates were clinically similar though statistically significantly different (Wilcoxon signed rank test P = .01). In contrast, when stratified by low, on target, or high subsequent blood glucose measurement, insulin infusion rates from eProtocol-insulin and Glucosafe were statistically significantly different (Wilcoxon signed rank test, P < .001), and clinically different. Discussion This in silico comparison appears to be an efficient non-clinical method for identifying promising computer-based protocols. Conclusion A preclinical in silico comparison framework allows rapid and inexpensive identification of computer-based protocol care strategies that justify expensive and burdensome clinical trials. PMID:26228765
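
    The paired statistical comparison described above can be sketched with a Wilcoxon signed-rank test applied to matched infusion-rate recommendations, both in aggregate and stratified by the subsequent blood-glucose category; all values below are synthetic placeholders, not the study data.

```python
# Sketch: Wilcoxon signed-rank tests on matched insulin-rate recommendations,
# aggregated and stratified by subsequent blood-glucose category. Synthetic data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 18984
rate_a = rng.gamma(shape=2.0, scale=1.5, size=n)            # units/h, protocol A
rate_b = rate_a + rng.normal(0.05, 0.6, size=n)             # units/h, protocol B
bg_cat = rng.choice(["low", "on_target", "high"], size=n, p=[0.1, 0.55, 0.35])

print("aggregate p-value:", wilcoxon(rate_a, rate_b).pvalue)
for cat in ("low", "on_target", "high"):
    m = bg_cat == cat
    print(cat, "p-value:", wilcoxon(rate_a[m], rate_b[m]).pvalue)
```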

  20. Challenge Paper: Validation of Forensic Techniques for Criminal Prosecution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erbacher, Robert F.; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.

    2007-04-10

    Abstract: As in many domains, there is increasing agreement in the user and research community that digital forensics analysts would benefit from the extension, development and application of advanced techniques in performing large scale and heterogeneous data analysis. Modern digital forensics analysis of cyber-crimes and cyber-enabled crimes often requires scrutiny of massive amounts of data. For example, a case involving network compromise across multiple enterprises might require forensic analysis of numerous sets of network logs and computer hard drives, potentially involving hundreds of gigabytes of heterogeneous data, or even terabytes or petabytes of data. Also, the goal for forensic analysis is not only to determine whether the illicit activity being considered is taking place, but also to identify the source of the activity and the full extent of the compromise or impact on the local network. Even after this analysis, there remains the challenge of using the results in subsequent criminal and civil processes.

  1. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  2. Computational Fluid Dynamics Analysis of the Effect of Plaques in the Left Coronary Artery

    PubMed Central

    Chaichana, Thanapong; Sun, Zhonghua; Jewkes, James

    2012-01-01

    This study investigated the hemodynamic effect of simulated plaques in left coronary artery models generated from a sample patient's data. Plaques were simulated and placed at the left main stem and the left anterior descending (LAD) artery to produce at least 60% coronary stenosis. Computational fluid dynamics analysis was performed to simulate realistic physiological conditions that reflect the in vivo cardiac hemodynamics, and wall shear stress (WSS) was compared between Newtonian and non-Newtonian fluid models. The pressure gradient (PSG) and flow velocities in the left coronary artery were measured and compared in the left coronary models with and without the presence of plaques during the cardiac cycle. Our results showed that the highest PSG was observed in stenotic regions caused by the plaques. Low flow velocity areas were found at postplaque locations in the left circumflex, LAD, and bifurcation. WSS at the stenotic locations was similar between the non-Newtonian and Newtonian models, although some more detail was observed with the non-Newtonian model. There is a direct correlation between coronary plaques and subsequent hemodynamic changes, based on the simulation of plaques in the realistic coronary models. PMID:22400051
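
    For context on the Newtonian versus non-Newtonian comparison above, the following sketch evaluates a Carreau blood-viscosity model at several shear rates; the parameter values are commonly cited literature values for blood, not values reported in this study.

```python
# Sketch of a Carreau shear-thinning viscosity model of the kind commonly
# contrasted with a constant Newtonian viscosity in coronary CFD studies.
import numpy as np

MU_INF = 0.00345    # Pa*s, infinite-shear viscosity (commonly cited for blood)
MU_0 = 0.056        # Pa*s, zero-shear viscosity
LAM = 3.313         # s, relaxation time
N = 0.3568          # power-law index

def carreau_viscosity(shear_rate):
    """Effective viscosity (Pa*s) at a given shear rate (1/s)."""
    return MU_INF + (MU_0 - MU_INF) * (1.0 + (LAM * shear_rate) ** 2) ** ((N - 1.0) / 2.0)

# Newtonian reference would be a constant ~0.00345 Pa*s at all shear rates.
for gamma in (1.0, 10.0, 100.0, 1000.0):
    print(f"shear rate {gamma:7.1f} 1/s -> viscosity {carreau_viscosity(gamma):.5f} Pa*s")
```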

  3. A proposal for a computer-based framework of support for public health in the management of biological incidents: the Czech Republic experience.

    PubMed

    Bures, Vladimír; Otcenásková, Tereza; Cech, Pavel; Antos, Karel

    2012-11-01

    Biological incidents jeopardising public health require decision-making with one dominant feature: complexity. Therefore, public health decision-makers need appropriate support. Based on the analogy with business intelligence (BI) principles, the contextual analysis of the environment and available data resources, and conceptual modelling within systems and knowledge engineering, this paper proposes a general framework for computer-based decision support in the case of a biological incident. At the outset, the analysis of potential inputs to the framework is conducted and several resources such as demographic information, strategic documents, environmental characteristics, agent descriptors and surveillance systems are considered. Consequently, three prototypes were developed, tested and evaluated by a group of experts. Their selection was based on the overall framework scheme. Subsequently, an ontology prototype linked with an inference engine, a multi-agent-based model focusing on the simulation of an environment, and an expert-system prototype were created. All prototypes proved to be utilisable support tools for decision-making in the field of public health. Nevertheless, the research revealed further issues and challenges that might be investigated by both public health focused researchers and practitioners.

  4. Computational analysis of the effectiveness of blood flushing with saline injection from an intravascular diagnostic catheter

    PubMed Central

    Ghata, Narugopal; Aldredge, Ralph C.; Bec, Julien; Marcu, Laura

    2015-01-01

    SUMMARY Optical techniques including fluorescence lifetime spectroscopy have demonstrated potential as a tool for study and diagnosis of arterial vessel pathologies. However, their application in the intravascular diagnostic procedures has been hampered by the presence of blood hemoglobin that affects the light delivery to and the collection from the vessel wall. We report a computational fluid dynamics model that allows for the optimization of blood flushing parameters in a manner that minimizes the amount of saline needed to clear the optical field of view and reduces any adverse effects caused by the external saline jet. A 3D turbulence (k−ω) model was employed for Eulerian–Eulerian two-phase flow to simulate the flow inside and around a side-viewing fiber-optic catheter. Current analysis demonstrates the effects of various parameters including infusion and blood flow rates, vessel diameters, and pulsatile nature of blood flow on the flow structure around the catheter tip. The results from this study can be utilized in determining the optimal flushing rate for given vessel diameter, blood flow rate, and maximum wall shear stress that the vessel wall can sustain and subsequently in optimizing the design parameters of optical-based intravascular catheters. PMID:24953876

  5. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis; preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaboration analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database etc.) The dynamical model is described by a set of ordinary differential equations (ODE's) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The Metadata of these records is also stored in the GRID federated database. This Metadata contains both relevant information about the earthquake (as it is usual in a seismic repository) and also the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODE's over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding Metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically-based algorithm from the available Metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the on-going initiatives to build a European earthquake record database. In this context, Grid enables collaboration analysis over shared seismic data and results among different institutions.
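
    The record describes the structural model only as a set of ODEs encapsulated in Java classes, so the sketch below is a minimal Python stand-in (not the project's code): a damped single-degree-of-freedom oscillator is integrated over a ground-acceleration record and its peak displacement is returned, which is the quantity each GRID job stores. The oscillator parameters and the synthetic accelerogram are assumptions for illustration.

      import numpy as np
      from scipy.integrate import solve_ivp

      def max_displacement(accel, dt, omega=2.0 * np.pi, zeta=0.05):
          """Peak displacement of x'' + 2*zeta*omega*x' + omega**2*x = -a_g(t)."""
          t = np.arange(len(accel)) * dt
          a_g = lambda tau: np.interp(tau, t, accel)      # accelerogram as a function of time
          def rhs(tau, y):
              x, v = y
              return [v, -a_g(tau) - 2.0 * zeta * omega * v - omega ** 2 * x]
          sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=dt)
          return np.max(np.abs(sol.y[0]))

      # Synthetic record standing in for an accelerogram retrieved from the Storage Element.
      dt = 0.01
      t = np.arange(0.0, 20.0, dt)
      accel = 0.3 * 9.81 * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.1 * t)
      print(f"maximum structure displacement: {max_displacement(accel, dt):.4f} m")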

  6. Exploring Issues about Computational Thinking in Higher Education

    ERIC Educational Resources Information Center

    Czerkawski, Betul C.; Lyman, Eugene W., III

    2015-01-01

    The term computational thinking (CT) has been in academic discourse for decades, but gained new currency in 2006, when Jeanette Wing used it to describe a set of thinking skills that students in all fields may require in order to succeed. Wing's initial article and subsequent writings on CT have been broadly influential; experts in…

  7. Theta synchronization networks emerge during human object-place memory encoding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2007-03-26

    Recent rodent hippocampus studies have suggested that theta rhythm-dependent neural dynamics ('theta phase precession') are essential for on-line memory formation. A computational study indicated that phase precession enables human object-place association memory with voluntary eye movements, although it is still an open question whether the human brain uses these dynamics. Here we elucidated subsequent memory-correlated activities in human scalp electroencephalography during an object-place association memory task designed according to the former computational study. Our results successfully demonstrated that subsequent memory recall is characterized by an increase in theta power and coherence and, further, that multiple theta synchronization networks emerge. These findings suggest that humans share theta dynamics with rodents during episodic memory formation.

  8. Visual traffic jam analysis based on trajectory data.

    PubMed

    Wang, Zuchao; Lu, Min; Yuan, Xiaoru; Zhang, Junping; van de Wetering, Huub

    2013-12-01

    In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning, the trajectories are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated into so-called traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on the road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.

  9. rSalvador: An R Package for the Fluctuation Experiment

    PubMed Central

    Zheng, Qi

    2017-01-01

    The past few years have seen a surge of novel applications of the Luria-Delbrück fluctuation assay protocol in bacterial research. Appropriate analysis of fluctuation assay data often requires computational methods that are unavailable in the popular web tool FALCOR. This paper introduces an R package named rSalvador to bring improvements to the field. The paper focuses on rSalvador’s capabilities to alleviate three kinds of problems found in recent investigations: (i) resorting to partial plating without properly accounting for the effects of partial plating; (ii) conducting attendant fitness assays without incorporating mutants’ relative fitness in subsequent data analysis; and (iii) comparing mutation rates using methods that are in general inapplicable to fluctuation assay data. In addition, the paper touches on rSalvador’s capabilities to estimate sample size and the difficulties related to parameter nonidentifiability. PMID:29084818

  10. Ion-absorption band analysis for the discrimination of iron-rich zones. [Nevada

    NASA Technical Reports Server (NTRS)

    Rowan, L. C. (Principal Investigator); Wetlaufer, P. H.

    1974-01-01

    The author has identified the following significant results. A technique which combines digital computer processing and color composition was devised for detecting hydrothermally altered areas and for discriminating among many rock types in an area in south-central Nevada. Subtle spectral reflectance differences among the rock types are enhanced by ratioing and contrast-stretching MSS radiance values to form ratio images which subsequently are displayed in color-ratio composites. Landform analysis of Nevada shows that linear features compiled without respect to length result in approximately 25 percent coincidence with mapped faults. About 80 percent of the major lineaments coincide with mapped faults, and substantial extension of locally mapped faults is commonly indicated. Seven major lineament systems appear to be old zones of crustal weakness which have provided preferred conduits for rising magma through periodic reactivation.

  11. Methods for assessing the stability of slopes during earthquakes-A retrospective

    USGS Publications Warehouse

    Jibson, R.W.

    2011-01-01

    During the twentieth century, several methods to assess the stability of slopes during earthquakes were developed. Pseudostatic analysis was the earliest method; it involved simply adding a permanent body force representing the earthquake shaking to a static limit-equilibrium analysis. Stress-deformation analysis, a later development, involved much more complex modeling of slopes using a mesh in which the internal stresses and strains within elements are computed based on the applied external loads, including gravity and seismic loads. Stress-deformation analysis provided the most realistic model of slope behavior, but it is very complex and requires a high density of high-quality soil-property data as well as an accurate model of soil behavior. In 1965, Newmark developed a method that effectively bridges the gap between these two types of analysis. His sliding-block model is easy to apply and provides a useful index of co-seismic slope performance. Subsequent modifications to sliding-block analysis have made it applicable to a wider range of landslide types. Sliding-block analysis provides perhaps the greatest utility of all the types of analysis. It is far easier to apply than stress-deformation analysis, and it yields much more useful information than does pseudostatic analysis. © 2010.
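
    As a concrete illustration of the sliding-block idea discussed above, the hedged sketch below implements the classical rigid-block double-integration scheme: whenever the ground acceleration exceeds the critical (yield) acceleration, the relative velocity of the block is integrated, and sliding displacement accumulates until that velocity returns to zero. The synthetic accelerogram and the 0.1 g yield acceleration are illustrative assumptions, not values from the paper.

      import numpy as np

      def newmark_displacement(accel, dt, a_crit):
          """Cumulative Newmark sliding displacement (m); accel and a_crit in m/s^2."""
          rel_vel = 0.0
          disp = 0.0
          for a in accel:
              if rel_vel > 0.0 or a > a_crit:
                  rel_vel += (a - a_crit) * dt   # block accelerates relative to the slope
                  rel_vel = max(rel_vel, 0.0)    # sliding stops once relative velocity reaches zero
                  disp += rel_vel * dt
          return disp

      # Synthetic strong-motion record and a 0.1 g critical acceleration.
      dt = 0.01
      t = np.arange(0.0, 15.0, dt)
      accel = 0.35 * 9.81 * np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
      print(f"Newmark displacement: {newmark_displacement(accel, dt, 0.1 * 9.81):.3f} m")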

  12. AGSuite: Software to conduct feature analysis of artificial grammar learning performance.

    PubMed

    Cook, Matthew T; Chubala, Chrissy M; Jamieson, Randall K

    2017-10-01

    To simplify the problem of studying how people learn natural language, researchers use the artificial grammar learning (AGL) task. In this task, participants study letter strings constructed according to the rules of an artificial grammar and subsequently attempt to discriminate grammatical from ungrammatical test strings. Although the data from these experiments are usually analyzed by comparing the mean discrimination performance between experimental conditions, this practice discards information about the individual items and participants that could otherwise help uncover the particular features of strings associated with grammaticality judgments. However, feature analysis is tedious to compute, often complicated, and ill-defined in the literature. Moreover, the data violate the assumption of independence underlying standard linear regression models, leading to Type I error inflation. To solve these problems, we present AGSuite, a free Shiny application for researchers studying AGL. The suite's intuitive Web-based user interface allows researchers to generate strings from a database of published grammars, compute feature measures (e.g., Levenshtein distance) for each letter string, and conduct a feature analysis on the strings using linear mixed effects (LME) analyses. The LME analysis solves the inflation of Type I errors that afflicts more common methods of repeated measures regression analysis. Finally, the software can generate a number of graphical representations of the data to support an accurate interpretation of results. We hope the ease and availability of these tools will encourage researchers to take full advantage of item-level variance in their datasets in the study of AGL. We moreover discuss the broader applicability of the tools for researchers looking to conduct feature analysis in any field.
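
    AGSuite itself is an R/Shiny application, so the snippet below is only a language-neutral illustration of one of the string feature measures the record mentions: the Levenshtein (edit) distance between a training string and a test string, computed by the standard dynamic-programming recurrence.

      def levenshtein(a: str, b: str) -> int:
          """Minimum number of insertions, deletions and substitutions turning a into b."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, start=1):
              curr = [i]
              for j, cb in enumerate(b, start=1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ca != cb)))   # substitution (0 if characters match)
              prev = curr
          return prev[-1]

      # Distance between a grammatical training string and a hypothetical test string.
      print(levenshtein("MXRVXT", "MXRVVM"))   # -> 2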

  13. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  14. Effects of a history of differential reinforcement on preference for choice.

    PubMed

    Karsina, Allen; Thompson, Rachel H; Rodriguez, Nicole M

    2011-03-01

    The effects of a history of differential reinforcement for selecting a free-choice versus a restricted-choice stimulus arrangement on the subsequent responding of 7 undergraduates in a computer-based game of chance were examined using a concurrent-chains arrangement and a multiple-baseline-across-participants design. In the free-choice arrangement, participants selected three numbers, in any order, from an array of eight numbers presented on the computer screen. In the restricted-choice arrangement, participants selected the order of three numbers preselected from the array of eight by a computer program. In initial sessions, all participants demonstrated no consistent preference or preference for restricted choice. Differential reinforcement of free-choice selections resulted in increased preference for free choice immediately and in subsequent sessions in the absence of programmed differential outcomes. For 5 participants, changes in preference for choice were both robust and lasting, suggesting that a history of differential reinforcement for choice may affect preference for choice.

  15. Effects of a History of Differential Reinforcement on Preference for Choice

    PubMed Central

    Karsina, Allen; Thompson, Rachel H; Rodriguez, Nicole M

    2011-01-01

    The effects of a history of differential reinforcement for selecting a free-choice versus a restricted-choice stimulus arrangement on the subsequent responding of 7 undergraduates in a computer-based game of chance were examined using a concurrent-chains arrangement and a multiple-baseline-across-participants design. In the free-choice arrangement, participants selected three numbers, in any order, from an array of eight numbers presented on the computer screen. In the restricted-choice arrangement, participants selected the order of three numbers preselected from the array of eight by a computer program. In initial sessions, all participants demonstrated no consistent preference or preference for restricted choice. Differential reinforcement of free-choice selections resulted in increased preference for free choice immediately and in subsequent sessions in the absence of programmed differential outcomes. For 5 participants, changes in preference for choice were both robust and lasting, suggesting that a history of differential reinforcement for choice may affect preference for choice. PMID:21541125

  16. Bilateral Malar Reconstruction Using Patient-Specific Polyether Ether Ketone Implants in Treacher-Collins Syndrome Patients With Absent Zygomas.

    PubMed

    Sainsbury, David C G; George, Alan; Forrest, Christopher R; Phillips, John H

    2017-03-01

    The authors performed bilateral malar reconstruction using polyether ether ketone implants in 3 patients with Treacher-Collins syndrome with absent, as opposed to hypoplastic, zygomata. These patient-specific implants were fabricated using computer-aided design software reformatted from three-dimensional bony preoperative computed tomography images. The first time the authors performed this procedure, the implant compressed the globe, resulting in temporary anisocoria that was quickly recognized intraoperatively. The implant was immediately removed and the patient made a full recovery with no ocular disturbance. The computer-aided design and manufacturing process was adjusted to include periorbital soft-tissue boundaries to aid in contouring the new implants. The same patient, and 2 further patients, subsequently underwent malar reconstruction using this soft tissue periorbital boundary fabrication process with an additional 2 mm relief removed from the implant's orbital surface. These subsequent procedures were performed without complication and with pleasing aesthetic results. The authors describe their experience and the salutary lessons learnt.

  17. Computational process to study the wave propagation in a non-linear medium by quasi-linearization

    NASA Astrophysics Data System (ADS)

    Sharath Babu, K.; Venkata Brammam, J.; Baby Rani, CH

    2018-03-01

    When two objects having distinct velocities come into contact, an impact can occur. In an impact study, i.e., the study of the displacement of the objects after the impact, the impact force is a function of time t and behaves like a compressive force. The impact duration is very short, so impulses are generated and, subsequently, high stresses arise. In this work we examine the wave propagation inside the object after a collision and measure the object's non-linear behaviour in the one-dimensional case. Wave transmission is studied by means of the material's acoustic parameter value. The objective of this paper is to present a computational study of propagating pulsed and harmonic waves in nonlinear media using quasi-linearization followed by a central difference scheme. The study focuses on longitudinal, one-dimensional wave propagation. In the finite difference scheme, the non-linear system is reduced to a linear system by applying the quasi-linearization method. The computed results show good agreement with the selected non-linear wave propagation cases.
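
    The record does not give the constitutive law, so the sketch below only illustrates the numerical strategy it describes: a 1-D nonlinear wave equation u_tt = c(u)^2 u_xx is quasi-linearized by freezing the amplitude-dependent wave speed at the current time level, and each resulting linear step is advanced with the standard central-difference scheme. The c(u) used here is a hypothetical weakly nonlinear dependence chosen purely for illustration.

      import numpy as np

      # Quasi-linearized central-difference scheme for u_tt = c(u)^2 * u_xx (illustrative only).
      nx, nt = 201, 400
      dx, dt = 1.0 / (nx - 1), 0.002
      x = np.linspace(0.0, 1.0, nx)

      def wave_speed(u):
          return 1.0 + 0.2 * u                     # hypothetical nonlinear material response

      u_prev = np.exp(-200.0 * (x - 0.3) ** 2)     # initial pulse
      u_curr = u_prev.copy()                       # zero initial velocity

      for _ in range(nt):
          c = wave_speed(u_curr)                   # quasi-linearization: freeze c(u) for this step
          r2 = (c * dt / dx) ** 2                  # local Courant number squared (kept < 1 for stability)
          u_next = np.zeros_like(u_curr)
          u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                          + r2[1:-1] * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
          u_prev, u_curr = u_curr, u_next          # fixed (zero-displacement) boundaries

      print(f"peak amplitude after propagation: {u_curr.max():.4f}")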

  18. Analysis of Climatic and Environmental Changes Using CLEARS Web-GIS Information-Computational System: Siberia Case Study

    NASA Astrophysics Data System (ADS)

    Titov, A. G.; Gordov, E. P.; Okladnikov, I.; Shulgina, T. M.

    2011-12-01

    Analysis of recent climatic and environmental changes in Siberia performed on the basis of the CLEARS (CLimate and Environment Analysis and Research System) information-computational system is presented. The system was developed using the specialized software framework for rapid development of thematic information-computational systems based on Web-GIS technologies. It comprises structured environmental datasets, computational kernel, specialized web portal implementing web mapping application logic, and graphical user interface. Functional capabilities of the system include a number of procedures for mathematical and statistical analysis, data processing and visualization. At present a number of georeferenced datasets is available for processing including two editions of NCEP/NCAR Reanalysis, JMA/CRIEPI JRA-25 Reanalysis, ECMWF ERA-40 and ERA Interim Reanalysis, meteorological observation data for the territory of the former USSR, and others. Firstly, using functionality of the computational kernel employing approved statistical methods it was shown that the most reliable spatio-temporal characteristics of surface temperature and precipitation in Siberia in the second half of 20th and beginning of 21st centuries are provided by ERA-40/ERA Interim Reanalysis and APHRODITE JMA Reanalysis, respectively. Namely those Reanalyses are statistically consistent with reliable in situ meteorological observations. Analysis of surface temperature and precipitation dynamics for the territory of Siberia performed on the base of the developed information-computational system reveals fine spatial and temporal details in heterogeneous patterns obtained for the region earlier. Dynamics of bioclimatic indices determining climate change impact on structure and functioning of regional vegetation cover was investigated as well. Analysis shows significant positive trends of growing season length accompanied by statistically significant increase of sum of growing degree days and total annual precipitation over the south of Western Siberia. In particular, we conclude that analysis of trends of growing season length, sum of growing degree-days and total precipitation during the growing season reveals a tendency to an increase of vegetation ecosystems productivity across the south of Western Siberia (55°-60°N, 59°-84°E) in the past several decades. The developed system functionality providing instruments for comparison of modeling and observational data and for reliable climatological analysis allowed us to obtain new results characterizing regional manifestations of global change. It should be added that each analysis performed using the system leads also to generation of the archive of spatio-temporal data fields ready for subsequent usage by other specialists. In particular, the archive of bioclimatic indices obtained will allow performing further detailed studies of interrelations between local climate and vegetation cover changes, including changes of carbon uptake related to variations of types and amount of vegetation and spatial shift of vegetation zones. This work is partially supported by RFBR grants #10-07-00547 and #11-05-01190-a, SB RAS Basic Program Projects 4.31.1.5 and 4.31.2.7.

  19. A novel method for landslide displacement prediction by integrating advanced computational intelligence algorithms.

    PubMed

    Zhou, Chao; Yin, Kunlong; Cao, Ying; Ahmed, Bayes; Fu, Xiaolin

    2018-05-08

    Landslide displacement prediction is considered an essential component of developing early warning systems. The modelling of conventional forecast methods requires extensive monitoring data, which limits their application. To conduct accurate displacement prediction with limited data, a novel method is proposed and applied by integrating three computational intelligence algorithms, namely the wavelet transform (WT), the artificial bee colony (ABC) algorithm, and the kernel-based extreme learning machine (KELM). First, the total displacement was decomposed into several sub-sequences with different frequencies using the WT. Next, each sub-sequence was predicted separately by the KELM, whose parameters were optimized by the ABC. Finally, the predicted total displacement was obtained by adding all the predicted sub-sequences. The Shuping landslide in the Three Gorges Reservoir area in China was taken as a case study. The performance of the new method was compared with the WT-ELM, ABC-KELM, ELM, and support vector machine (SVM) methods. Results show that the prediction accuracy can be improved by decomposing the total displacement into sub-sequences with various frequencies and by predicting them separately. The ABC-KELM algorithm shows the highest prediction capacity, followed by the ELM and SVM. Overall, the proposed method achieved excellent performance both in terms of accuracy and stability.
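
    The following is a hedged sketch of the decompose-predict-sum idea described above, under several simplifying assumptions: the displacement series is synthetic, kernel ridge regression stands in for the ABC-optimised KELM, and no parameter tuning is performed. Each wavelet sub-sequence is forecast one step ahead from its own lagged values and the forecasts are summed.

      import numpy as np
      import pywt
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(0)
      t = np.arange(200)
      # Synthetic monitoring series: trend plus a periodic term plus noise (illustrative only).
      displacement = 0.05 * t + 2.0 * np.sin(2.0 * np.pi * t / 30.0) + rng.normal(0.0, 0.3, t.size)

      def wavelet_components(series, wavelet="db4", level=2):
          """Split the series into sub-sequences (one per wavelet band) that sum back to the series."""
          coeffs = pywt.wavedec(series, wavelet, level=level)
          comps = []
          for i in range(len(coeffs)):
              kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
              comps.append(pywt.waverec(kept, wavelet)[: len(series)])
          return comps

      def one_step_forecast(series, lags=6):
          """Forecast the next value of a sub-sequence from its last `lags` values."""
          X = np.array([series[i - lags:i] for i in range(lags, len(series))])
          y = series[lags:]
          model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.1).fit(X, y)
          return model.predict(series[-lags:].reshape(1, -1))[0]

      prediction = sum(one_step_forecast(c) for c in wavelet_components(displacement))
      print(f"predicted next displacement: {prediction:.3f} (last observed {displacement[-1]:.3f})")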

  20. Computer Aided Instruction (CAI) for the Shipboard Nontactical ADP Program (SNAP). Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, L.D.; Hammons, C.E.; Hume, R.

    Oak Ridge National Laboratory is developing a prototype computer aided instruction package for the Navy Management Systems Support Office. This report discusses the background of the project and the progress to date including a description of the software design, problems encountered, solutions found, and recommendations. The objective of this project is to provide a prototype that will enhance training and can be used as a shipboard refresher and retraining tool. The prototype system will be installed onboard ships where Navy personnel will have ready access to the training. The subsequent testing and evaluation of the prototype could provide the basis for a Navy-wide effort to implement computer aided instruction. The work to date has followed a rigorous structured analysis methodology based on the Yourdon/DeMarco techniques. A set of data flow diagrams and a data dictionary are included in the appendices. The problems encountered revolve around requirements to use existing hardware, software, and programmer capabilities for development, implementation, and maintenance of the instructional software. Solutions have been developed which will allow the software to exist in the given environment and still provide advanced features not available in commercial courses.

  1. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
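
    The adaptive bisection variant is not reproduced here, but the basic ε-constraint mechanics can be shown on a toy problem: one objective is minimised while the other is bounded by a sweep of ε values, tracing an approximate Pareto front. The two algebraic objectives below are hypothetical stand-ins for, e.g., fuel- and time-related trajectory costs; they are not the paper's optimal-control formulation.

      import numpy as np
      from scipy.optimize import minimize

      # Toy bi-objective problem: minimise f1 subject to f2 <= eps for a sweep of eps values.
      f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2            # "fuel-like" cost (illustrative)
      f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2            # "time-like" cost (illustrative)

      pareto = []
      for eps in np.linspace(0.5, 4.0, 8):
          res = minimize(f1, x0=[0.0, 0.0], method="SLSQP",
                         constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}])
          pareto.append((f1(res.x), f2(res.x)))

      for a, b in pareto:
          print(f"f1 = {a:6.3f}   f2 = {b:6.3f}")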

  2. Dynamics of elastic systems

    NASA Astrophysics Data System (ADS)

    Sankovich, Vladimir

    1998-12-01

    The goal of this paper is to build a consistent physical theory of the dynamics of the bat-ball interaction. This requires creating realistic models for both the softball bat and the softball. Some of the features of these models are known phenomenologically, from experiments conducted in our laboratory; others will be introduced and computed from first principles here for the first time. Both interacting objects are treated from the viewpoint of the theory of elasticity, and it is shown how a computer can be used to accurately calculate all the relevant characteristics of bat-ball collisions. It is also shown how the major elastic parameters of the material constituting the interior of a softball can be determined using the existing experimental data. These parameters, such as the Young's modulus, the Poisson ratio, and the damping coefficient, are vital for an accurate description of the ball's dynamics. We demonstrate how the existing theories of the elastic behavior of solid bars and hollow shells can be augmented to simplify the resulting equations and make the subsequent computer analysis feasible. The standard system of fourth-order PDEs is reduced to a system of second order because of the inclusion of the usually ignored effects of the shear forces in the bat.

  3. Nonlinear Visco-Elastic Response of Composites via Micro-Mechanical Models

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Sridharan, Srinivasan

    2005-01-01

    Micro-mechanical models for a study of nonlinear visco-elastic response of composite laminae are developed and their performance compared. A single integral constitutive law proposed by Schapery and subsequently generalized to multi-axial states of stress is utilized in the study for the matrix material. This is used in conjunction with a computationally facile scheme in which hereditary strains are computed using a recursive relation suggested by Henriksen. Composite response is studied using two competing micro-models, viz. a simplified Square Cell Model (SSCM) and a Finite Element based self-consistent Cylindrical Model (FECM). The algorithm is developed assuming that the material response computations are carried out in a module attached to a general purpose finite element program used for composite structural analysis. It is shown that the SSCM as used in investigations of material nonlinearity can involve significant errors in the prediction of transverse Young's modulus and shear modulus. The errors in the elastic strains thus predicted are of the same order of magnitude as the creep strains accruing due to visco-elasticity. The FECM on the other hand does appear to perform better both in the prediction of elastic constants and the study of creep response.

  4. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, showing the output formats and typical plots comparing computer results to each set of input data.
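
    The original work used purpose-built FORTRAN-era programs; the hedged sketch below only restates the idea in modern terms: a second-degree polynomial surface is fitted to synthetic data once with and once without the linear cross-product terms, and the residual scatter of the two fits is compared.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      X = rng.uniform(-1.0, 1.0, size=(200, 2))
      y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] ** 2 + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.1, 200)

      def poly_features(X, degree, cross_products):
          """Pure powers of each variable up to `degree`, optionally with linear cross products."""
          feats = np.hstack([X ** d for d in range(1, degree + 1)])
          if cross_products:
              n = X.shape[1]
              cross = [X[:, [i]] * X[:, [j]] for i in range(n) for j in range(i + 1, n)]
              feats = np.hstack([feats] + cross)
          return feats

      for label, cross in (("without cross products", False), ("with cross products", True)):
          F = poly_features(X, degree=2, cross_products=cross)
          model = LinearRegression().fit(F, y)
          resid = y - model.predict(F)
          print(f"{label:22s}  residual std = {resid.std():.4f}  R^2 = {model.score(F, y):.4f}")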

  5. Computer-aided diagnosis in phase contrast imaging X-ray computed tomography for quantitative characterization of ex vivo human patellar cartilage.

    PubMed

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Glaser, Christian; Wismuller, Axel

    2013-10-01

    Visualization of the ex vivo human patellar cartilage matrix through phase contrast imaging X-ray computed tomography (PCI-CT) has been previously demonstrated. Such studies revealed osteoarthritis-induced changes to chondrocyte organization in the radial zone. This study investigates the application of texture analysis to characterizing such chondrocyte patterns in the presence and absence of osteoarthritic damage. Texture features derived from Minkowski functionals (MF) and gray-level co-occurrence matrices (GLCM) were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These texture features were subsequently used in a machine learning task with support vector regression to classify ROIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver operating characteristic curve (AUC). The best classification performance was observed with the MF features perimeter (AUC: 0.94 ± 0.08) and "Euler characteristic" (AUC: 0.94 ± 0.07), and the GLCM-derived feature "Correlation" (AUC: 0.93 ± 0.07). These results suggest that such texture features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix, enabling classification of cartilage as healthy or osteoarthritic with high accuracy.
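
    Below is a hedged sketch of the GLCM half of the feature extraction described above (the Minkowski-functional features and the support-vector classification step are omitted). It uses scikit-image's graycomatrix/graycoprops functions (named greycomatrix/greycoprops in older releases), and a random array stands in for a PCI-CT cartilage region of interest.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(0)
      roi = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)     # stand-in ROI with 64 gray levels

      # Gray-level co-occurrence matrices for two pixel offsets and two directions.
      glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                          levels=64, symmetric=True, normed=True)

      # "Correlation" is the GLCM feature highlighted in the record; a few others are shown too.
      for prop in ("correlation", "contrast", "homogeneity", "energy"):
          values = graycoprops(glcm, prop)                          # one value per (distance, angle)
          print(f"{prop:12s} mean over offsets: {values.mean():.4f}")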

  6. A Novel Stimulus Artifact Removal Technique for High-Rate Electrical Stimulation

    PubMed Central

    Heffer, Leon F; Fallon, James B

    2008-01-01

    Electrical stimulus artifacts corrupting electrophysiological recordings often make the subsequent analysis of the underlying neural response difficult. This is particularly evident when investigating short-latency neural activity in response to high-rate electrical stimulation. We developed and evaluated an off-line technique for the removal of stimulus artifact from electrophysiological recordings. Pulsatile electrical stimulation was presented at rates of up to 5000 pulses/s during extracellular recordings of guinea pig auditory nerve fibers. Stimulus artifact was removed by replacing the sample points at each stimulus artifact event with values interpolated along a straight line, computed from neighbouring sample points. This technique required only that artifact events be identifiable and that the artifact duration remained less than both the inter-stimulus interval and the time course of the action potential. We have demonstrated that this computationally efficient sample-and-interpolate technique removes the stimulus artifact with minimal distortion of the action potential waveform. We suggest that this technique may have potential applications in a range of electrophysiological recording systems. PMID:18339428
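
    The technique described above lends itself to a very short sketch: assuming the artifact onset samples and a fixed artifact duration are known (e.g., from the stimulus trigger), the corrupted samples are simply replaced by a straight line drawn between the neighbouring clean samples. The synthetic recording below is an illustrative assumption, not data from the study.

      import numpy as np

      def remove_artifacts(signal, artifact_starts, artifact_len):
          """Replace each artifact window with values interpolated between its neighbours."""
          cleaned = signal.astype(float).copy()
          for s in artifact_starts:
              e = s + artifact_len                        # first sample after the artifact
              x0, x1 = cleaned[s - 1], cleaned[e]         # neighbouring uncorrupted samples
              cleaned[s:e] = np.linspace(x0, x1, artifact_len + 2)[1:-1]
          return cleaned

      # Synthetic recording: a 300 Hz waveform plus brief, large stimulus artifacts every 4 ms.
      fs = 20000
      t = np.arange(0, 0.02, 1.0 / fs)
      clean = 50e-6 * np.sin(2.0 * np.pi * 300.0 * t)
      signal = clean.copy()
      starts = np.arange(20, len(t) - 10, 80)
      signal[starts[:, None] + np.arange(4)] += 5e-3      # 4-sample artifacts
      residual = np.abs(remove_artifacts(signal, starts, 4) - clean).max()
      print(f"maximum residual after artifact removal: {residual:.2e} V")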

  7. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.

  8. Theoretical models for duct acoustic propagation and radiation

    NASA Technical Reports Server (NTRS)

    Eversman, Walter

    1991-01-01

    The development of computational methods in acoustics has led to the introduction of analysis and design procedures which model the turbofan inlet as a coupled system, simultaneously modeling propagation and radiation in the presence of realistic internal and external flows. Such models are generally large, require substantial computer speed and capacity, and can be expected to be used in the final design stages, with the simpler models being used in the early design iterations. Emphasis is given to practical modeling methods that have been applied to the acoustical design problem in turbofan engines. The mathematical model is established and the simplest case of propagation in a duct with hard walls is solved to introduce concepts and terminologies. An extensive overview is given of methods for the calculation of attenuation in uniform ducts with uniform flow and with shear flow. Subsequent sections deal with numerical techniques which provide an integrated representation of duct propagation and near- and far-field radiation for realistic geometries and flight conditions.

  9. Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    2001-01-01

    This report includes the results of research in which the COmposite Durability STRuctural ANalysis (CODSTRAN) computational simulation capabilities were augmented and applied to various structures for demonstration of the new features and verification. The first chapter of this report provides an introduction to the computational simulation or virtual laboratory approach for the assessment of damage and fracture progression characteristics in composite structures. The second chapter outlines the details of the overall methodology used, including the failure criteria and the incremental/iterative loading procedure with the definitions of damage, fracture, and equilibrium states. The subsequent chapters each contain an augmented feature of the code and/or demonstration examples. All but one of the presented examples contain laminated composite structures with various fiber/matrix constituents. For each structure simulated, damage initiation and progression mechanisms are identified and the structural damage tolerance is quantified at various degradation stages. Many chapters contain the simulation of defective and defect-free structures to evaluate the effects of existing defects on structural durability.

  10. F-18-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Appearance of Extramedullary Hematopoesis in a Case of Primary Myelofibrosis

    PubMed Central

    Mukherjee, Anirban; Bal, Chandrasekhar; Tripathi, Madhavi; Das, Chandan Jyoti; Shamim, Shamim Ahmed

    2017-01-01

    A 44-year-old female with known primary myelofibrosis presented with shortness of breath. High Resolution Computed Tomography thorax revealed large heterogeneously enhancing extraparenchymal soft tissue density mass involving bilateral lung fields. F-18-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography revealed mildly FDG avid soft tissue density mass with specks of calcification involving bilateral lung fields, liver, and spleen. Subsequent histopathologic evaluation from the right lung mass was suggestive of extramedullary hematopoesis. PMID:28533647

  11. Optimization of thermal protection systems for the space vehicle. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development of the computational techniques for the design optimization of thermal protection systems for the space shuttle vehicle is discussed. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in FORTRAN IV for the CDC 6400 computer, but it was subsequently converted to the FORTRAN V language to be used on the Univac 1108.

  12. Animated analysis of geoscientific datasets: An interactive graphical application

    NASA Astrophysics Data System (ADS)

    Morse, Peter; Reading, Anya; Lueg, Christopher

    2017-12-01

    Geoscientists are required to analyze and draw conclusions from increasingly large volumes of data. There is a need to recognise and characterise features and changing patterns of Earth observables within such large datasets. It is also necessary to identify significant subsets of the data for more detailed analysis. We present an innovative, interactive software tool and workflow to visualise, characterise, sample and tag large geoscientific datasets from both local and cloud-based repositories. It uses an animated interface and human-computer interaction to utilise the capacity of human expert observers to identify features via enhanced visual analytics. 'Tagger' enables users to analyze datasets that are too large in volume to be drawn legibly on a reasonable number of single static plots. Users interact with the moving graphical display, tagging data ranges of interest for subsequent attention. The tool provides a rapid pre-pass process using fast GPU-based OpenGL graphics and data-handling and is coded in the Quartz Composer visual programming language (VPL) on Mac OS X. It makes use of interoperable data formats, and cloud-based (or local) data storage and compute. In a case study, Tagger was used to characterise a decade (2000-2009) of data recorded by the Cape Sorell Waverider Buoy, located approximately 10 km off the west coast of Tasmania, Australia. These data serve as a proxy for the understanding of Southern Ocean storminess, which has both local and global implications. This example shows use of the tool to identify and characterise 4 different types of storm and non-storm events during this time. Events characterised in this way are compared with conventional analysis, noting advantages and limitations of data analysis using animation and human interaction. Tagger provides a new ability to make use of humans as feature detectors in computer-based analysis of large-volume geosciences and other data.

  13. Guidelines for Preparation of a Scientific Paper

    PubMed Central

    Kosiba, Margaret M.

    1988-01-01

    Even the experienced scientific writer may have difficulty transferring research results to clear, concise, publishable words. To assist the beginning scientific writer, guidelines are proposed that will provide direction for determining a topic, developing protocols, collecting data, using computers for analysis and word processing, incorporating copyediting notations, consulting scientific writing manuals, and developing sound writing habits. Guidelines for writing each section of a research paper are described to help the writer prepare the title page, introduction, materials and methods, results, and discussion sections of the paper, as well as the acknowledgments and references. Procedures for writing the first draft and subsequent revisions include a checklist of structural and stylistic problems and common errors in English usage. PMID:3339646

  14. Rock failure analysis by combined thermal weakening and water jet impact

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.

    1976-01-01

    The influence of preheating on the initiation of fracture in rocks subjected to the impingement of a continuous water jet is studied. Preheating the rock is assumed to degrade its mechanical properties and strength in accordance with existing experimental data. The water jet is assumed to place a quasi-static loading on the surface of the rock. The loading is approximated by elementary functions which permit analytic computation of the induced stresses in a rock half-space. The resulting stresses are subsequently coupled with the Griffith criteria for tensile failure to estimate the change, due to heating, in the critical stagnation pressure and velocity of the water jet required to cause failure in the rock.

  15. The approximation of anomalous magnetic field by array of magnetized rods

    NASA Astrophysics Data System (ADS)

    Denis, Byzov; Lev, Muravyev; Natalia, Fedorova

    2017-07-01

    A method for calculating the vertical component of an anomalous magnetic field from its absolute value is presented. The conversion is based on the approximation of magnetic induction modulus anomalies by a set of singular sources and the subsequent calculation of the vertical component of the field with the chosen distribution. Rods uniformly magnetized along their axes were used as the set of singular sources. An analysis of the applicability of different nonlinear optimization methods for solving the given task was carried out. The algorithm is implemented using parallel computing technology on an NVidia GPU. The approximation and the calculation of the vertical component are demonstrated for the regional magnetic field of North Eurasia territories.

  16. Protein expression in Arabidopsis thaliana after chronic clinorotation

    NASA Technical Reports Server (NTRS)

    Piastuch, W. C.; Brown, C. S.

    1995-01-01

    Soluble protein expression in Arabidopsis thaliana L. (Heynh.) leaf and stem tissue was examined after chronic clinorotation. Seeds of Arabidopsis were germinated and plants grown to maturity on horizontal or vertical slow-rotating clinostats (1 rpm) or in stationary vertical control units. Total soluble proteins and in vivo-labeled soluble proteins isolated from these plants were analyzed by two-dimensional SDS PAGE and subsequent fluorography. Visual and computer analysis of the resulting protein patterns showed no significant differences in either total protein expression or in active protein synthesis between horizontal clinorotation and vertical controls in the Arabidopsis leaf and stem tissue. These results show chronic clinorotation does not cause gross changes in protein expression in Arabidopsis.

  17. Computer-assisted three-dimensional reconstructions of (14C)-2-deoxy-D-glucose metabolism in cat lumbosacral spinal cord following cutaneous stimulation of the hindfoot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crockett, D.P.; Smith, W.K.; Proshansky, E.

    1989-10-08

    We report on computer-assisted three-dimensional reconstruction of spinal cord activity associated with stimulation of the plantar cushion (PC) as revealed by (14C)-2-deoxy-D-glucose (2-DG) serial autoradiographs. Moderate PC stimulation in cats elicits a reflex phasic plantar flexion of the toes. Four cats were chronically spinalized at about T6 under barbiturate anesthesia. Four to 11 days later, the cats were injected (i.v.) with 2-DG (100 microCi/kg) and the PC was electrically stimulated with needle electrodes at 2-5 times threshold for eliciting a reflex. Following stimulation, the spinal cord was processed for autoradiography. Subsequently, autoradiographs, representing approximately 8-18 mm from spinal segments L6-S1, were digitized for computer analysis and 3-D reconstruction. Several strategies of analysis were employed: (1) Three-dimensional volume images were color-coded to represent different levels of functional activity. (2) On the reconstructed volumes, virtual sections were made in the horizontal, sagittal, and transverse planes to view regions of 2-DG activity. (3) In addition, we were able to sample different regions within the grey and white matter semi-quantitatively (i.e., pixel intensity) from section to section to reveal differences between ipsi- and contralateral activity, as well as possible variation between sections. These analyses revealed 2-DG activity associated with moderate PC stimulation, not only in the ipsilateral dorsal horn as we had previously demonstrated, but also in both the ipsilateral and contralateral ventral horns, as well as in the intermediate grey matter. The use of novel computer analysis techniques--combined with an unanesthetized preparation--enabled us to demonstrate that the increased metabolic activity in the lumbosacral spinal cord associated with PC stimulation was much more extensive than had heretofore been observed.

  18. Design and implementation of highly parallel pipelined VLSI systems

    NASA Astrophysics Data System (ADS)

    Delange, Alphonsus Anthonius Jozef

    A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems is presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower-level components. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high-level design language, an object-oriented development kit for tool design, a design data management system, and design and analysis tools such as a high-level simulator and a graphics design interface, all of which are integrated into the prototype system, are described. Procedures are described for the synthesis of semiregular processor arrays; for computing the switching of input/output signals, memory management, and control of the processor array; and for the sequencing and segmentation of input/output data streams arising from partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed, and each component is mapped to a module or module generator in a symbolic layout library and compacted for the design rules of VLSI (Very Large Scale Integration) technology. An example is given of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains.

  19. Sickle cell disease diagnosis based on spatio-temporal cell dynamics analysis using 3D printed shearing digital holographic microscopy.

    PubMed

    Javidi, Bahram; Markman, Adam; Rawat, Siddharth; O'Connor, Timothy; Anand, Arun; Andemariam, Biree

    2018-05-14

    We present a spatio-temporal analysis of cell membrane fluctuations to distinguish healthy patients from patients with sickle cell disease. A video hologram containing either healthy red blood cells (h-RBCs) or sickle cell disease red blood cells (SCD-RBCs) was recorded using a low-cost, compact, 3D printed shearing interferometer. Reconstructions were created for each hologram frame (time steps), forming a spatio-temporal data cube. Features were extracted by computing the standard deviations and the mean of the height fluctuations over time and for every location on the cell membrane, resulting in two-dimensional standard deviation and mean maps, followed by taking the standard deviations of these maps. The optical flow algorithm was used to estimate the apparent motion fields between subsequent frames (reconstructions). The standard deviation of the magnitude of the optical flow vectors across all frames was then computed. In addition, seven morphological cell (spatial) features based on optical path length were extracted from the cells to further improve the classification accuracy. A random forest classifier was trained to perform cell identification to distinguish between SCD-RBCs and h-RBCs. To the best of our knowledge, this is the first report of machine learning assisted cell identification and diagnosis of sickle cell disease based on cell membrane fluctuations and morphology using both spatio-temporal and spatial analysis.
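
    Below is a hedged sketch of the spatio-temporal feature extraction described above, with OpenCV's Farneback dense optical flow standing in for whichever flow estimator the authors used, and a random array standing in for the reconstructed hologram frames; the morphological features and the random forest classifier are omitted.

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)
      cube = rng.normal(0.0, 1.0, size=(30, 64, 64)).astype(np.float32)   # frames x rows x cols of cell height

      std_map = cube.std(axis=0)                    # membrane fluctuation amplitude at each location
      mean_map = cube.mean(axis=0)
      features = [std_map.std(), mean_map.std()]    # scalar summaries, as described in the record

      # Frame-to-frame apparent motion via dense optical flow (Farneback's method as a stand-in).
      frames = [cv2.normalize(f, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8) for f in cube]
      flow_mags = []
      for prev, curr in zip(frames[:-1], frames[1:]):
          flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5, levels=3,
                                              winsize=15, iterations=3, poly_n=5,
                                              poly_sigma=1.2, flags=0)
          flow_mags.append(np.linalg.norm(flow, axis=2))
      features.append(np.std(np.stack(flow_mags)))  # variability of the optical-flow magnitude

      print("feature vector:", [round(float(v), 4) for v in features])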

  20. Computational analysis of antibody dynamics identifies recent HIV-1 infection.

    PubMed

    Seaton, Kelly E; Vandergrift, Nathan A; Deal, Aaron W; Rountree, Wes; Bainbridge, John; Grebe, Eduard; Anderson, David A; Sawant, Sheetal; Shen, Xiaoying; Yates, Nicole L; Denny, Thomas N; Liao, Hua-Xin; Haynes, Barton F; Robb, Merlin L; Parkin, Neil; Santos, Breno R; Garrett, Nigel; Price, Matthew A; Naniche, Denise; Duerr, Ann C; Keating, Sheila; Hampton, Dylan; Facente, Shelley; Marson, Kara; Welte, Alex; Pilcher, Christopher D; Cohen, Myron S; Tomaras, Georgia D

    2017-12-21

    Accurate HIV-1 incidence estimation is critical to the success of HIV-1 prevention strategies. Current assays are limited by high false recent rates (FRRs) in certain populations and a short mean duration of recent infection (MDRI). Dynamic early HIV-1 antibody response kinetics were harnessed to identify biomarkers for improved incidence assays. We conducted retrospective analyses on circulating antibodies from known recent and longstanding infections and evaluated binding and avidity measurements of Env and non-Env antigens and multiple antibody forms (i.e., IgG, IgA, IgG3, IgG4, dIgA, and IgM) in a diverse panel of 164 HIV-1-infected participants (clades A, B, C). Discriminant function analysis identified an optimal set of measurements that were subsequently evaluated in a 324-specimen blinded biomarker validation panel. These biomarkers included clade C gp140 IgG3, transmitted/founder clade C gp140 IgG4 avidity, clade B gp140 IgG4 avidity, and gp41 immunodominant region IgG avidity. MDRI was estimated at 215 days or, alternatively, 267 days. FRRs in untreated and treated subjects were 5.0% and 3.6%, respectively. Thus, computational analysis of dynamic HIV-1 antibody isotype and antigen interactions during infection enabled design of a promising HIV-1 recency assay for improved cross-sectional incidence estimation.

  1. Computational analysis of antibody dynamics identifies recent HIV-1 infection

    PubMed Central

    Seaton, Kelly E.; Vandergrift, Nathan A.; Deal, Aaron W.; Rountree, Wes; Anderson, David A.; Sawant, Sheetal; Shen, Xiaoying; Yates, Nicole L.; Denny, Thomas N.; Haynes, Barton F.; Robb, Merlin L.; Parkin, Neil; Santos, Breno R.; Price, Matthew A.; Naniche, Denise; Duerr, Ann C.; Hampton, Dylan; Facente, Shelley; Marson, Kara; Welte, Alex; Pilcher, Christopher D.; Cohen, Myron S.

    2017-01-01

    Accurate HIV-1 incidence estimation is critical to the success of HIV-1 prevention strategies. Current assays are limited by high false recent rates (FRRs) in certain populations and a short mean duration of recent infection (MDRI). Dynamic early HIV-1 antibody response kinetics were harnessed to identify biomarkers for improved incidence assays. We conducted retrospective analyses on circulating antibodies from known recent and longstanding infections and evaluated binding and avidity measurements of Env and non-Env antigens and multiple antibody forms (i.e., IgG, IgA, IgG3, IgG4, dIgA, and IgM) in a diverse panel of 164 HIV-1–infected participants (clades A, B, C). Discriminant function analysis identified an optimal set of measurements that were subsequently evaluated in a 324-specimen blinded biomarker validation panel. These biomarkers included clade C gp140 IgG3, transmitted/founder clade C gp140 IgG4 avidity, clade B gp140 IgG4 avidity, and gp41 immunodominant region IgG avidity. MDRI was estimated at 215 days or, alternatively, 267 days. FRRs in untreated and treated subjects were 5.0% and 3.6%, respectively. Thus, computational analysis of dynamic HIV-1 antibody isotype and antigen interactions during infection enabled design of a promising HIV-1 recency assay for improved cross-sectional incidence estimation. PMID:29263306

  2. Matrix metalloproteinases: structures, evolution, and diversification.

    PubMed

    Massova, I; Kotra, L P; Fridman, R; Mobashery, S

    1998-09-01

    A comprehensive sequence alignment of 64 members of the family of matrix metalloproteinases (MMPs), for the entire sequences and subsequently the catalytic and the hemopexin-like domains, has been performed. The 64 MMPs were selected from plants, invertebrates, and vertebrates. The analyses disclosed that as many as 23 distinct subfamilies of these proteins are known to exist. Information from the sequence alignments was correlated with structures, both crystallographic as well as computational, of the catalytic domains for the 23 representative members of the MMP family. A survey of the metal binding sites and of two loops containing variable sequences of amino acids, which are important for substrate interactions, is presented. The collective data support the proposal that the assembly of the domains into multidomain enzymes was likely to be an early evolutionary event. This was followed by diversification, perhaps in parallel among the MMPs, in a subsequent evolutionary time scale. Analysis indicates that a retrograde structure simplification may have accounted for the evolution of MMPs with simple domain constituents, such as matrilysin, from the larger and more elaborate enzymes.

  3. Influence of psychological factors on acute exacerbation of tension-type headache: Investigation by ecological momentary assessment.

    PubMed

    Kikuchi, Hiroe; Yoshiuchi, Kazuhiro; Ando, Tetsuya; Yamamoto, Yoshiharu

    2015-09-01

    In this study, we investigated whether psychological factors were associated with subsequent acute exacerbation of tension-type headache (TTH) in a prospective and ecologically valid manner with computerized ecological momentary assessment. Eighteen women and five men with TTH wore watch-type computers that acted as an electronic diary for 1week. The subjects recorded momentary headache intensity, psychological stress, anxiety, and depressive mood with a visual analog scale of 0-100 approximately every 6h as well as when waking up, when going to bed, and at acute headache exacerbations. Multilevel logistic regression analysis with acute headache exacerbation occurrence as the outcome was conducted. Person-mean centering was applied to psychological factors to disaggregate between- and within-individual association. Momentary psychological stress was associated with subsequent increase in headache exacerbation within 3h [Odds Ratio (95% CI)=1.32 (1.07, 1.64) for 10-point increments] while the individual mean of psychological stress was not. These results support the possibility that psychological stress could trigger acute exacerbations of TTH. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Emotion regulation and its effects on mood improvement in response to an in vivo peer rejection challenge.

    PubMed

    Reijntjes, Albert; Stegge, Hedy; Terwogt, Mark Meerum; Kamphuis, Jan H; Telch, Michael J

    2006-11-01

    This study examined children's spontaneous use of behavioral emotion regulation (ER) strategies and their effects on subsequent mood change in response to an in vivo peer rejection manipulation. Participants (N = 186), ranging between 10 and 13 years of age, played a computer game based on the television show Survivor and were randomized to either peer rejection (being voted out of the game) or nonrejection control. In response to rejection, more than one third of the participants (38%) displayed a marked worsening (i.e., reliable change) in state mood. After receiving feedback, time spent on several behavioral ER strategies during a 5-minute postfeedback period was assessed. At the end of the postfeedback period, children's cognitive activity was also assessed. More time spent on behavioral distraction was positively linked to subsequent increases in positive affect, whereas the reverse pattern was found for disengagement/passive behavior. Moreover, higher endorsement ratings for the strategy of "cognitive analysis" were associated with stronger increases in negative affect. Copyright 2006 APA, all rights reserved.

  5. Computers in the examination room and the electronic health record: physicians' perceived impact on clinical encounters before and after full installation and implementation.

    PubMed

    Doyle, Richard J; Wang, Nina; Anthony, David; Borkan, Jeffrey; Shield, Renee R; Goldman, Roberta E

    2012-10-01

    We compared physicians' self-reported attitudes and behaviours regarding electronic health record (EHR) use before and after installation of computers in patient examination rooms and transition to full implementation of an EHR in a family medicine training practice to identify anticipated and observed effects these changes would have on physicians' practices and clinical encounters. We conducted two individual qualitative interviews with family physicians. The first interview was conducted before, and the second 8 months after, full implementation of an EHR and computer installation in the examination rooms. Data were analysed through project team discussions and subsequent coding with qualitative analysis software. At the first interviews, physicians frequently expressed concerns about the potential negative effect of the EHR on quality of care and physician-patient interaction, adequacy of their skills in EHR use, and privacy and confidentiality. Nevertheless, most physicians also anticipated multiple benefits, including improved accessibility of patient data and online health information. In the second interviews, physicians reported that their concerns did not persist. Many anticipated benefits were realized, appearing to facilitate collaborative physician-patient relationships. Physicians reported a greater teaching role with patients and sharing online medical information and treatment plan decisions. Before computer installation and full EHR implementation, physicians expressed concerns about the impact of computer use on patient care. After installation and implementation, however, many concerns were mitigated. Using computers in the examination rooms to document and access patients' records along with online medical information and decision-making tools appears to contribute to improved physician-patient communication and collaboration.

  6. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models arises not only from the spatial distribution of the caprock and reservoir (i.e., heterogeneous model parameters), but also because the GCS parameter estimation problem has multiple local minima caused by the complex nonlinear multi-phase (gas and aqueous) and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. The surrogate response surface is first used with a global optimization algorithm to calibrate the model parameters; the prediction uncertainty of the CO2 plume position arising from the propagation of parametric uncertainty is then quantified in the numerical experiments and compared to the actual plume from the 'true' model. The results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification for computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.
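
    The surrogate idea can be illustrated with a deliberately simplified stand-in: a radial-basis-function interpolant built from a handful of "expensive" model runs replaces the simulator inside a global optimizer. The functions and parameters below are toy placeholders, not the Frio-site model or the adaptive sparse-grid collocation surrogate.

        # Toy illustration of surrogate-based calibration: an RBF interpolant built from a
        # few expensive-model runs replaces the simulator inside a global optimizer.
        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import differential_evolution

        def expensive_model(theta):
            """Placeholder for a multi-hour multi-phase simulation (two parameters)."""
            k1, k2 = theta
            return np.array([np.sin(k1) + 0.5 * k2, np.cos(k2) - 0.2 * k1])  # fake "observations"

        rng = np.random.default_rng(1)
        train_theta = rng.uniform(-2.0, 2.0, size=(60, 2))            # design points
        train_out = np.array([expensive_model(t) for t in train_theta])
        surrogate = RBFInterpolator(train_theta, train_out)            # cheap emulator

        observed = expensive_model(np.array([0.7, -0.4])) + rng.normal(0, 0.01, 2)

        def misfit(theta):
            pred = surrogate(theta.reshape(1, -1))[0]
            return np.sum((pred - observed) ** 2)

        result = differential_evolution(misfit, bounds=[(-2, 2), (-2, 2)], seed=2)
        print("calibrated parameters:", result.x)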

  7. Analogs of methyllycaconitine as novel noncompetitive inhibitors of nicotinic receptors: pharmacological characterization, computational modeling, and pharmacophore development.

    PubMed

    McKay, Dennis B; Chang, Cheng; González-Cestari, Tatiana F; McKay, Susan B; El-Hajj, Raed A; Bryant, Darrell L; Zhu, Michael X; Swaan, Peter W; Arason, Kristjan M; Pulipaka, Aravinda B; Orac, Crina M; Bergmeier, Stephen C

    2007-05-01

    As a novel approach to drug discovery involving neuronal nicotinic acetylcholine receptors (nAChRs), our laboratory targeted nonagonist binding sites (i.e., noncompetitive binding sites, negative allosteric binding sites) located on nAChRs. Cultured bovine adrenal cells were used as neuronal models to investigate interactions of 67 analogs of methyllycaconitine (MLA) on native alpha3beta4* nAChRs. The availability of large numbers of structurally related molecules presents a unique opportunity for the development of pharmacophore models for noncompetitive binding sites. Our MLA analogs inhibited nicotine-mediated functional activation of both native and recombinant alpha3beta4* nAChRs with a wide range of IC(50) values (0.9-115 microM). These analogs had little or no inhibitory effects on agonist binding to native or recombinant nAChRs, supporting noncompetitive inhibitory activity. Based on these data, two highly predictive 3D quantitative structure-activity relationship (comparative molecular field analysis and comparative molecular similarity index analysis) models were generated. These computational models were successfully validated and provided insights into the molecular interactions of MLA analogs with nAChRs. In addition, a pharmacophore model was constructed to analyze and visualize the binding requirements to the analog binding site. The pharmacophore model was subsequently applied to search structurally diverse molecular databases to prospectively identify novel inhibitors. The rapid identification of eight molecules from database mining and our successful demonstration of in vitro inhibitory activity support the utility of these computational models as novel tools for the efficient retrieval of inhibitors. These results demonstrate the effectiveness of computational modeling and pharmacophore development, which may lead to the identification of new therapeutic drugs that target novel sites on nAChRs.

  8. Strategy and gaps for modeling, simulation, and control of hybrid systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob

    2015-04-01

    The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation are necessary to design, evaluate, and optimize the system technical and economic performance. Accordingly, this report first establishes the simulation requirements to analyze candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and output parameters needed to evaluate case-specific figures of merit. Accordingly, the associated computational and co-simulation resources needed are established, including physical models when needed, code assembly and integrated solutions platforms, mathematical solvers, and data processing. The report then describes the figures of merit, systems requirements, and constraints that are necessary and sufficient to characterize the grid and hybrid systems behavior and market interactions. Loss of Load Probability (LOLP) and the Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of the technical and economic performance are subsequently discussed. This report further defines modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) systems control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The development status, quality assurance, availability, and maintainability of simulation tools currently available for hybrid systems modeling are presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader Roadmap activity for designing, developing, and demonstrating hybrid energy systems.
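
    As an illustration of the first index mentioned above, a Loss of Load Probability can be estimated by counting the fraction of simulated hours in which available generation falls short of demand; the capacities, outage rates, and load profile below are made-up placeholders, not the report's methodology.

        # Toy Monte Carlo estimate of Loss of Load Probability (LOLP): the fraction of
        # sampled hours in which available generation is below demand.
        import numpy as np

        rng = np.random.default_rng(42)
        n_hours = 100_000

        unit_capacity_mw = np.array([400.0, 300.0, 200.0, 150.0])  # hypothetical units
        forced_outage_rate = np.array([0.05, 0.08, 0.10, 0.12])    # probability a unit is down

        # Each hour, each unit is independently available or on forced outage.
        available = rng.random((n_hours, unit_capacity_mw.size)) > forced_outage_rate
        generation = available.astype(float) @ unit_capacity_mw     # MW available each hour

        load = rng.normal(loc=800.0, scale=120.0, size=n_hours)     # hypothetical hourly demand

        lolp = np.mean(generation < load)
        print(f"estimated LOLP: {lolp:.4f}")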

  9. Semen molecular and cellular features: these parameters can reliably predict subsequent ART outcome in a goat model

    PubMed Central

    Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore

    2009-01-01

    Currently, the assessment of sperm function in a raw or processed semen sample is not able to reliably predict sperm ability to withstand freezing and thawing procedures and in vivo fertility and/or assisted reproductive biotechnologies (ART) outcome. The aim of the present study was to investigate which parameters among a battery of analyses could predict subsequent spermatozoa in vitro fertilization ability and hence blastocyst output in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B and C). In order to assess the predictive value of viability, computer-assisted sperm analyzer (CASA) motility parameters and intracellular ATP concentration before and after thawing, and of DNA integrity after thawing, on subsequent embryo output after an in vitro fertility test, a logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression model explained a deviance of 72% (p < 0.0001), directly related with the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and with two of the three comet parameters considered, i.e., tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value on IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of the many possible ways to explain differences found in embryo output following IVF with different semen donors and may represent a useful tool to select the most suitable donors for semen cryopreservation. PMID:19900288

  10. Daily online testing in large classes: boosting college performance while reducing achievement gaps.

    PubMed

    Pennebaker, James W; Gosling, Samuel D; Ferrell, Jason D

    2013-01-01

    An in-class computer-based system that included daily online testing was introduced in two large university classes. We examined subsequent improvements in academic performance and reductions in the achievement gap in academic performance between lower- and upper-middle-class students. Students (N = 901) brought laptop computers to classes and took daily quizzes that provided immediate and personalized feedback. Student performance was compared with the same data for traditional classes taught previously by the same instructors (N = 935). Exam performance was approximately half a letter grade above previous semesters, based on comparisons of identical questions asked in earlier years. Students in the experimental classes performed better in other classes, both in the semester they took the course and in subsequent semesters. The new system resulted in a 50% reduction in the achievement gap as measured by grades among students of different social classes. These findings suggest that frequent consequential quizzing should be used routinely in large lecture courses to improve performance in class and in other concurrent and subsequent courses.

  11. Detection of a gravitropism phenotype in glutamate receptor-like 3.3 mutants of Arabidopsis thaliana using machine vision and computation.

    PubMed

    Miller, Nathan D; Durham Brooks, Tessa L; Assadi, Amir H; Spalding, Edgar P

    2010-10-01

    Gene disruption frequently produces no phenotype in the model plant Arabidopsis thaliana, complicating studies of gene function. Functional redundancy between gene family members is one common explanation but inadequate detection methods could also be responsible. Here, newly developed methods for automated capture and processing of time series of images, followed by computational analysis employing modified linear discriminant analysis (LDA) and wavelet-based differentiation, were employed in a study of mutants lacking the Glutamate Receptor-Like 3.3 gene. Root gravitropism was selected as the process to study with high spatiotemporal resolution because the ligand-gated Ca(2+)-permeable channel encoded by GLR3.3 may contribute to the ion fluxes associated with gravity signal transduction in roots. Time series of root tip angles were collected from wild type and two different glr3.3 mutants across a grid of seed-size and seedling-age conditions previously found to be important to gravitropism. Statistical tests of average responses detected no significant difference between populations, but LDA separated both mutant alleles from the wild type. After projecting the data onto LDA solution vectors, glr3.3 mutants displayed greater population variance than the wild type in all four conditions. In three conditions the projection means also differed significantly between mutant and wild type. Wavelet analysis of the raw response curves showed that the LDA-detected phenotypes related to an early deceleration and subsequent slower-bending phase in glr3.3 mutants. These statistically significant, heritable, computation-based phenotypes generated insight into functions of GLR3.3 in gravitropism. The methods could be generally applicable to the study of phenotypes and therefore gene function.

  12. Detection of a Gravitropism Phenotype in glutamate receptor-like 3.3 Mutants of Arabidopsis thaliana Using Machine Vision and Computation

    PubMed Central

    Miller, Nathan D.; Durham Brooks, Tessa L.; Assadi, Amir H.; Spalding, Edgar P.

    2010-01-01

    Gene disruption frequently produces no phenotype in the model plant Arabidopsis thaliana, complicating studies of gene function. Functional redundancy between gene family members is one common explanation but inadequate detection methods could also be responsible. Here, newly developed methods for automated capture and processing of time series of images, followed by computational analysis employing modified linear discriminant analysis (LDA) and wavelet-based differentiation, were employed in a study of mutants lacking the Glutamate Receptor-Like 3.3 gene. Root gravitropism was selected as the process to study with high spatiotemporal resolution because the ligand-gated Ca2+-permeable channel encoded by GLR3.3 may contribute to the ion fluxes associated with gravity signal transduction in roots. Time series of root tip angles were collected from wild type and two different glr3.3 mutants across a grid of seed-size and seedling-age conditions previously found to be important to gravitropism. Statistical tests of average responses detected no significant difference between populations, but LDA separated both mutant alleles from the wild type. After projecting the data onto LDA solution vectors, glr3.3 mutants displayed greater population variance than the wild type in all four conditions. In three conditions the projection means also differed significantly between mutant and wild type. Wavelet analysis of the raw response curves showed that the LDA-detected phenotypes related to an early deceleration and subsequent slower-bending phase in glr3.3 mutants. These statistically significant, heritable, computation-based phenotypes generated insight into functions of GLR3.3 in gravitropism. The methods could be generally applicable to the study of phenotypes and therefore gene function. PMID:20647506

  13. The physical vulnerability of elements at risk: a methodology based on fluid and classical mechanics

    NASA Astrophysics Data System (ADS)

    Mazzorana, B.; Fuchs, S.; Levaggi, L.

    2012-04-01

    The impacts of the flood events that occurred in autumn 2011 in the Italian regions of Liguria and Tuscany revived the engagement of public decision makers to enhance, in synergy, flood control and land use planning. In this context, the design of efficient flood risk mitigation strategies and their subsequent implementation critically relies on a careful vulnerability analysis of both the immobile and mobile elements at risk potentially exposed to flood hazards. Based on fluid and classical mechanics notions, we developed computation schemes enabling a dynamic vulnerability and risk analysis for a broad typological variety of elements at risk. The methodological skeleton consists of (1) hydrodynamic computation of the time-varying flood intensities resulting for each element at risk in a succession of loading configurations; (2) modelling the mechanical response of the impacted elements through static, elasto-static and dynamic analyses; (3) characterising the mechanical response through proper structural damage variables and (4) economic valuation of the expected losses as a function of the quantified damage variables. From a computational perspective we coupled the description of the hydrodynamic flow behaviour and the induced structural modifications of the exposed elements at risk. Valuation methods suitable to support a correct mapping from the value domains of the physical damage variables to the economic loss values are discussed. In such a way we aim to complement, from a methodological perspective, the existing, mainly empirical, vulnerability and risk assessment approaches and to refine the conceptual framework of cost-benefit analysis. Moreover, we aim to support the design of effective flood risk mitigation strategies by diminishing the main criticalities within the systems prone to flood risk.

  14. Computational investigation of the human SOD1 mutant, Cys146Arg, that directs familial amyotrophic lateral sclerosis.

    PubMed

    Srinivasan, E; Rajasekaran, R

    2017-07-25

    The genetic substitution mutation of Cys146Arg in the SOD1 protein is predominantly found in the Japanese population suffering from familial amyotrophic lateral sclerosis (FALS). A complete study of the biophysical aspects of this particular missense mutation through conformational analysis and free energy landscapes could provide insight into the pathogenic mechanism of ALS. In this study, we utilized general molecular dynamics simulations along with computational predictions to characterize the structure of the protein as well as the conformational preferences of monomeric wild type and mutant SOD1. Our static analysis, accomplished through multiple programs, predicted the deleterious and destabilizing effect of mutant SOD1. Subsequently, comparative molecular dynamics studies performed on the wild type and mutant SOD1 indicated a loss in protein conformational stability and flexibility. We observed mutational consequences not only in local but also in long-range variations in the structural properties of the SOD1 protein. Long-range intramolecular protein interactions decrease upon mutation, resulting in less compact structures in the mutant protein than in the wild type, suggesting that the mutant structures are less stable than the wild type SOD1. We also presented the free energy landscape to study the collective motion of protein conformations through principal component analysis for the wild type and mutant SOD1. Overall, the study assisted in revealing the cause of the structural destabilization and protein misfolding via structural characterization, secondary structure composition and free energy landscapes. Hence, the computational framework in our study provides a valuable direction for the search for a cure for fatal FALS.

  15. Guidelines for computer security in general practice.

    PubMed

    Schattner, Peter; Pleteshner, Catherine; Bhend, Heinz; Brouns, Johan

    2007-01-01

    As general practice becomes increasingly computerised, data security becomes increasingly important for both patient health and the efficient operation of the practice. The aim was to develop guidelines for computer security in general practice based on a literature review, an analysis of available information on current practice, and a series of key stakeholder interviews. While the guideline was produced in the context of Australian general practice, we have developed a template that is also relevant for other countries. Current data on computer security measures were sought from Australian divisions of general practice. Semi-structured interviews were conducted with general practitioners (GPs), the medical software industry, senior managers within government responsible for health IT (information technology) initiatives, technical IT experts, divisions of general practice and a member of a health information consumer group. The respondents were asked to assess both the likelihood and the consequences of potential risks in computer security being breached. The study suggested that the most important computer security issues in general practice were: the need for a nominated IT security coordinator; having written IT policies, including a practice disaster recovery plan; controlling access to different levels of electronic data; performing and testing backups; protecting against viruses and other malicious code; installing firewalls; undertaking routine maintenance of hardware and software; and securing electronic communication, for example via encryption. This information led to the production of computer security guidelines, including a one-page summary checklist, which were subsequently distributed to all GPs in Australia. This paper maps out a process for developing computer security guidelines for general practice. The specific content will vary in different countries according to their levels of adoption of IT, and cultural, technical and other health service factors. Making these guidelines relevant to local contexts should help maximise their uptake.

  16. From Survey to FEM Analysis for Documentation of Built Heritage: the Case Study of Villa Revedin-Bolasco

    NASA Astrophysics Data System (ADS)

    Guarnieri, A.; Fissore, F.; Masiero, A.; Di Donna, A.; Coppa, U.; Vettore, A.

    2017-05-01

    In the last decade, advances in the fields of close-range photogrammetry, terrestrial laser scanning (TLS) and computer vision (CV) have enabled the collection of different kinds of information about Cultural Heritage objects and the production of highly accurate 3D models. Additionally, the integration of laser scanning technology and Finite Element Analysis (FEA) has gained particular interest in recent years for structural analysis of built heritage, since increasing computational capabilities allow large datasets to be manipulated. In this note we illustrate the approach adopted for surveying, 3D modeling and structural analysis of Villa Revedin-Bolasco, a magnificent historical building located in the small walled town of Castelfranco Veneto, in northern Italy. In 2012 CIRGEO was commissioned by the University of Padova to carry out a survey of the Villa and Park as a preliminary step for subsequent restoration works. The inner geometry of the Villa was captured with two Leica Disto D3a BT hand-held laser meters, while the outer walls of the building were surveyed with Leica C10 and Faro Focus 3D 120 terrestrial laser scanners. Ancillary GNSS measurements were also collected for 3D laser model georeferencing. A solid model was then generated from the global laser point cloud in Rhinoceros software, and a portion of it was used for simulation in a Finite Element Analysis (FEA). In the paper we discuss in detail all the steps and challenges addressed and the solutions adopted concerning the survey, solid modeling and FEA from laser scanning data of the historical complex of Villa Revedin-Bolasco.

  17. Artificial bee colony algorithm for single-trial electroencephalogram analysis.

    PubMed

    Hsu, Wei-Yen; Hu, Ya-Ping

    2015-04-01

    In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model coefficients, and coherence and phase-locking value, are then extracted for subsequent classification. Next, an artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with the results obtained without artifact removal and with feature selection using a genetic algorithm, on single-trial EEG data from six subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.
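
    The wrapper feature-selection idea can be sketched with a simplified bee-colony-style search over binary feature masks scored by a support vector machine; this is a toy sketch on synthetic data, not the authors' exact ABC implementation.

        # Simplified bee-colony-style wrapper feature selection: a population of binary
        # feature masks is iteratively perturbed, and a mask is kept only if it improves
        # cross-validated SVM accuracy.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=120, n_features=20, n_informative=5, random_state=0)

        def fitness(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

        n_sources, n_iters = 8, 15
        sources = rng.random((n_sources, X.shape[1])) < 0.5          # initial food sources
        scores = np.array([fitness(m) for m in sources])

        for _ in range(n_iters):
            for i in range(n_sources):
                candidate = sources[i].copy()
                j = rng.integers(X.shape[1])
                candidate[j] = ~candidate[j]                          # flip one feature bit
                s = fitness(candidate)
                if s > scores[i]:                                     # greedy replacement
                    sources[i], scores[i] = candidate, s

        best = sources[np.argmax(scores)]
        print("selected features:", np.flatnonzero(best), "accuracy:", scores.max())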

  18. High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo

    NASA Technical Reports Server (NTRS)

    Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg

    2011-01-01

    The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.

  19. Dynamic analysis of flexible gear trains/transmissions - An automated approach

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Shareef, N. H.; Xie, M.

    1992-01-01

    In this paper an automated algorithmic method is presented for the dynamic analysis of geared trains/transmissions. These are treated as a system of interconnected flexible bodies. The procedure developed explains the switching of constraints with time as a result of the change in the contacting areas at the gear teeth. The elastic behavior of the system is studied through the employment of three-dimensional isoparametric elements having six degrees-of-freedom at each node. The contact between the bodies is assumed at the various nodes, which could be either a line or a plane. The kinematical expressions, together with the equations of motion derived using Kane's method and strain energy concepts, are presented in a matrix form suitable for computer implementation. The constraint Jacobian matrices are generated automatically based on the contact information between the bodies. The concepts of the relative velocity at the contacting points of the tooth pairs and the subsequent use of the transmission ratios in the analysis are presented.

  20. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks: local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they have not been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy-to-use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available under the GNU GPL. PMID:17877802
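
    For example, the longest-common-subsequence primitive mentioned above follows the standard dynamic-programming recurrence; the sketch below is generic and independent of the BATS C/C++/Perl implementation.

        # Generic dynamic-programming longest common subsequence (LCS), one of the global
        # alignment primitives mentioned above (not the BATS library code itself).
        def lcs(a: str, b: str) -> str:
            m, n = len(a), len(b)
            # dp[i][j] = length of the LCS of a[:i] and b[:j]
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    if a[i - 1] == b[j - 1]:
                        dp[i][j] = dp[i - 1][j - 1] + 1
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            # Trace back to recover one LCS string.
            out, i, j = [], m, n
            while i > 0 and j > 0:
                if a[i - 1] == b[j - 1]:
                    out.append(a[i - 1]); i -= 1; j -= 1
                elif dp[i - 1][j] >= dp[i][j - 1]:
                    i -= 1
                else:
                    j -= 1
            return "".join(reversed(out))

        print(lcs("ACCGGTCG", "GTCGTTCG"))  # prints one longest common subsequence of the two strings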

  1. Integrated Software for Analyzing Designs of Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Philips, Alan D.

    2003-01-01

    Launch Vehicle Analysis Tool (LVA) is a computer program for preliminary design structural analysis of launch vehicles. Before LVA was developed, in order to analyze the structure of a launch vehicle, it was necessary to estimate its weight, feed this estimate into a program to obtain pre-launch and flight loads, then feed these loads into structural and thermal analysis programs to obtain a second weight estimate. If the first and second weight estimates differed, it was necessary to reiterate these analyses until the solution converged. This process generally took six to twelve person-months of effort. LVA incorporates a text-to-structural-layout converter, configuration drawing, mass properties generation, pre-launch and flight loads analysis, loads output plotting, direct-solution structural analysis, and thermal analysis subprograms. These subprograms are integrated in LVA so that solutions can be iterated automatically. LVA incorporates expert-system software that makes fundamental design decisions without intervention by the user. It also includes unique algorithms based on extensive research. The total integration of analysis modules drastically reduces the need for interaction with the user. A typical solution can be obtained in 30 to 60 minutes. Subsequent runs can be done in less than two minutes.
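
    The iterate-until-convergence process described above can be pictured as a simple fixed-point loop between a weight estimate and the structural weight implied by the loads that estimate produces; both functions below are illustrative stand-ins, not LVA's actual subprograms.

        # Abstract sketch of the weight/loads iteration that LVA automates: a fixed-point
        # loop between an estimated vehicle weight and the structural weight implied by
        # the loads that estimate produces.
        def loads_from_weight(weight_kg: float) -> float:
            """Placeholder pre-launch/flight loads model (load-factor-scaled mass proxy)."""
            return 3.5 * weight_kg          # hypothetical peak axial load proxy

        def weight_from_loads(load: float) -> float:
            """Placeholder structural/thermal sizing: heavier loads require more structure."""
            return 9000.0 + 0.02 * load     # hypothetical sizing relation

        weight = 10000.0                    # initial weight estimate, kg
        for iteration in range(100):
            new_weight = weight_from_loads(loads_from_weight(weight))
            if abs(new_weight - weight) < 1e-3:
                break
            weight = new_weight

        print(f"converged weight ~ {weight:.1f} kg after {iteration} iterations")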

  2. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    PubMed

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the relationship between singular excessive media consumption on sleep patterns and linked implications on children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significant reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  3. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find that our method agrees with both within their spread.
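
    A minimal one-dimensional illustration of the Box-Cox step is given below; the paper generalizes this to multivariate posterior samples and fits the transformation within a maximum-likelihood formalism, whereas here scipy's built-in maximum-likelihood estimate of the Box-Cox parameter is applied to a toy skewed sample.

        # Minimal 1-D Gaussianization example: a Box-Cox transform fitted by maximum
        # likelihood turns a skewed sample into an approximately Gaussian one.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        skewed = rng.lognormal(mean=0.0, sigma=0.6, size=5000)   # positively skewed "posterior" sample

        transformed, lam = stats.boxcox(skewed)                  # lambda chosen by maximum likelihood
        print(f"fitted lambda: {lam:.3f}")
        print(f"skewness before: {stats.skew(skewed):.3f}, after: {stats.skew(transformed):.3f}")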

  4. Internal Carotid Artery Web as the Cause of Recurrent Cryptogenic Ischemic Stroke.

    PubMed

    Antigüedad-Muñoz, Jon; de la Riva, Patricia; Arenaza Choperena, Gorka; Muñoz Lopetegi, Amaia; Andrés Marín, Naiara; Fernández-Eulate, Gorka; Moreno Valladares, Manuel; Martínez Zabaleta, Maite

    2018-05-01

    Carotid artery web is considered an exceptional cause of recurrent ischemic strokes in the affected arterial territory. The underlying pathology proposed for this entity is an atypical fibromuscular dysplasia. We present the case of a 43-year-old woman with no cardiovascular risk factors who had experienced 2 cryptogenic ischemic strokes in the same arterial territory within an 11-month period. Although all diagnostic tests initially yielded normal results, detailed analysis of the computed tomography angiography images revealed a carotid web; catheter angiography subsequently confirmed the diagnosis. Carotid surgery was performed, since which time the patient has remained completely asymptomatic. The histological finding of intimal hyperplasia is consistent with previously reported cases of carotid artery web. Carotid artery web is an infrequent cause of stroke, and this diagnosis requires a high level of suspicion plus a detailed analysis of vascular imaging studies. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  5. Bulk viscosity of water in acoustic modal analysis and experiment

    NASA Astrophysics Data System (ADS)

    Kůrečka, Jan; Habán, Vladimír; Himr, Daniel

    2018-06-01

    Bulk viscosity is an important factor in the damping properties of fluid systems and exhibits frequency-dependent behaviour. A comparison between modal analysis in ANSYS Acoustics, custom code and experimental data is presented in this paper. The measured system consists of closed-ended water-filled steel pipes of different lengths. The influence of the pipe wall, flanges on both ends, and longitudinal waves in the structural part was included in the measurement evaluation. Therefore, the obtained values of sound speed and bulk viscosity are parameters of the fluid. A numerical simulation was carried out using only the fluid volume over a range of bulk viscosities. Damping characteristics in this range were compared to measured values. The results show a significant influence of sound speed; subsequently, using the sound speed value regressed from the experimental data yields a better fit between the measurement and the computation.

  6. Study of living single cells in culture: automated recognition of cell behavior.

    PubMed

    Bodin, P; Papin, S; Meyer, C; Travo, P

    1988-07-01

    An automated system capable of analyzing the behavior, in real time, of single living cells in culture, in a noninvasive and nondestructive way, has been developed. A large number of cell positions in single culture dishes were recorded using a computer-controlled, robotized microscope. During subsequent observations, binary images obtained from video image analysis of the microscope visual field allowed the identification of the recorded cells. These cells could be revisited automatically every few minutes. Long-term studies of cell behavior make possible the analysis of cellular locomotory and mitotic activities as well as determination of cell shape (chosen from a defined library) for several hours or days in a fully automated way with observations spaced up to 30 minutes. Short-term studies of cell behavior permit the study, in a semiautomatic way, of acute effects of drugs (5 to 15 minutes) on changes of surface area and length of cells.

  7. Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.

    PubMed

    Hsu, Wei-Yen

    2013-12-01

    In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove the electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with the results obtained without artifact elimination, with feature selection using a genetic algorithm (GA), and with feature classification by Fisher's linear discriminant (FLD), on MI data from two data sets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.

  8. Experiments on the applicability of MAE techniques for predicting sound diffraction by irregular terrains. [Matched Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Berthelot, Yves H.; Pierce, Allan D.; Kearns, James A.

    1987-01-01

    The sound field diffracted by a single smooth hill of finite impedance is studied both analytically, within the context of the theory of Matched Asymptotic Expansions (MAE), and experimentally, under laboratory scale modeling conditions. Special attention is given to the sound field on the diffracting surface and throughout the transition region between the illuminated and the shadow zones. The MAE theory yields integral equations that are amenable to numerical computations. Experimental results are obtained with a spark source producing a pulse of 42 microsec duration and about 130 Pa at 1 m. The insertion loss of the hill is inferred from measurements of the acoustic signals at two locations in the field, with subsequent Fourier analysis on an IBM PC/AT. In general, experimental results support the predictions of the MAE theory, and provide a basis for the analysis of more complicated geometries.

  9. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The combined principal component analysis (PCA) and region matching method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and the algorithm for conversion of orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used with the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.
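
    The PCA step for local fringe orientation can be sketched as follows: within each block, the leading principal component of the intensity gradients gives the direction across the fringes, and the fringe orientation is perpendicular to it. This is a generic sketch, not the authors' full region-matching and direction-disambiguation pipeline.

        # Generic sketch of PCA-based local fringe orientation: the leading principal
        # component of the intensity gradients in a block points across the fringes;
        # the fringe orientation is perpendicular to it.
        import numpy as np

        def block_orientation(block: np.ndarray) -> float:
            """Return the fringe orientation (radians, modulo pi) of one image block."""
            gy, gx = np.gradient(block.astype(float))
            grads = np.column_stack([gx.ravel(), gy.ravel()])
            grads -= grads.mean(axis=0)
            cov = grads.T @ grads / len(grads)
            _, eigvecs = np.linalg.eigh(cov)
            vx, vy = eigvecs[:, -1]                           # principal gradient direction
            return (np.arctan2(vy, vx) + np.pi / 2) % np.pi   # rotate 90 deg to follow fringes

        # Synthetic vertical fringes: the estimate should be close to pi/2.
        yy, xx = np.mgrid[0:64, 0:64]
        fringes = np.cos(2 * np.pi * xx / 8.0)
        print(f"estimated orientation: {block_orientation(fringes):.3f} rad")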

  10. A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments

    PubMed Central

    Huang, Yuanfei; Ma, Fangchao

    2017-01-01

    In order to improve the security in remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.’s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.’s scheme still has weaknesses. In this paper, we show that Moon et al.’s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. From the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme has the best security functionality and is computationally efficient. PMID:29121050

  11. A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments.

    PubMed

    Guo, Hua; Wang, Pei; Zhang, Xiyong; Huang, Yuanfei; Ma, Fangchao

    2017-01-01

    In order to improve the security in remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.'s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.'s scheme still has weaknesses. In this paper, we show that Moon et al.'s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. From the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme has the best security functionality and is computationally efficient.

  12. The Effect of Task Planning on L2 Performance and L2 Development in Text-Based Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Hsu, Hsiu-Chen

    2017-01-01

    This study explored the effect of two planning conditions [the simultaneous use of rehearsal and careful online planning (ROP), and the careful online planning alone (OP)] on L2 production complexity and accuracy and the subsequent development of these two linguistic areas in the context of text-based synchronous computer-mediated communication.…

  13. Discussion of "Computational Electrocardiography: Revisiting Holter ECG Monitoring".

    PubMed

    Baumgartner, Christian; Caiani, Enrico G; Dickhaus, Hartmut; Kulikowski, Casimir A; Schiecke, Karin; van Bemmel, Jan H; Witte, Herbert

    2016-08-05

    This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper "Computational Electrocardiography: Revisiting Holter ECG Monitoring" written by Thomas M. Deserno and Nikolaus Marx. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the paper of Deserno and Marx. In subsequent issues the discussion can continue through letters to the editor.

  14. Application of hybrid methodology to rotors in steady and maneuvering flight

    NASA Astrophysics Data System (ADS)

    Rajmohan, Nischint

    Helicopters are versatile flying machines with capabilities that are unparalleled by fixed-wing aircraft, such as operating in hover and performing vertical takeoff and landing on unprepared sites. This makes their use especially desirable in military and search-and-rescue operations. However, modern helicopters still suffer from high levels of noise and vibration caused by the physical phenomena occurring in the vicinity of the rotor blades. Therefore, improvement in rotorcraft design to reduce the noise and vibration levels requires understanding of the underlying physical phenomena and accurate prediction capabilities for the resulting rotorcraft aeromechanics. The goal of this research is to study the aeromechanics of rotors in steady and maneuvering flight using a hybrid Computational Fluid Dynamics (CFD) methodology. The hybrid CFD methodology uses the Navier-Stokes equations to solve the flow near the blade surface, while the effect of the far wake is computed through a wake model. The hybrid CFD methodology is computationally efficient and its wake modeling approach is nondissipative, making it an attractive tool to study rotorcraft aeromechanics. Several enhancements were made to the CFD methodology and it was coupled to a Computational Structural Dynamics (CSD) methodology to perform a trimmed aeroelastic analysis of a rotor in forward flight. The coupling analyses, both loose and tight, were used to identify the key physical phenomena that affect rotors in different steady flight regimes. The modeling enhancements improved the airloads predictions for a variety of flight conditions. It was found that the tightly coupled method did not impact the loads significantly for steady flight conditions compared to the loosely coupled method. The coupling methodology was extended to maneuvering flight analysis by enhancing the computational and structural models to handle non-periodic flight conditions and vehicle motions in a time-accurate mode. The flight test control angles were employed to enable the maneuvering flight analysis. The fully coupled model captured the presence of three dynamic stall cycles on the rotor in the maneuver. It is important to mention that analysis of maneuvering flight requires knowledge of the pilot's control pitch inputs and the vehicle states. As a result, these computational tools cannot be used for analysis of loads in a maneuver that has not been duplicated in a real flight. This is a significant limitation if these tools are to be selected during the design phase of a helicopter, where its handling qualities are evaluated in different trajectories. Therefore, a methodology was developed to couple the CFD/CSD simulation with an inverse flight mechanics simulation to perform the maneuver analysis without using the flight test control input. The methodology showed reasonable convergence in the steady flight regime, and the predicted control angles compared fairly well with test data. In the maneuvering flight regions, the convergence was slower due to the relaxation techniques used for numerical stability. The subsequently computed control angles for the maneuvering flight regions compared well with test data. Further, the enhancement of the rotor inflow computations in the inverse simulation through implementation of a Lagrangian wake model improved the convergence of the coupling methodology.

  15. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering processes; for instance, by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VH to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render target (MRT) extensions of modern graphics processing units (GPUs), which enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of the VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying Ks (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also performed better than the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
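
    The adaptive binning step can be sketched with a K-means clustering of voxel intensities, in which cluster assignments replace fine uniform histogram bins; the sketch below is a simplified CPU-only example with synthetic intensities, not the GPU/MRT implementation described above.

        # Simplified sketch of intensity-adaptive binning: K-means on voxel intensities
        # groups similar intensities into K bins, replacing a fine uniform histogram.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Hypothetical CT-like volume intensities (e.g., a bimodal soft-tissue/bone mix).
        voxels = np.concatenate([rng.normal(40, 15, 40_000), rng.normal(400, 60, 10_000)])

        K = 16                                             # adaptive bins (vs. thousands of uniform bins)
        km = KMeans(n_clusters=K, n_init=4, random_state=0).fit(voxels.reshape(-1, 1))
        bin_of_voxel = km.labels_                          # per-voxel adaptive bin index
        bin_centers = km.cluster_centers_.ravel()

        counts = np.bincount(bin_of_voxel, minlength=K)    # support for an adaptive-bin histogram
        print("bin centers:", bin_centers.round(1))
        print("voxels per bin:", counts)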

  16. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  17. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
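    The verification idea in the claim can be sketched in a few lines: hash a portion of the database at an initial moment, hash the same portion at a subsequent moment, and compare. The example below uses SQLite and SHA-256 purely as stand-ins; the patent does not prescribe a particular database, query, or hash function.

```python
# Sketch of the hash-and-compare verification idea: hash a portion of a database at an
# initial moment, hash the same portion later, and compare. SQLite and the query below
# are illustrative stand-ins.
import hashlib
import sqlite3

def hash_portion(conn, query="SELECT id, value FROM readings ORDER BY id"):
    h = hashlib.sha256()
    for row in conn.execute(query):
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
    conn.executemany("INSERT INTO readings (value) VALUES (?)", [(1.0,), (2.5,), (3.7,)])

    first_hash = hash_portion(conn)       # initial moment in time
    conn.execute("UPDATE readings SET value = 9.9 WHERE id = 2")
    second_hash = hash_portion(conn)      # subsequent moment in time

    print("unchanged" if first_hash == second_hash else "portion has changed")
```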

  18. Computational Foundations of Natural Intelligence

    PubMed Central

    van Gerven, Marcel

    2017-01-01

    New developments in AI and neuroscience are revitalizing the quest to understand natural intelligence, offering insight into how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches is discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence. PMID:29375355

  19. PGMS: A Case Study of Collecting PDA-Based Geo-Tagged Malaria-Related Survey Data

    PubMed Central

    Zhou, Ying; Lobo, Neil F.; Wolkon, Adam; Gimnig, John E.; Malishee, Alpha; Stevenson, Jennifer; Sulistyawati; Collins, Frank H.; Madey, Greg

    2014-01-01

    Using mobile devices, such as personal digital assistants (PDAs), smartphones, tablet computers, etc., to electronically collect malaria-related field data is the way forward for field questionnaires. This case study describes the design of a generic survey framework, the PDA-based geo-tagged malaria-related data collection tool (PGMS), which can be used not only for large-scale community-level geo-tagged electronic malaria-related surveys, but also for a wide variety of electronic data collection efforts for other infectious diseases. The framework includes two parts: the database designed for subsequent cross-sectional data analysis and the customized programs for the six study sites (two in Kenya, three in Indonesia, and one in Tanzania). In addition to the framework development, we also present the methods we used when configuring and deploying the PDAs to 1) reduce data entry errors, 2) conserve battery power, 3) field install the programs onto dozens of handheld devices, 4) translate electronic questionnaires into local languages, 5) prevent data loss, and 6) transfer data from PDAs to computers for future analysis and storage. Since 2008, PGMS has successfully completed a number of surveys that recorded 10,871 compounds and households, 52,126 persons, and 17,100 bed nets from the six sites. These numbers are still growing. PMID:25048377

  20. Cognitive performance modeling based on general systems performance theory.

    PubMed

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  1. Computational analysis of stochastic heterogeneity in PCR amplification efficiency revealed by single molecule barcoding

    PubMed Central

    Best, Katharine; Oakes, Theres; Heather, James M.; Shawe-Taylor, John; Chain, Benny

    2015-01-01

    The polymerase chain reaction (PCR) is one of the most widely used techniques in molecular biology. In combination with High Throughput Sequencing (HTS), PCR is widely used to quantify transcript abundance for RNA-seq, and in the context of analysis of T and B cell receptor repertoires. In this study, we combine DNA barcoding with HTS to quantify PCR output from individual target molecules. We develop computational tools that simulate both the PCR branching process itself, and the subsequent subsampling which typically occurs during HTS sequencing. We explore the influence of different types of heterogeneity on sequencing output, and compare them to experimental results where the efficiency of amplification is measured by barcodes uniquely identifying each molecule of starting template. Our results demonstrate that the PCR process introduces substantial amplification heterogeneity, independent of primer sequence and bulk experimental conditions. This heterogeneity can be attributed both to inherited differences between different template DNA molecules, and the inherent stochasticity of the PCR process. The results demonstrate that PCR heterogeneity arises even when reaction and substrate conditions are kept as constant as possible, and therefore single molecule barcoding is essential in order to derive reproducible quantitative results from any protocol combining PCR with HTS. PMID:26459131
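    A minimal simulation of the two stages discussed above, PCR branching followed by sequencing subsampling, can be written as follows. Each barcoded template is given its own inherited amplification efficiency and each copy duplicates with that probability per cycle; the efficiencies, cycle count and read depth are illustrative assumptions, not the authors' fitted values.

```python
# Minimal simulation of PCR amplification heterogeneity followed by sequencing
# subsampling. Each starting template gets its own (inherited) efficiency, and each
# copy duplicates with that probability in every cycle; parameters are illustrative.
import numpy as np

def simulate_pcr(n_templates=100, n_cycles=20, mean_eff=0.85, eff_sd=0.05, seed=1):
    rng = np.random.default_rng(seed)
    eff = np.clip(rng.normal(mean_eff, eff_sd, n_templates), 0.0, 1.0)
    copies = np.ones(n_templates, dtype=np.int64)
    for _ in range(n_cycles):
        # Each existing copy of molecule i duplicates with probability eff[i].
        copies += rng.binomial(copies, eff)
    return copies

def subsample_reads(copies, n_reads=50_000, seed=2):
    rng = np.random.default_rng(seed)
    p = copies / copies.sum()
    # Reads per barcode after sampling a finite number of sequencing reads.
    return rng.multinomial(n_reads, p)

if __name__ == "__main__":
    copies = simulate_pcr()
    reads = subsample_reads(copies)
    print("CV of molecule copy numbers:", copies.std() / copies.mean())
    print("CV of reads per barcode    :", reads.std() / reads.mean())
```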

  2. Prediction of the translocon-mediated membrane insertion free energies of protein sequences.

    PubMed

    Park, Yungki; Helms, Volkhard

    2008-05-15

    Helical membrane proteins (HMPs) play crucial roles in a variety of cellular processes. Unlike water-soluble proteins, HMPs need not only to fold but also get inserted into the membrane to be fully functional. This process of membrane insertion is mediated by the translocon complex. Thus, it is of great interest to develop computational methods for predicting the translocon-mediated membrane insertion free energies of protein sequences. We have developed Membrane Insertion (MINS), a novel sequence-based computational method for predicting the membrane insertion free energies of protein sequences. A benchmark test gives a correlation coefficient of 0.74 between predicted and observed free energies for 357 known cases, which corresponds to a mean unsigned error of 0.41 kcal/mol. These results are significantly better than those obtained by traditional hydropathy analysis. Moreover, the ability of MINS to reasonably predict membrane insertion free energies of protein sequences allows for effective identification of transmembrane (TM) segments. Subsequently, MINS was applied to predict the membrane insertion free energies of 316 TM segments found in known structures. An in-depth analysis of the predicted free energies reveals a number of interesting findings about the biogenesis and structural stability of HMPs. A web server for MINS is available at http://service.bioinformatik.uni-saarland.de/mins
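    A sliding-window scan is the simplest way to picture how a sequence-based predictor of this kind can flag insertion-prone segments. The sketch below sums per-residue apparent free-energy contributions over 19-residue windows; the values in the dictionary are placeholders invented for illustration and are not the MINS parameters or any published scale.

```python
# Illustrative sliding-window estimate of an apparent membrane-insertion free energy for
# each candidate 19-residue segment. The per-residue values below are PLACEHOLDERS chosen
# only to show the mechanics; they are not the MINS parameters or any published scale.
PLACEHOLDER_DDG = {
    "A": 0.1, "C": 0.0, "D": 2.5, "E": 2.0, "F": -0.3, "G": 0.7, "H": 2.0,
    "I": -0.6, "K": 2.7, "L": -0.6, "M": -0.1, "N": 2.0, "P": 2.2, "Q": 2.4,
    "R": 2.6, "S": 0.8, "T": 0.5, "V": -0.3, "W": 0.3, "Y": 0.7,
}

def window_free_energies(sequence, window=19):
    """Return (start_index, apparent dG) for every window; lower dG = more insertion-prone."""
    seq = sequence.upper()
    results = []
    for start in range(len(seq) - window + 1):
        segment = seq[start:start + window]
        dg = sum(PLACEHOLDER_DDG.get(aa, 0.0) for aa in segment)
        results.append((start, dg))
    return results

if __name__ == "__main__":
    demo = "MKTLLILAVLLAIVFILLLVVGGSSDDEEKKRRQQNNPPST"
    best = min(window_free_energies(demo), key=lambda t: t[1])
    print("most insertion-prone window starts at", best[0], "with dG =", round(best[1], 2))
```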

  3. An efficient rhythmic component expression and weighting synthesis strategy for classifying motor imagery EEG in a brain computer interface

    NASA Astrophysics Data System (ADS)

    Wang, Tao; He, Bin

    2004-03-01

    The recognition of mental states during motor imagery tasks is crucial for EEG-based brain computer interface research. We have developed a new algorithm based on a frequency decomposition and weighting synthesis strategy for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of filtered EEG signals for each trial were extracted as a measure of instantaneous power at each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling of envelopes of the feature signals, and subsequently applying principal component analysis. The linear discriminant analysis algorithm was then used to classify the features, owing to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The present classification algorithm was applied to a dataset of nine human subjects, and achieved a success rate of classification of 90% in training and 77% in testing. These promising results suggest that the proposed classification algorithm can be used to initiate general-purpose mental state recognition based on motor imagery tasks.
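    The pipeline described above (band-wise envelopes, down-sampling, PCA, then linear discriminant analysis) can be sketched on synthetic data as follows. The band edges, sampling rate, filter order and the injected 10 Hz class difference are illustrative assumptions, not the authors' settings.

```python
# Sketch of the band-envelope -> downsample -> PCA -> LDA pipeline on synthetic data.
# Not the authors' implementation; band edges, rates and shapes are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 100  # Hz

def band_envelope_features(trials, bands, step=10):
    """trials: (n_trials, n_samples). Returns concatenated, down-sampled band envelopes."""
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=FS, output="sos")
        filtered = sosfiltfilt(sos, trials, axis=1)
        envelope = np.abs(hilbert(filtered, axis=1))
        feats.append(envelope[:, ::step])          # down-sample the envelope
    return np.concatenate(feats, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, t = 120, 200                                # 120 trials of 2 s at 100 Hz
    y = rng.integers(0, 2, n)                      # imagined left vs right hand
    trials = rng.standard_normal((n, t))
    trials[y == 1] += np.sin(2 * np.pi * 10 * np.arange(t) / FS)  # class-dependent rhythm

    bands = [(f, f + 1) for f in range(5, 25)]     # twenty 1 Hz band bins from 5 to 25 Hz
    X = PCA(n_components=3).fit_transform(band_envelope_features(trials, bands))
    clf = LinearDiscriminantAnalysis().fit(X[:80], y[:80])
    print("test accuracy:", clf.score(X[80:], y[80:]))
```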

  4. MaPLE: A MapReduce Pipeline for Lattice-based Evaluation and Its Application to SNOMED CT

    PubMed Central

    Zhang, Guo-Qiang; Zhu, Wei; Sun, Mengmeng; Tao, Shiqiang; Bodenreider, Olivier; Cui, Licong

    2015-01-01

    Non-lattice fragments are often indicative of structural anomalies in ontological systems and, as such, represent possible areas of focus for subsequent quality assurance work. However, extracting the non-lattice fragments in large ontological systems is computationally expensive if not prohibitive, using a traditional sequential approach. In this paper we present a general MapReduce pipeline, called MaPLE (MapReduce Pipeline for Lattice-based Evaluation), for extracting non-lattice fragments in large partially ordered sets and demonstrate its applicability in ontology quality assurance. Using MaPLE in a 30-node Hadoop local cloud, we systematically extracted non-lattice fragments in 8 SNOMED CT versions from 2009 to 2014 (each containing over 300k concepts), with an average total computing time of less than 3 hours per version. With dramatically reduced time, MaPLE makes it feasible not only to perform exhaustive structural analysis of large ontological hierarchies, but also to systematically track structural changes between versions. Our change analysis showed that the average change rates on the non-lattice pairs are up to 38.6 times higher than the change rates of the background structure (concept nodes). This demonstrates that fragments around non-lattice pairs exhibit significantly higher rates of change in the process of ontological evolution. PMID:25705725
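    The sketch below shows, sequentially and on a toy is-a hierarchy, what the non-lattice test looks for: a concept pair is flagged when it has more than one minimal common ancestor, i.e. no unique least common subsumer. This is one common formulation of a non-lattice pair; the MapReduce parallelisation that makes the computation feasible at SNOMED CT scale is not reproduced here.

```python
# Sequential toy illustration of the non-lattice test on a small subsumption DAG.
# A concept pair is flagged when it has more than one minimal common ancestor.
from functools import lru_cache

PARENTS = {                       # child -> direct is-a parents (toy ontology)
    "root": [],
    "p": ["root"], "q": ["root"],
    "a": ["p", "q"], "b": ["p", "q"],
    "c": ["root"], "d": ["c"], "e": ["c"],
}

@lru_cache(maxsize=None)
def ancestors(concept):
    """All strict ancestors of a concept under transitive is-a."""
    result = set()
    for parent in PARENTS[concept]:
        result.add(parent)
        result |= ancestors(parent)
    return frozenset(result)

def minimal_common_ancestors(x, y):
    common = ancestors(x) & ancestors(y)
    return {c for c in common
            if not any(c in ancestors(other) for other in common if other != c)}

def is_non_lattice_pair(x, y):
    return len(minimal_common_ancestors(x, y)) > 1

if __name__ == "__main__":
    print(minimal_common_ancestors("a", "b"), is_non_lattice_pair("a", "b"))  # {'p','q'} True
    print(minimal_common_ancestors("d", "e"), is_non_lattice_pair("d", "e"))  # {'c'} False
```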

  5. Orthogonal switching of AMS axes during type-2 fold interference: Insights from integrated X-ray computed tomography, AMS and 3D petrography

    NASA Astrophysics Data System (ADS)

    Sayab, Mohammad; Miettinen, Arttu; Aerden, Domingo; Karell, Fredrik

    2017-10-01

    We applied X-ray computed microtomography (μ-CT) in combination with anisotropy of magnetic susceptibility (AMS) analysis to study metamorphic rock fabrics in an oriented drill core sample of pyrite-pyrrhotite-quartz-mica schist. The sample is extracted from the Paleoproterozoic Martimo metasedimentary belt of northern Finland. The μ-CT resolves the spatial distribution, shape and orientation of 25,920 pyrrhotite and 153 pyrite grains localized in mm-thick metapelitic laminae. Together with microstructural analysis, the μ-CT allows us to interpret the prolate symmetry of the AMS ellipsoid and its relationship to the deformation history. AMS of the sample is controlled by pyrrhotite porphyroblasts that grew syntectonically during D1 in subhorizontal microlithons. The short and intermediate axes (K3 and K2) of the AMS ellipsoid interchanged positions during a subsequent deformation (D2) that intensely crenulated S1 and deformed pyrrhotite, while the long axes (K1) maintained a constant position parallel to the maximum stretching direction. However, it is likely that all the three AMS axes switched, similar to the three principal axes of the shape ellipsoid of pyrite porphyroblasts from D1 to D2. The superposition of D1 and D2 produced a type-2 fold interference pattern.

  6. Evaluating the Effectiveness of Flood Control Strategies in Contrasting Urban Watersheds and Implications for Houston's Future Flood Vulnerability

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2016-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer volume of data and the compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA, using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
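    For a single pixel, fully constrained unmixing can be approximated by appending a heavily weighted sum-to-one row to a non-negative least-squares solve, as sketched below. The endmember spectra are synthetic and the approach is a generic FCLS stand-in, not the NEX-scale implementation used in the study.

```python
# Sketch of fully constrained least-squares (FCLS) unmixing for one pixel: abundances
# are forced non-negative and (approximately) sum-to-one by appending a heavily weighted
# sum-to-one row to a standard NNLS solve. Endmember spectra below are synthetic.
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, endmembers, delta=1e3):
    """pixel: (n_bands,), endmembers: (n_bands, n_classes) -> abundance vector."""
    n_bands, n_classes = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones((1, n_classes))])
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.random((6, 3))                      # 6 bands x 3 endmembers (S, V, D)
    true = np.array([0.2, 0.5, 0.3])            # substrate, vegetation, dark fractions
    pixel = E @ true + 0.01 * rng.standard_normal(6)
    est = fcls_unmix(pixel, E)
    print("estimated abundances:", np.round(est, 3), "sum =", round(est.sum(), 3))
```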

  7. MaPLE: A MapReduce Pipeline for Lattice-based Evaluation and Its Application to SNOMED CT.

    PubMed

    Zhang, Guo-Qiang; Zhu, Wei; Sun, Mengmeng; Tao, Shiqiang; Bodenreider, Olivier; Cui, Licong

    2014-10-01

    Non-lattice fragments are often indicative of structural anomalies in ontological systems and, as such, represent possible areas of focus for subsequent quality assurance work. However, extracting the non-lattice fragments in large ontological systems is computationally expensive if not prohibitive, using a traditional sequential approach. In this paper we present a general MapReduce pipeline, called MaPLE (MapReduce Pipeline for Lattice-based Evaluation), for extracting non-lattice fragments in large partially ordered sets and demonstrate its applicability in ontology quality assurance. Using MaPLE in a 30-node Hadoop local cloud, we systematically extracted non-lattice fragments in 8 SNOMED CT versions from 2009 to 2014 (each containing over 300k concepts), with an average total computing time of less than 3 hours per version. With dramatically reduced time, MaPLE makes it feasible not only to perform exhaustive structural analysis of large ontological hierarchies, but also to systematically track structural changes between versions. Our change analysis showed that the average change rates on the non-lattice pairs are up to 38.6 times higher than the change rates of the background structure (concept nodes). This demonstrates that fragments around non-lattice pairs exhibit significantly higher rates of change in the process of ontological evolution.

  8. Understanding ZHENG in traditional Chinese medicine in the context of neuro-endocrine-immune network.

    PubMed

    Li, S; Zhang, Z Q; Wu, L J; Zhang, X G; Li, Y D; Wang, Y Y

    2007-01-01

    Traditional Chinese medicine uses ZHENG as the key pathological principle to understand the human homeostasis and guide the applications of Chinese herbs. Here, a systems biology approach with the combination of computational analysis and animal experiment is used to investigate this complex issue, ZHENG, in the context of the neuro-endocrine-immune (NEI) system. By using the methods of literature mining, network analysis and topological comparison, it is found that hormones are predominant in the Cold ZHENG network, immune factors are predominant in the Hot ZHENG network, and these two networks are connected by neuro-transmitters. In addition, genes related to Hot ZHENG-related diseases are mainly present in the cytokine-cytokine receptor interaction pathway, whereas genes related to both the Cold-related and Hot-related diseases are linked to the neuroactive ligand-receptor interaction pathway. These computational findings were subsequently verified by experiments on a rat model of collagen-induced arthritis, which indicate that the Cold ZHENG-oriented herbs tend to affect the hub nodes in the Cold ZHENG network, and the Hot ZHENG-oriented herbs tend to affect the hub nodes in the Hot ZHENG network. These investigations demonstrate that the thousand-year-old concept of ZHENG may have a molecular basis with NEI as background.

  9. Computational modeling of human head under blast in confined and open spaces: primary blast injury.

    PubMed

    Rezaei, A; Salimi Jazi, M; Karami, G

    2014-01-01

    In this paper, a computational modeling for biomechanical analysis of primary blast injuries is presented. The responses of the brain in terms of mechanical parameters under different blast spaces including open, semi-confined, and confined environments are studied. In the study, the effect of direct and indirect blast waves from the neighboring walls in the confined environments will be taken into consideration. A 50th percentile finite element head model is exposed to blast waves of different intensities. In the open space, the head experiences a sudden intracranial pressure (ICP) change, which vanishes in a matter of a few milliseconds. The situation is similar in semi-confined space, but in the confined space, the reflections from the walls will create a number of subsequent peaks in ICP with a longer duration. The analysis procedure is based on a simultaneous interaction simulation of the deformable head and its components with the blast wave propagations. It is concluded that compared with the open and semi-confined space settings, the walls in the confined space scenario enhance the risk of primary blast injuries considerably because of indirect blast waves transferring a larger amount of damaging energy to the head. Copyright © 2013 John Wiley & Sons, Ltd.

  10. The 1943 K emission spectrum of H2^16O between 6600 and 7050 cm^-1

    NASA Astrophysics Data System (ADS)

    Czinki, Eszter; Furtenbacher, Tibor; Császár, Attila G.; Eckhardt, André K.; Mellau, Georg Ch.

    2018-02-01

    An emission spectrum of H2^16O has been recorded, with Doppler-limited resolution, at 1943 K using Hot Gas Molecular Emission (HOTGAME) spectroscopy. The wavenumber range covered is 6600 to 7050 cm^-1. This work reports the analysis and subsequent assignment of close to 3700 H2^16O transitions out of a total of more than 6700 measured peaks. The analysis is based on the Measured Active Rotational-Vibrational Energy Levels (MARVEL) energy levels of H2^16O determined in 2013 and emission line intensities obtained from accurate variational nuclear-motion computations. The analysis of the spectrum yields about 1300 transitions not measured previously and 23 previously unidentified experimental rovibrational energy levels. The accuracy of the line positions and intensities used in the analysis was improved with the spectrum deconvolution software SyMath by creating a peak list corresponding to the dense emission spectrum. The extensive list of labeled transitions and the new experimental energy levels obtained are deposited in the Supplementary Material of this article as well as in the ReSpecTh (http://www.respecth.hu) information system.

  11. Scheme for Entering Binary Data Into a Quantum Computer

    NASA Technical Reports Server (NTRS)

    Williams, Colin

    2005-01-01

    A quantum algorithm provides for the encoding of an exponentially large number of classical data bits by use of a smaller (polynomially large) number of quantum bits (qubits). The development of this algorithm was prompted by the need, heretofore not satisfied, for a means of entering real-world binary data into a quantum computer. The data format provided by this algorithm is suitable for subsequent ultrafast quantum processing of the entered data. Potential applications lie in disciplines (e.g., genomics) in which one needs to search for matches between parts of very long sequences of data. For example, the algorithm could be used to encode the N-bit-long human genome in only log2(N) qubits. The resulting log2(N)-qubit state could then be used for subsequent quantum data processing - for example, to perform rapid comparisons of sequences.
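    The abstract does not spell out the encoding, but amplitude encoding is one standard way to pack 2^n classical values into n qubits, and it matches the log2(N)-qubit count quoted above. The sketch below builds such a state vector classically with NumPy as an illustration only; it is not necessarily the patented scheme.

```python
# Illustration of one standard way to pack N classical bits into log2(N) qubits:
# amplitude encoding, where the bit string becomes the (normalised) amplitudes of a
# state vector. This is an illustrative stand-in, not necessarily the exact scheme
# of the algorithm described above.
import numpy as np

def amplitude_encode(bits):
    """Map an N-bit string (N a power of two) to a log2(N)-qubit state vector."""
    x = np.asarray(bits, dtype=float)
    n = x.size
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    norm = np.linalg.norm(x)
    if norm == 0:
        raise ValueError("all-zero data cannot be normalised into a quantum state")
    return x / norm                      # amplitudes of a log2(n)-qubit register

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0]      # 8 classical bits
    state = amplitude_encode(data)
    print("qubits needed:", int(np.log2(len(data))))   # 3
    print("state vector:", np.round(state, 3), "norm:", np.linalg.norm(state))
```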

  12. Letter regarding 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics' by Patrizi et al. and research reproducibility.

    PubMed

    2017-04-01

    The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.

  13. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.

  14. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    PubMed

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.

  15. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter

    PubMed Central

    Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei

    2013-01-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200

  16. Systematic computation with functional gene-sets among leukemic and hematopoietic stem cells reveals a favorable prognostic signature for acute myeloid leukemia.

    PubMed

    Yang, Xinan Holly; Li, Meiyi; Wang, Bin; Zhu, Wanqi; Desgardin, Aurelie; Onel, Kenan; de Jong, Jill; Chen, Jianjun; Chen, Luonan; Cunningham, John M

    2015-03-24

    Genes that regulate stem cell function are suspected to exert adverse effects on prognosis in malignancy. However, diverse cancer stem cell signatures are difficult for physicians to interpret and apply clinically. To connect the transcriptome and stem cell biology, with potential clinical applications, we propose a novel computational "gene-to-function, snapshot-to-dynamics, and biology-to-clinic" framework to uncover core functional gene-set signatures. This framework incorporates three function-centric gene-set analysis strategies: a meta-analysis of both microarray and RNA-seq data, novel dynamic network mechanism (DNM) identification, and a personalized prognostic indicator analysis. This work uses the complex disease acute myeloid leukemia (AML) as a research platform. We introduced an adjustable "soft threshold" to a functional gene-set algorithm and found that two different analysis methods identified distinct gene-set signatures from the same samples. We identified a 30-gene cluster that characterizes leukemic stem cell (LSC)-depleted cells and a 25-gene cluster that characterizes LSC-enriched cells in parallel; both mark favorable prognosis in AML. Genes within each signature significantly share common biological processes and/or molecular functions (empirical p = 6e-5 and 0.03 respectively). The 25-gene signature reflects the abnormal development of stem cells in AML, such as AURKA over-expression. We subsequently determined that the clinical relevance of both signatures is independent of known clinical risk classifications in 214 patients with cytogenetically normal AML. We successfully validated the prognostic value of both signatures in two independent cohorts of 91 and 242 patients respectively (log-rank p < 0.0015 and 0.05; empirical p < 0.015 and 0.08). The proposed algorithms and computational framework will harness systems biology research because they efficiently translate gene-sets (rather than single genes) into biological discoveries about AML and other complex diseases.

  17. Consequence analysis in LPG installation using an integrated computer package.

    PubMed

    Ditali, S; Colombi, M; Moreschini, G; Senni, S

    2000-01-07

    This paper presents the prototype of the computer code, Atlantide, developed to assess the consequences associated with accidental events that can occur in an LPG storage plant. The characteristic of Atlantide is that it is simple enough, yet adequate, to cope with consequence analysis as required by Italian legislation fulfilling the Seveso Directive. The application of Atlantide is appropriate for LPG storage/transfer installations. The models and correlations implemented in the code are relevant to flashing liquid releases, heavy gas dispersion and other typical phenomena such as BLEVE/Fireball. The computer code allows, on the basis of the operating/design characteristics, the study of the relevant accidental events from the evaluation of the release rate (liquid, gaseous and two-phase) in the unit involved, to the analysis of the subsequent evaporation and dispersion, up to the assessment of the final phenomena of fire and explosion. This is done taking as reference simplified Event Trees which describe the evolution of accidental scenarios, taking into account the most likely meteorological conditions, the different release situations and other features typical of an LPG installation. The limited input data required and the automatic linking between the individual models, which are activated in a defined sequence depending on the accidental event selected, minimize both the time required for the risk analysis and the possibility of errors. Models and equations implemented in Atlantide have been selected from public literature or in-house developed software and tailored with the aim of being easy to use and fast to run while, nevertheless, providing realistic simulation of the accidental event as well as reliable results, in terms of physical effects and hazardous areas. The results have been compared with those of other internationally recognized codes and with the criteria adopted by Italian authorities to verify the Safety Reports for LPG installations. A brief description of the theoretical basis of each model implemented in Atlantide and an example of its application are included in the paper.

  18. Method and apparatus of parallel computing with simultaneously operating stream prefetching and list prefetching engines

    DOEpatents

    Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan

    2012-12-11

    A prefetch system improves a performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine. The prefetch system operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data to be needed in subsequent clock cycles in the processor in response to the passed command.

  19. Predicting the thermal/structural performance of the atmospheric trace molecules spectroscopy /ATMOS/ Fourier transform spectrometer

    NASA Technical Reports Server (NTRS)

    Miller, J. M.

    1980-01-01

    ATMOS is a Fourier transform spectrometer that measures atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared against the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.

  20. Another procedure for the preliminary ordering of loci based on two point lod scores.

    PubMed

    Curtis, D

    1994-01-01

    Because of the difficulty of performing full likelihood analysis over multiple loci and the large numbers of possible orders, a number of methods have been proposed for quickly evaluating orders and, to a lesser extent, for generating good orders. A new method is proposed which uses a function which is moderately laborious to compute, the sum of lod scores between all pairs of loci. This function can be smoothly minimized by initially allowing the loci to be placed anywhere in space, and only subsequently constraining them to lie along a one-dimensional map. Application of this approach to sample data suggests that it has promise and might usefully be combined with other methods when loci need to be ordered.

  1. Computer modeling the fatigue crack growth rate behavior of metals in corrosive environments

    NASA Technical Reports Server (NTRS)

    Richey, Edward, III; Wilson, Allen W.; Pope, Jonathan M.; Gangloff, Richard P.

    1994-01-01

    The objective of this task was to develop a method to digitize FCP (fatigue crack propagation) kinetics data, generally presented as extensive da/dN versus Delta-K pairs, to produce a file for subsequent linear superposition or curve-fitting analysis. The method that was developed is specific to the Numonics 2400 Digitablet and is comparable to commercially available software products such as Digimatic(TM). Experiments demonstrated that the errors introduced by the photocopying of literature data and by digitization are small compared to those inherent in laboratory methods used to characterize FCP in benign and aggressive environments. The digitizing procedure was employed to obtain fifteen crack growth rate data sets for several aerospace alloys in aggressive environments.

  2. Arterial signal timing optimization using PASSER II-87

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, E.C.P.; Messer, C.J.; Garza, R.U.

    1988-11-01

    PASSER is the acronym for the Progression Analysis and Signal System Evaluation Routine. PASSER II was originally developed by the Texas Transportation Institute (TTI) for the Dallas Corridor Project. The Texas State Department of Highways and Public Transportation (SDHPT) has sponsored the subsequent program development on both mainframe computers and microcomputers. The theory, model structure, methodology, and logic of PASSER II have been evaluated and well documented. PASSER II is widely used because of its ability to easily select multiple-phase sequences by adjusting the background cycle length and progression speeds to find the optimal timing plans, such as cycle, green split, phase sequence, and offsets, that can efficiently maximize the two-way progression bands.

  3. Simulation on Natural Convection of a Nanofluid along an Isothermal Inclined Plate

    NASA Astrophysics Data System (ADS)

    Mitra, Asish

    2017-08-01

    A numerical algorithm is presented for studying laminar natural convection flow of a nanofluid along an isothermal inclined plate. By means of a similarity transformation, the original nonlinear partial differential equations of flow are transformed into a set of nonlinear ordinary differential equations. Subsequently, they are reduced to a first-order system and integrated using Newton-Raphson and adaptive Runge-Kutta methods. The computer codes for this numerical analysis are developed in the Matlab environment. Dimensionless velocity and temperature profiles and the nanoparticle concentration for various angles of inclination are illustrated graphically. The effects of Prandtl number, Brownian motion parameter and thermophoresis parameter on Nusselt number are also discussed. The results of the present simulation are then compared with previous ones available in the literature, with good agreement.
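    The paper's nanofluid similarity equations are not reproduced here, but the numerical recipe it describes (reduce to a first-order system, integrate with an adaptive Runge-Kutta scheme, and iterate on the unknown wall condition with a root finder) can be illustrated on the classical Blasius boundary-layer equation, used below as a stand-in. The example is in Python rather than Matlab, for consistency with the other sketches in this document.

```python
# Shooting-method sketch: reduce the ODE to a first-order system, integrate with an
# adaptive Runge-Kutta scheme, and iterate on the unknown wall curvature with a root
# finder, using the Blasius equation (2 f''' + f f'' = 0) as a stand-in problem.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 10.0   # "infinity" for the far-field boundary condition

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_residual(fpp0):
    """f'(eta_max) - 1 for a guessed wall curvature f''(0) = fpp0."""
    sol = solve_ivp(blasius_rhs, (0.0, ETA_MAX), [0.0, 0.0, fpp0],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

if __name__ == "__main__":
    fpp0 = brentq(far_field_residual, 0.1, 1.0)   # shooting on the unknown initial slope
    print("f''(0) =", round(fpp0, 5))             # ~0.33206 for this normalisation
```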

  4. Protein expression in Arabidopsis thaliana after chronic clinorotation

    NASA Technical Reports Server (NTRS)

    Piastuch, William C.; Brown, Christopher S.

    1994-01-01

    Soluble protein expression in Arabidopsis thaliana L. (Heynh.) leaf and stem tissue was examined after chronic clinorotation. Seeds of Arabidopsis were germinated and plants grown to maturity on horizontal or vertical slow-rotating clinostats (1 rpm) or in stationary vertical control units. Total soluble proteins and in vivo-labeled soluble proteins isolated from these plants were analyzed by two-dimensional sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and subsequent fluorography. Visual and computer analysis of the resulting protein patterns showed no significant differences in either total protein expression or in active protein synthesis between horizontal clinorotation and vertical controls in the Arabidopsis leaf and stem tissue. These results show that chronic clinorotation does not cause gross changes in protein expression in Arabidopsis.

  5. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, subsequently faded from the spotlight as individual workstations gained popularity with a one machine - one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  6. Quantum computation for solving linear systems

    NASA Astrophysics Data System (ADS)

    Cao, Yudong

    Quantum computation is a subject born out of the combination of physics and computer science. It studies how the laws of quantum mechanics can be exploited to perform computations much more efficiently than current computers (termed classical computers, as opposed to quantum computers). The thesis starts by introducing ideas from quantum physics and theoretical computer science and, based on these ideas, introducing the basic concepts in quantum computing. These introductory discussions are intended for non-specialists to obtain the essential knowledge needed for understanding the new results presented in the subsequent chapters. After introducing the basics of quantum computing, we focus on the recently proposed quantum algorithm for linear systems. The new results include i) special instances of quantum circuits that can be implemented using current experimental resources; ii) detailed quantum algorithms that are suitable for a broader class of linear systems. We show that for some particular problems the quantum algorithm is able to achieve an exponential speedup over its classical counterparts.

  7. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
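    The detailed-balance argument above (if the first point already has the target distribution, so does every subsequent point) can be illustrated on a one-dimensional toy target: draw the starting point exactly by rejection sampling, then evolve it with a Metropolis kernel whose equilibrium distribution is the same target. The bimodal density below is a stand-in for the empirical-Bayes posterior over linkage models.

```python
# Toy illustration of the strategy described above: the chain's first point is drawn
# exactly from the target by rejection sampling, and subsequent points come from a
# Metropolis kernel with that same target as its equilibrium distribution.
import numpy as np

rng = np.random.default_rng(0)

def target(x):                      # unnormalised bimodal target, effectively on [-5, 5]
    return np.exp(-0.5 * (x - 2) ** 2) + 0.6 * np.exp(-0.5 * (x + 2) ** 2)

def rejection_sample(m=1.1):        # uniform envelope on [-5, 5], scaled so m >= max(target)
    while True:
        x = rng.uniform(-5, 5)
        if rng.uniform(0, m) < target(x):
            return x

def metropolis_step(x, step=1.0):
    prop = x + rng.normal(0, step)
    if rng.uniform() < min(1.0, target(prop) / target(x)):
        return prop
    return x

if __name__ == "__main__":
    x = rejection_sample()          # first point already has the equilibrium distribution
    chain = [x]
    for _ in range(10_000):
        x = metropolis_step(x)
        chain.append(x)
    print("chain mean:", round(float(np.mean(chain)), 3))
```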

  8. Experience in the management of ECMO therapy as a mortality risk factor.

    PubMed

    Guilló Moreno, V; Gutiérrez Martínez, A; Romero Berrocal, A; Sánchez Castilla, M; García-Fernández, J

    2018-02-01

    Extracorporeal membrane oxygenation (ECMO) is a system that provides circulatory and respiratory assistance to patients in cardiac or respiratory failure refractory to conventional treatment. It is a therapy with numerous associated complications and high mortality. Multidisciplinary management and experienced teams increase survival. Our purpose is to evaluate and analyse the effect of the learning curve on mortality. This was a retrospective, observational study of 31 patients, from January 2012 to December 2015. Patients were separated into 2 periods, divided by the establishment of an ECMO protocol. We compared the quantitative variables by performing the Mann-Whitney U test. For the categorical qualitative variables we performed the chi-square test or Fisher exact statistic as appropriate. The survival curve was computed using the Kaplan-Meier method, and the analysis of statistical significance using the Log-rank test. Data analysis was performed with the Stata 14 programme. Survival curves show a tendency toward lower mortality in the subsequent period (P=0.0601). The overall mortality rate in the initial period was higher than in the subsequent period (P=0.042). In another analysis, we compared the characteristics of the 2 groups and concluded that they were homogeneous. The degree of experience is an independent factor for mortality. The application of a care protocol is fundamental to facilitate the management of ECMO therapy. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.
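    For reference, the Kaplan-Meier product-limit estimator used for the survival curves can be written in a few lines of NumPy, as sketched below on made-up follow-up data; the study's actual data and its Stata analysis are not reproduced.

```python
# Minimal Kaplan-Meier product-limit estimator on made-up follow-up data, to show the
# survival-curve computation mentioned above.
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up in days; events: 1 = death observed, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    survival, curve = 1.0, []
    for t in np.unique(times[events == 1]):          # event times only
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        survival *= 1.0 - deaths / at_risk
        curve.append((t, survival))
    return curve

if __name__ == "__main__":
    early_period = kaplan_meier([5, 9, 12, 20, 30, 45], [1, 1, 1, 0, 1, 0])
    later_period = kaplan_meier([7, 15, 33, 60, 90, 120], [1, 0, 1, 0, 0, 0])
    print("early period:", early_period)
    print("later period:", later_period)
```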

  9. Orientation During Initial Learning and Subsequent Discrimination of Faces

    NASA Technical Reports Server (NTRS)

    Cohen, Malcolm M.; Holton, Emily M. (Technical Monitor)

    1997-01-01

    Discrimination of facial features degrades with stimulus rotation (e.g., the "Margaret Thatcher" effect). Thirty-two observers learned to discriminate between two upright, or two inverted, faces. Images, erect and rotated by +/-45deg, +/-90deg, +/-135deg and 180deg about the line of sight, were presented on a computer screen. Initial discriminative reaction times increased with stimulus rotation only for observers who learned the upright faces. Orientation during learning is critical in identifying faces subsequently seen at different orientations.

  10. How Insects Initiate Flight: Computational Analysis of a Damselfly in Takeoff Flight

    NASA Astrophysics Data System (ADS)

    Bode-Oke, Ayodeji; Zeyghami, Samane; Dong, Haibo; Flow Simulation Research Group Team

    2017-11-01

    Flight initiation is essential for survival in biological fliers and can be classified into jumping and non-jumping takeoffs. During jumping takeoffs, the legs generate most of the initial impulse, whereas the wings generate most of the forces in non-jumping takeoffs, which are usually voluntary, slow, and stable. It is of interest to understand how non-jumping takeoffs occur and what strategies insects use to generate the required forces. Using a high fidelity computational fluid dynamics simulation, we identify the flow features and compute the wing aerodynamic forces to elucidate how flight forces are generated by a damselfly performing a non-jumping takeoff. Our results show that a damselfly generates about three times its bodyweight during the first half-stroke for liftoff while flapping through a steeply inclined stroke plane and slicing the air at high angles of attack. Consequently, a Leading Edge Vortex (LEV) is formed during both the downstroke and upstroke on all four wings. The formation of the LEV, however, is inhibited in the subsequent upstrokes following takeoff. Accordingly, we observe a drastic reduction in the magnitude of the aerodynamic force, signifying the importance of the LEV in augmenting force production. This work was supported by National Science Foundation [CBET-1313217] and Air Force Research Laboratory [FA9550-12-1-007].

  11. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
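    The jackknife/bootstrap machinery for bounding a predicted regional total can be sketched as follows with a deliberately simple local predictor (the mean of the k nearest drilled sites). This is a schematic of the resampling logic only; the paper's nonparametric prediction model, cross-validation step and Antrim Shale data are not reproduced.

```python
# Schematic of jackknife prediction errors plus bootstrap confidence bounds for a
# predicted regional total, using a simple k-nearest-neighbour mean as the predictor.
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(train_xy, train_vol, query_xy, k=3):
    d = np.linalg.norm(train_xy[None, :, :] - query_xy[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return train_vol[nearest].mean(axis=1)

if __name__ == "__main__":
    train_xy = rng.uniform(0, 10, (40, 2))          # drilled locations
    train_vol = 5 + train_xy[:, 0] + rng.normal(0, 1, 40)
    target_xy = rng.uniform(0, 10, (25, 2))         # undrilled prediction sites

    total_estimate = knn_predict(train_xy, train_vol, target_xy).sum()

    # Jackknife (leave-one-out) prediction errors at the drilled locations.
    errors = []
    for i in range(len(train_vol)):
        mask = np.arange(len(train_vol)) != i
        pred_i = knn_predict(train_xy[mask], train_vol[mask], train_xy[i:i + 1])[0]
        errors.append(train_vol[i] - pred_i)
    errors = np.array(errors)

    # Bootstrap the jackknife errors to put confidence bounds on the total.
    boot_totals = [total_estimate + rng.choice(errors, size=len(target_xy), replace=True).sum()
                   for _ in range(2000)]
    lo, hi = np.percentile(boot_totals, [5, 95])
    print(f"total: {total_estimate:.1f}, 90% bounds: [{lo:.1f}, {hi:.1f}]")
```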

  12. Exploratory analysis regarding the domain definitions for computer based analytical models

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Oanta, E.; Barhalescu, M.

    2017-08-01

    Our previous computer based studies dedicated to structural problems using analytical methods defined the composite cross section of a beam as a result of Boolean operations with so-called ‘simple’ shapes. Through generalisation, the class of ‘simple’ shapes was extended to include areas bounded by curves approximated using spline functions and areas approximated as polygons. However, particular definitions lead to particular solutions. In order to move beyond these limitations, we conceived a general definition of the cross sections, which are now considered calculus domains consisting of several subdomains. The corresponding set of input data uses complex parameterizations. This new vision allows us to naturally assign a general number of attributes to the subdomains. In this way, new phenomena that use map-wise information, such as metal alloy equilibrium diagrams, may be modelled. The hierarchy of the input data text files that use the comma-separated-value format and their structure are also presented and discussed in the paper. This new approach allows us to reuse the concepts and part of the data processing software instruments already developed. The corresponding software, to be developed subsequently, will be modularised and generalised in order to be used in upcoming projects that require rapid development of computer based models.

  13. Efficient electronic structure theory via hierarchical scale-adaptive coupled-cluster formalism: I. Theory and computational complexity analysis

    NASA Astrophysics Data System (ADS)

    Lyakh, Dmitry I.

    2018-03-01

    A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H^2-matrix algebra and fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(N log N) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher order coupled-cluster equations, i.e. equations involving higher than double excitations, which otherwise would introduce a large prefactor into formal O(N log N) scaling.

  14. Collaborative Brain-Computer Interface for Aiding Decision-Making

    PubMed Central

    Poli, Riccardo; Valeriani, Davide; Cinel, Caterina

    2014-01-01

    We look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach involves the combination of a brain-computer interface with human behavioural responses. To test ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we use a majority rule and three rules that weight the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies for improving decision making. PMID:25072739
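    A toy version of the comparison reported above, plain majority voting versus a confidence-weighted vote, is sketched below. Simulated per-decision confidences stand in for the neural and neuro-behavioural features, and the weighting rule shown is a generic one, not the authors' exact rule.

```python
# Toy comparison of a plain majority vote with a confidence-weighted vote across a group
# of observers. Simulated confidences stand in for the neuro-behavioural feature.
import numpy as np

rng = np.random.default_rng(3)
N_TRIALS, N_OBSERVERS = 2000, 7

truth = rng.integers(0, 2, N_TRIALS)                              # same / different
skill = rng.uniform(0.6, 0.8, N_OBSERVERS)                        # per-observer accuracy
correct = rng.random((N_TRIALS, N_OBSERVERS)) < skill             # which answers are right
votes = np.where(correct, truth[:, None], 1 - truth[:, None])     # -> actual responses

# Confidence is noisily higher when the observer happens to be correct.
confidence = np.clip(0.5 + 0.3 * correct + 0.2 * rng.random((N_TRIALS, N_OBSERVERS)), 0, 1)

majority = (votes.mean(axis=1) > 0.5).astype(int)
weighted = ((votes * confidence).sum(axis=1) / confidence.sum(axis=1) > 0.5).astype(int)

print("majority accuracy :", (majority == truth).mean())
print("weighted accuracy :", (weighted == truth).mean())
```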

  15. Non-invasive pulmonary blood flow analysis and blood pressure mapping derived from 4D flow MRI

    NASA Astrophysics Data System (ADS)

    Delles, Michael; Rengier, Fabian; Azad, Yoo-Jin; Bodenstedt, Sebastian; von Tengg-Kobligk, Hendrik; Ley, Sebastian; Unterhinninghofen, Roland; Kauczor, Hans-Ulrich; Dillmann, Rüdiger

    2015-03-01

    In diagnostics and therapy control of cardiovascular diseases, detailed knowledge about the patient-specific behavior of blood flow and pressure can be essential. The only method capable of measuring complete time-resolved three-dimensional vector fields of the blood flow velocities is velocity-encoded magnetic resonance imaging (MRI), often denoted as 4D flow MRI. Furthermore, relative pressure maps can be computed from this data source, as presented by different groups in recent years. Hence, analysis of blood flow and pressure using 4D flow MRI can be a valuable technique in management of cardiovascular diseases. In order to perform these tasks, all necessary steps in the corresponding process chain can be carried out in our in-house developed software framework MEDIFRAME. In this article, we apply MEDIFRAME for a study of hemodynamics in the pulmonary arteries of five healthy volunteers. The study included measuring vector fields of blood flow velocities by phase-contrast MRI and subsequently computing relative blood pressure maps. We visualized blood flow by streamline depictions and computed characteristic values for the left and the right pulmonary artery (LPA and RPA). In all volunteers, we observed a lower amount of blood flow in the LPA compared to the RPA. Furthermore, we visualized blood pressure maps using volume rendering and generated graphs of pressure differences between the LPA, the RPA and the main pulmonary artery. In most volunteers, blood pressure was increased near to the bifurcation and in the proximal LPA, leading to higher average pressure values in the LPA compared to the RPA.

  16. Roles of universal three-dimensional image analysis devices that assist surgical operations.

    PubMed

    Sakamoto, Tsuyoshi

    2014-04-01

    The circumstances surrounding medical image analysis have undergone rapid evolution. In such a situation, it can be said that "imaging" obtained through medical imaging modality and the "analysis" that we employ have become amalgamated. Recently, we feel the distance between "imaging" and "analysis" has become closer regarding the imaging analysis of any organ system, as if both terms mentioned above have become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR) used in the helical scan had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and the 3D diagnostic imaging and image analysis of the human body have started on a full scale. Volume rendering: the development of a new rendering algorithm and the significant improvement of memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. A new value was created by this development; computed tomography (CT) images that used to be for "diagnosis" before that time have become "applicable to treatment." In the past, before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image, but these developments have allowed the depiction of a 3D image on a monitor. Current technology: Currently, in Japan, the estimation of the liver volume and the perfusion area of the portal vein and hepatic vein are vigorously being adopted during preoperative planning for hepatectomy. Such a circumstance seems to have been brought about by the substantial improvement of these basic techniques and by upgrades to the user interface, allowing doctors to perform the manipulation easily by themselves. The following describes the specific techniques. Future of post-processing technology: It is expected, in terms of the role of image analysis, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. Further, it is also expected in the treatment field that a technique coordinating various devices will be strongly required as a surgery navigator. Actually, surgery using an image navigator is being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  17. Vertebral Artery Dissection Causing Stroke After Trampoline Use.

    PubMed

    Casserly, Courtney S; Lim, Rodrick K; Prasad, Asuri Narayan

    2015-11-01

    The aim of this study was to report a case of a 4-year-old boy who had been playing on the trampoline and presented to the emergency department (ED) with vomiting and ataxia, and had a vertebral artery dissection with subsequent posterior circulation infarcts. This study is a chart review. The patient presented to the emergency department with a 4-day history of vomiting and gait unsteadiness. A computed tomography scan of his head revealed multiple left cerebellar infarcts. Subsequent magnetic resonance imaging/magnetic resonance angiogram of his head and neck demonstrated multiple infarcts involving the left cerebellum, bilateral thalami, and left occipital lobe. A computed tomography angiogram confirmed the presence of a left vertebral artery dissection. Vertebral artery dissection is a relatively common cause of stroke in the pediatric age group. Trampoline use has been associated with significant risk of injury to the head and neck. Patients who are small and/or young are most at risk. In this case, minor trauma secondary to trampoline use could be a possible mechanism for vertebral artery dissection and subsequent strokes. The association in this case warrants careful consideration because trampoline use could pose a significant risk to pediatric users.

  18. Resolution, sensitivity, and in vivo application of high-resolution computed tomography for titanium-coated polymethyl methacrylate (PMMA) dental implants.

    PubMed

    Cuijpers, Vincent M J I; Jaroszewicz, Jacub; Anil, Sukumaran; Al Farraj Aldosari, Abdullah; Walboomers, X Frank; Jansen, John A

    2014-03-01

    The aims of this study were (i) to determine the spatial resolution and sensitivity of micro- versus nano-computed tomography (CT) techniques and (ii) to validate micro- versus nano-CT in a dog dental implant model, comparative to histological analysis. To determine spatial resolution and sensitivity, standardized reference samples containing standardized nano- and microspheres were prepared in polymer and ceramic matrices. Thereafter, 10 titanium-coated polymer dental implants (3.2 mm in Ø by 4 mm in length) were placed in the mandible of Beagle dogs. Both micro- and nano-CT, as well as histological analyses, were performed. The reference samples confirmed the high resolution of the nano-CT system, which was capable of revealing sub-micron structures embedded in radiodense matrices. The dog implantation study and subsequent statistical analysis showed equal values for bone area and bone-implant contact measurements between micro-CT and histology. However, because of the limited sample size and field of view, nano-CT was not rendering reliable data representative of the entire bone-implant specimen. Micro-CT analysis is an efficient tool to quantitate bone healing parameters at the bone-implant interface, especially when using titanium-coated PMMA implants. Nano-CT is not suitable for such quantification, but reveals complementary morphological information rivaling histology, yet with the advantage of a 3D visualization. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.

  19. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    NASA Technical Reports Server (NTRS)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.
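
    The contrast drawn here between Bayesian scoring and chi-square or Fisher exact statistics can be illustrated on a single lesion-deficit pair. The sketch below is not the authors' data-mining software: it assumes a minimal two-node network (lesion present/absent as parent of deficit present/absent), scores the candidate edge with a Dirichlet-multinomial marginal likelihood, and contrasts it with scipy's chi-square test on an invented 2x2 table.

    ```python
    import numpy as np
    from scipy.special import gammaln
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 contingency table: rows = lesion absent/present,
    # columns = deficit (e.g., ADHD) absent/present.
    counts = np.array([[40, 10],
                       [15, 25]])

    def log_marginal(table, alpha=1.0):
        """Dirichlet-multinomial log marginal likelihood of the child variable,
        with one Dirichlet prior per parent configuration (row)."""
        table = np.atleast_2d(table)
        r = table.shape[1]                       # number of child states
        score = 0.0
        for row in table:
            score += gammaln(alpha) - gammaln(alpha + row.sum())
            score += np.sum(gammaln(alpha / r + row) - gammaln(alpha / r))
        return score

    # Bayesian comparison: "deficit depends on lesion" vs. "deficit independent of lesion".
    log_bayes_factor = log_marginal(counts) - log_marginal(counts.sum(axis=0))
    chi2, p, _, _ = chi2_contingency(counts)

    print(f"log Bayes factor (edge vs. no edge): {log_bayes_factor:.2f}")
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
    ```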

  20. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
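
    As a companion to the SPSS and R syntax referenced in the tutorial, the short Python sketch below computes Cohen's kappa for two coders by hand and cross-checks it against scikit-learn; the ratings are invented for illustration.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical nominal ratings from two coders on the same 12 subjects.
    coder1 = np.array([0, 1, 1, 2, 0, 2, 1, 0, 2, 1, 0, 2])
    coder2 = np.array([0, 1, 2, 2, 0, 2, 1, 0, 1, 1, 0, 2])

    # Manual computation: observed agreement corrected for chance agreement.
    categories = np.union1d(coder1, coder2)
    p_obs = np.mean(coder1 == coder2)
    p_chance = sum(np.mean(coder1 == c) * np.mean(coder2 == c) for c in categories)
    kappa_manual = (p_obs - p_chance) / (1.0 - p_chance)

    print(f"manual kappa  : {kappa_manual:.3f}")
    print(f"sklearn kappa : {cohen_kappa_score(coder1, coder2):.3f}")
    ```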

  1. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records.

    PubMed

    Duz, Marco; Marshall, John F; Parkin, Tim

    2017-06-29

    The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using Rv3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparison of the computer-assisted method with manual analysis, which was used as the gold standard. Sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated by comparing them with the manual classification. Lowest sensitivity, specificity, PPVs, NPVs, and F values were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required a few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours. The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is significantly easier to remove false positive terms from an SS/WS-selected subset of a large database than to search the original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed. ©Marco Duz, John F Marshall, Tim Parkin. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 29.06.2017.
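
    Once each dictionary has flagged its rows, the inclusion/exclusion/reinclusion logic reduces to set operations. The sketch below is a schematic Python re-implementation rather than the SS/WS-plus-R workflow used in the study; the row IDs are invented, and the F-value function follows the formula quoted in the results.

    ```python
    # Hypothetical sets of row IDs flagged by each dictionary for one condition (e.g., colic).
    included   = {1, 2, 3, 4, 5, 6, 7, 8}   # rows matched by the inclusion dictionary
    excluded   = {3, 4, 7}                   # rows the exclusion dictionary marks as false positives
    reincluded = {4}                         # rows the reinclusion dictionary restores

    cases = (included - excluded) | reincluded
    print(sorted(cases))                     # -> [1, 2, 4, 5, 6, 8]

    def f_value(sensitivity, ppv):
        """F value as reported in the paper: 100 * 2 * PPV * sensitivity / (PPV + sensitivity)."""
        return 100.0 * 2.0 * ppv * sensitivity / (ppv + sensitivity)

    print(round(f_value(0.998, 0.983), 1))   # ~99.0, matching the lowest reported F value
    ```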

  2. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records

    PubMed Central

    Duz, Marco; Marshall, John F; Parkin, Tim

    2017-01-01

    Background The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. Objective The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. Methods The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using Rv3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparison of the computer-assisted method with manual analysis, which was used as the gold standard. Sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated by comparing them with the manual classification. Results Lowest sensitivity, specificity, PPVs, NPVs, and F values were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required a few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours. Conclusions The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is significantly easier to remove false positive terms from an SS/WS-selected subset of a large database than to search the original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed. PMID:28663163

  3. Nordic Sea Level - Analysis of PSMSL RLR Tide Gauge data

    NASA Astrophysics Data System (ADS)

    Knudsen, Per; Andersen, Ole

    2015-04-01

    Tide gauge data from the Nordic region covering the period from 1920 to 2000 are evaluated. 63 stations having RLR data for at least 40 years have been used. Each tide gauge data record was averaged to annual averages after the monthly average seasonal anomalies were removed. Some stations lack data, especially before around 1950. Hence, to compute representative sea level trends for the 1920-2000 period, a procedure for filling the voids with estimated sea level values is needed. To fill the voids in the tide gauge data records, a reconstruction method was applied that utilizes EOFs in an iterative manner. Subsequently, the trends were computed. The estimated trends range from about -8 mm/year to 2 mm/year, reflecting both post-glacial uplift and sea level rise. An evaluation of the first EOFs shows that the first EOF clearly describes the trends in the time series. EOF #2 and #3 describe differences in the inter-annual sea level variability within the Baltic Sea and differences between the Baltic and the North Atlantic / Norwegian seas, respectively.
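
    A minimal version of the iterative EOF reconstruction used to fill the data voids can be sketched with numpy: missing entries are initialised (here with zero anomalies), the field is decomposed with an SVD, a truncated reconstruction replaces the missing values, and the cycle repeats. This is a generic illustration with synthetic data, not the authors' implementation.

    ```python
    import numpy as np

    def eof_fill(data, n_modes=3, n_iter=50):
        """Iteratively fill NaN gaps in a (years x stations) anomaly matrix
        using a truncated EOF (SVD) reconstruction."""
        filled = np.where(np.isnan(data), 0.0, data)   # initial guess for the voids
        gaps = np.isnan(data)
        for _ in range(n_iter):
            u, s, vt = np.linalg.svd(filled, full_matrices=False)
            recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes, :]
            filled[gaps] = recon[gaps]                 # update only the missing entries
        return filled

    # Toy example: 81 "years" x 10 "stations" with roughly 20% of the values missing.
    rng = np.random.default_rng(0)
    truth = np.outer(np.linspace(-1.0, 1.0, 81), rng.normal(size=10))
    obs = truth.copy()
    obs[rng.random(obs.shape) < 0.2] = np.nan

    trends = np.polyfit(np.arange(81), eof_fill(obs), 1)[0]   # per-station trends after filling
    print(trends.round(3))
    ```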

  4. IEEE International Symposium on Biomedical Imaging.

    PubMed

    2017-01-01

    The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative from the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials and a scientific program composed of plenary talks, invited special sessions, challenges, and oral and poster presentations of peer-reviewed papers. High-quality papers are requested, containing original contributions to the topics of interest, including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published by IEEE in the symposium proceedings and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.

  5. Misdiagnosis of acute peripheral vestibulopathy in central nervous ischemic infarction.

    PubMed

    Braun, Eva Maria; Tomazic, Peter Valentin; Ropposch, Thorsten; Nemetz, Ulrike; Lackner, Andreas; Walch, Christian

    2011-12-01

    Vertigo is a very common symptom at otorhinolaryngology (ENT), neurological, and emergency units, but it is often difficult to distinguish between vertigo of peripheral and central origin. We conducted a retrospective analysis of a hospital database, including all patients admitted to the ENT University Hospital Graz after neurological examination, with a diagnosis of peripheral vestibular vertigo and a subsequent diagnosis of central nervous infarction as the actual cause of the vertigo. Twelve patients were included in this study. All patients with acute spinning vertigo after a thorough neurological examination and with uneventful computed tomographic scans were referred to our ENT department. Nine of them presented with horizontal nystagmus. Only 1 woman experienced additional hearing loss. The mean diagnostic delay to the definite diagnosis of a central infarction through magnetic resonance imaging was 4 days (SD, 2.3 d). A careful otologic and neurological examination, including the head impulse test and caloric testing, is mandatory. Because ischemic events cannot be diagnosed on computed tomographic scans at an early stage, we strongly recommend performing cranial magnetic resonance imaging within 48 hours of admission if vertigo has not improved under conservative treatment.

  6. Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach

    PubMed Central

    Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan

    2017-01-01

    In this study 6 pre-operative designs for PMMA-based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an assembly optimal for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case. PMID:28609471

  7. Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach.

    PubMed

    Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan

    2017-01-01

    In this study 6 pre-operative designs for PMMA-based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an assembly optimal for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case.

  8. Vander Lugt correlation of DNA sequence data

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Hawk, James F.; Martin, James C.

    1990-12-01

    DNA, the molecule containing the genetic code of an organism, is a linear chain of subunits. It is the sequence of subunits, of which there are four kinds, that constitutes the unique blueprint of an individual. This sequence is the focus of a large number of analyses performed by an army of geneticists, biologists, and computer scientists. Most of these analyses entail searches for specific subsequences within the larger set of sequence data. Thus, most analyses are essentially pattern recognition or correlation tasks. Yet, there are special features to such analysis that influence the strategy and methods of an optical pattern recognition approach. While the serial processing employed in digital electronic computers remains the main engine of sequence analyses, there is no fundamental reason that more efficient parallel methods cannot be used. We describe an approach using optical pattern recognition (OPR) techniques based on matched spatial filtering. This allows parallel comparison of large blocks of sequence data. In this study we have simulated a Vander Lugt architecture implementing our approach. Searches for specific target sequence strings within a block of DNA sequence from the ColE1 plasmid are performed.
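
    Although the paper simulates an optical Vander Lugt correlator, the underlying operation is a cross-correlation of the target string against the sequence block. A digital stand-in can be sketched by one-hot encoding the bases and correlating via the FFT; the sequences below are purely illustrative.

    ```python
    import numpy as np

    BASES = "ACGT"

    def one_hot(seq):
        """Encode a DNA string as a (len(seq), 4) matrix of 0/1 indicators."""
        return np.array([[1.0 if b == base else 0.0 for base in BASES] for b in seq])

    def correlate(sequence, target):
        """FFT-based cross-correlation score at every alignment position;
        a score equal to len(target) indicates an exact match."""
        s, t = one_hot(sequence), one_hot(target)
        n = len(sequence) + len(target) - 1
        S = np.fft.rfft(s, n=n, axis=0)
        T = np.fft.rfft(t[::-1], n=n, axis=0)        # reversing turns convolution into correlation
        full = np.fft.irfft(S * T, n=n, axis=0).sum(axis=1)
        return full[len(target) - 1 : len(sequence)]

    seq = "GATTACAGGCATTACAGATTACA"
    hits = correlate(seq, "ATTACA")
    print(np.flatnonzero(np.isclose(hits, len("ATTACA"))))   # exact-match start positions
    ```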

  9. Self-motion perception: assessment by real-time computer-generated animations

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.

  10. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

    Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime by Kirchhoff pre-stack depth migration (KPSDM), as the dynamic programming method for computing the traveltime in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. Meanwhile, we present a specific implementation strategy for ADCIG extraction via KPSDM. Three major steps are included in the presented strategy: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples are included to demonstrate the effectiveness of our approach, which shows its potential for application in subsequent tomographic velocity inversion and AVA analysis.
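
    Step (2), obtaining slowness vectors from the gradient of a precomputed traveltime table, and the opening-angle computation behind step (3) can be sketched in a few lines of numpy. The traveltime tables below are analytic constant-velocity placeholders, not tables computed by dynamic programming in TI media.

    ```python
    import numpy as np

    # Placeholder traveltime tables T(x, z) for one source and one receiver in a
    # constant-velocity medium (stand-ins for dynamic-programming tables in TI media).
    v = 2000.0                                   # m/s
    dx = dz = 10.0                               # grid spacing in metres
    x, z = np.meshgrid(np.arange(0.0, 2000.0, dx), np.arange(0.0, 1000.0, dz), indexing="ij")
    t_src = np.hypot(x - 200.0,  z) / v
    t_rec = np.hypot(x - 1800.0, z) / v

    def slowness_vectors(t, dx, dz):
        """Slowness vector p = grad(T) at every point of the migration grid."""
        px, pz = np.gradient(t, dx, dz)
        return np.stack([px, pz], axis=-1)

    p_s = slowness_vectors(t_src, dx, dz)
    p_r = slowness_vectors(t_rec, dx, dz)

    # Half opening angle between the source and receiver rays at each image point,
    # used to bin migrated contributions into angle-domain common image gathers.
    cos_ang = np.sum(p_s * p_r, axis=-1) / (
        np.linalg.norm(p_s, axis=-1) * np.linalg.norm(p_r, axis=-1) + 1e-12)
    half_open = 0.5 * np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    print(half_open[100, 50])                    # half opening angle at x = 1000 m, z = 500 m
    ```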

  11. Reduced ventilation-perfusion (V/Q) mismatch following endobronchial valve insertion demonstrated by Gallium-68 V/Q photon emission tomography/computed tomography.

    PubMed

    Leong, Paul; Le Roux, Pierre-Yves; Callahan, Jason; Siva, Shankar; Hofman, Michael S; Steinfort, Daniel P

    2017-09-01

    Endobronchial valves (EBVs) are increasingly deployed in the management of severe emphysema. Initial studies focussed on volume reduction as the mechanism, with subsequent improvement in forced expiratory volume in 1 s (FEV1). More recent studies have emphasized the importance of perfusion in predicting outcomes, though findings have been inconsistent. Gallium-68 ventilation-perfusion (V/Q) photon emission tomography (PET)/computed tomography (CT) is a novel imaging modality with advantages in spatial resolution, quantitation, and speed over conventional V/Q scintigraphy. We report a pilot case in which V/Q-PET/CT demonstrated discordant findings compared with quantitative CT analysis, and directed left lower lobe EBV placement. The patient experienced a significant improvement in 6-min walk distance (6MWD) without change in spirometry. Post-EBV V/Q-PET/CT demonstrated a marked decrease in unmatched (detrimental) V/Q areas and an improvement in overall V/Q matching. These preliminary novel findings suggest that EBVs improve V/Q matching and may explain the observed functional improvements.

  12. Progress in Earth System Modeling since the ENIAC Calculation

    NASA Astrophysics Data System (ADS)

    Fung, I.

    2009-05-01

    The success of the first numerical weather prediction experiment on the ENIAC computer in 1950 hinged on the expansion of the meteorological observing network, which led to theoretical advances in atmospheric dynamics and subsequently the implementation of the simplified equations on the computer. This paper briefly reviews the progress in Earth System Modeling and climate observations, and suggests a strategy to sustain and expand the observations needed to advance climate science and prediction.

  13. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and ... was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion ... the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a ...

  14. Indium-111 WBC detection of emphysematous gastritis in pancreatitis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruana, V.; Swayne, L.C.; Salaki, J.S.

    1990-01-01

    We present a case of emphysematous gastritis initially detected with 111In oxine-labeled white blood cell scintigraphy and subsequently confirmed by computed tomography. Early aggressive antibiotic and supportive therapy resulted in a successful clinical outcome.

  15. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while a subsequent refinement allows the result to be successively improved toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlight the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  16. cloudPEST - A python module for cloud-computing deployment of PEST, a program for parameter estimation

    USGS Publications Warehouse

    Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.

    2011-01-01

    This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST in a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2™) and are unlikely to change dramatically. This report describes the preliminary setup for both Python and EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows® operating system but are extensible to Linux®.
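
    cloudPEST itself wraps the low-level EC2 command-line tools, but the provisioning step it automates can be sketched with the boto3 library; the AMI ID, key pair, region, and instance type below are placeholders, and this is a generic illustration rather than the cloudPEST API.

    ```python
    import boto3

    # Placeholders; in practice these would come from a cloudPEST-style configuration.
    AMI_ID = "ami-0123456789abcdef0"   # image assumed to be pre-loaded with PEST and the model
    N_WORKERS = 8                      # number of parallel PEST worker instances

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="c5.large",
        MinCount=N_WORKERS,
        MaxCount=N_WORKERS,
        KeyName="pest-keypair",        # assumed existing key pair for SSH access
    )
    instance_ids = [inst["InstanceId"] for inst in response["Instances"]]
    print("launched PEST workers:", instance_ids)

    # Terminate the workers once the parameter-estimation run has finished.
    ec2.terminate_instances(InstanceIds=instance_ids)
    ```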

  17. Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes

    NASA Astrophysics Data System (ADS)

    Bachlechner, Martina E.

    2009-03-01

    Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.
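
    A computational lab in this style typically centers on a short VPython momentum-update loop. The sketch below, a launched ball advanced with the momentum principle, is a generic example in the spirit of the Matter and Interactions curriculum, not one of the Fairmont State labs.

    ```python
    from vpython import sphere, vector, color, rate

    # A ball launched near the ground; students compare the modeled trajectory
    # with measurements from the companion conventional lab.
    ball = sphere(pos=vector(0, 0.1, 0), radius=0.05, color=color.red, make_trail=True)
    m = 0.1                        # mass (kg)
    p = m * vector(3, 4, 0)        # initial momentum (kg m/s)
    g = vector(0, -9.8, 0)         # gravitational field (N/kg)
    dt = 0.001                     # time step (s)

    while ball.pos.y >= 0:
        rate(1000)                           # limit the animation speed
        F = m * g                            # net force (gravity only)
        p = p + F * dt                       # momentum principle: dp = F dt
        ball.pos = ball.pos + (p / m) * dt   # position update from velocity p/m

    print("range:", ball.pos.x, "m")
    ```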

  18. Computer-aided prediction of xenobiotic metabolism in the human body

    NASA Astrophysics Data System (ADS)

    Bezhentsev, V. M.; Tarasova, O. A.; Dmitriev, A. V.; Rudik, A. V.; Lagunin, A. A.; Filimonov, D. A.; Poroikov, V. V.

    2016-08-01

    The review describes the major databases containing information about the metabolism of xenobiotics, including data on drug metabolism, metabolic enzymes, schemes of biotransformation and the structures of some substrates and metabolites. Computational approaches used to predict the interaction of xenobiotics with metabolic enzymes, prediction of metabolic sites in the molecule, generation of structures of potential metabolites for subsequent evaluation of their properties are considered. The advantages and limitations of various computational methods for metabolism prediction and the prospects for their applications to improve the safety and efficacy of new drugs are discussed. Bibliography — 165 references.

  19. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    NASA Astrophysics Data System (ADS)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme called Fine-Granular-Scalability (FGS), which is able to adapt in real time (i.e. at transmission time) to Internet bandwidth variations, is currently under standardization. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
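
    The residual-coding step described above, taking the difference between the original frame and the decoded base layer and then applying only a shallow wavelet decomposition, can be sketched with numpy and PyWavelets. The "frames" here are random arrays standing in for real video, and the two-level decomposition mirrors the limited-depth recommendation in the paper.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)

    # Stand-ins for one luminance frame and its decoded (lossy) base-layer reconstruction.
    original = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    decoded_base = original + rng.normal(scale=8.0, size=original.shape)  # simulated coding error

    # FGS residual signal to be coded by the embedded enhancement-layer coder.
    residual = original - decoded_base

    # Shallow (2-stage) wavelet decomposition of the residual, as suggested by the analysis.
    coeffs = pywt.wavedec2(residual, wavelet="db4", level=2)
    approx = coeffs[0]
    energy_share = np.sum(approx ** 2) / np.sum(residual ** 2)
    print(f"rough energy share of the coarse band: {energy_share:.2f}")
    ```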

  20. An Investigation of the Morphology of the Petrotympanic Fissure Using Cone-Beam Computed Tomography

    PubMed Central

    Damaskos, Spyros; Syriopoulos, Konstantinos; Sens, Rogier L.; Politis, Constantinus

    2018-01-01

    ABSTRACT Objectives The purpose of the present study was: a) to examine the visibility and morphology of the petrotympanic fissure on cone-beam computed tomography images, and b) to investigate whether the petrotympanic fissure morphology is significantly affected by gender and age, or not. Material and Methods Using Newtom VGi (QR Verona, Italy), 106 cone-beam computed tomography examinations (212 temporomandibular joint areas) of both genders were retrospectively and randomly selected. Two observers examined the images and subsequently classified by consensus the petrotympanic fissure morphology into the following three types: type 1 - widely open; type 2 - narrow middle; type 3 - very narrow/closed. Results The petrotympanic fissure morphology was assessed as type 1, type 2, and type 3 in 85 (40.1%), 72 (34.0%), and 55 (25.9%) cases, respectively. No significant difference was found between left and right petrotympanic fissure morphology (Kappa = 0.37; P < 0.001). Furthermore, no significant difference was found between genders, specifically P = 0.264 and P = 0.211 for the right and left petrotympanic fissure morphology, respectively. However, the ordinal logistic regression analysis showed that males tend to have narrower petrotympanic fissures, in particular OR = 1.58 for right and OR = 1.5 for left petrotympanic fissure. Conclusions The current study lends support to the conclusion that an enhanced multi-planar cone-beam computed tomography yields a clear depiction of the petrotympanic fissure's morphological characteristics. We have found that the morphology is neither gender nor age-related. PMID:29707183

  1. An Investigation of the Morphology of the Petrotympanic Fissure Using Cone-Beam Computed Tomography.

    PubMed

    Damaskos, Spyros; Syriopoulos, Konstantinos; Sens, Rogier L; Politis, Constantinus

    2018-01-01

    The purpose of the present study was: a) to examine the visibility and morphology of the petrotympanic fissure on cone-beam computed tomography images, and b) to investigate whether the petrotympanic fissure morphology is significantly affected by gender and age, or not. Using Newtom VGi (QR Verona, Italy), 106 cone-beam computed tomography examinations (212 temporomandibular joint areas) of both genders were retrospectively and randomly selected. Two observers examined the images and subsequently classified by consensus the petrotympanic fissure morphology into the following three types: type 1 - widely open; type 2 - narrow middle; type 3 - very narrow/closed. The petrotympanic fissure morphology was assessed as type 1, type 2, and type 3 in 85 (40.1%), 72 (34.0%), and 55 (25.9%) cases, respectively. No significant difference was found between left and right petrotympanic fissure morphology (Kappa = 0.37; P < 0.001). Furthermore, no significant difference was found between genders, specifically P = 0.264 and P = 0.211 for the right and left petrotympanic fissure morphology, respectively. However, the ordinal logistic regression analysis showed that males tend to have narrower petrotympanic fissures, in particular OR = 1.58 for right and OR = 1.5 for left petrotympanic fissure. The current study lends support to the conclusion that an enhanced multi-planar cone-beam computed tomography yields a clear depiction of the petrotympanic fissure's morphological characteristics. We have found that the morphology is neither gender nor age-related.

  2. Dopaminergic inputs in the dentate gyrus direct the choice of memory encoding.

    PubMed

    Du, Huiyun; Deng, Wei; Aimone, James B; Ge, Minyan; Parylak, Sarah; Walch, Keenan; Zhang, Wei; Cook, Jonathan; Song, Huina; Wang, Liping; Gage, Fred H; Mu, Yangling

    2016-09-13

    Rewarding experiences are often well remembered, and such memory formation is known to be dependent on dopamine modulation of the neural substrates engaged in learning and memory; however, it is unknown how and where in the brain dopamine signals bias episodic memory toward preceding rather than subsequent events. Here we found that photostimulation of channelrhodopsin-2-expressing dopaminergic fibers in the dentate gyrus induced a long-term depression of cortical inputs, diminished theta oscillations, and impaired subsequent contextual learning. Computational modeling based on this dopamine modulation indicated an asymmetric association of events occurring before and after reward in memory tasks. In subsequent behavioral experiments, preexposure to a natural reward suppressed hippocampus-dependent memory formation, with an effective time window consistent with the duration of dopamine-induced changes of dentate activity. Overall, our results suggest a mechanism by which dopamine enables the hippocampus to encode memory with reduced interference from subsequent experience.

  3. Dopaminergic inputs in the dentate gyrus direct the choice of memory encoding

    PubMed Central

    Du, Huiyun; Deng, Wei; Aimone, James B.; Ge, Minyan; Parylak, Sarah; Walch, Keenan; Zhang, Wei; Cook, Jonathan; Song, Huina; Wang, Liping; Gage, Fred H.; Mu, Yangling

    2016-01-01

    Rewarding experiences are often well remembered, and such memory formation is known to be dependent on dopamine modulation of the neural substrates engaged in learning and memory; however, it is unknown how and where in the brain dopamine signals bias episodic memory toward preceding rather than subsequent events. Here we found that photostimulation of channelrhodopsin-2–expressing dopaminergic fibers in the dentate gyrus induced a long-term depression of cortical inputs, diminished theta oscillations, and impaired subsequent contextual learning. Computational modeling based on this dopamine modulation indicated an asymmetric association of events occurring before and after reward in memory tasks. In subsequent behavioral experiments, preexposure to a natural reward suppressed hippocampus-dependent memory formation, with an effective time window consistent with the duration of dopamine-induced changes of dentate activity. Overall, our results suggest a mechanism by which dopamine enables the hippocampus to encode memory with reduced interference from subsequent experience. PMID:27573822

  4. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    NASA Technical Reports Server (NTRS)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  5. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    NASA Technical Reports Server (NTRS)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  6. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software will help make 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
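
    The core measurement, extracting major and minor axial lengths and converting them to a volume, can be approximated with scikit-image. The sketch below uses a simple threshold-plus-regionprops pipeline on a synthetic image instead of the active contour (Snakes) segmentation used by SpheroidSizer, and the prolate-spheroid volume formula is one common convention rather than necessarily the one implemented in the software.

    ```python
    import numpy as np
    from skimage.draw import ellipse
    from skimage.measure import label, regionprops

    # Synthetic image containing one elliptical "spheroid".
    img = np.zeros((300, 300), dtype=np.uint8)
    rr, cc = ellipse(150, 150, 60, 40, rotation=np.deg2rad(20))
    img[rr, cc] = 255

    # Simple segmentation (stand-in for the active-contour step) and shape measurement.
    mask = img > 128
    props = regionprops(label(mask))[0]
    major, minor = props.major_axis_length, props.minor_axis_length

    # Volume of a prolate spheroid rotated about its major axis: V = (4/3) * pi * a * b^2,
    # with a and b the semi-major and semi-minor axes (an assumed convention).
    a, b = major / 2.0, minor / 2.0
    volume = 4.0 / 3.0 * np.pi * a * b ** 2
    print(f"major = {major:.1f} px, minor = {minor:.1f} px, volume = {volume:.0f} px^3")
    ```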

  7. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
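
    The core of the NSMC idea, projecting random parameter fields onto the calibration null space so that each realization retains the calibrated fit, amounts to a few lines of linear algebra. The Jacobian below is a random rank-deficient stand-in used only to illustrate the projection; it is not the Henry-problem model or the PEST implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_par, n_obs, n_sol = 50, 20, 8                 # parameters, observations, solution-space dim

    # Stand-in rank-deficient sensitivity (Jacobian) matrix: only n_sol directions
    # in parameter space are informed by the observations.
    J = rng.normal(size=(n_obs, n_sol)) @ rng.normal(size=(n_sol, n_par))
    p_cal = rng.normal(size=n_par)                  # calibrated parameter field

    # Null space of J from its SVD: directions the calibration data cannot constrain.
    _, _, vt = np.linalg.svd(J, full_matrices=True)
    V2 = vt[n_sol:, :].T

    def nsmc_realization(p_random):
        """Keep only the null-space component of the random field's departure from the
        calibrated field, so the realization preserves the calibrated fit."""
        return p_cal + V2 @ (V2.T @ (p_random - p_cal))

    realizations = [nsmc_realization(rng.normal(size=n_par)) for _ in range(1000)]

    # The simulated observations of each realization match those of p_cal (to round-off).
    print(np.abs(J @ (realizations[0] - p_cal)).max())
    ```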

  8. A virtual surgical training system that simulates cutting of soft tissue using a modified pre-computed elastic model.

    PubMed

    Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen

    2015-08-01

    This work presents a surgical training system that incorporates cutting operation of soft tissue simulated based on a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A precomputed linear elastic model used for the simulation of soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require a few minutes to several hours, based on the number of vertices in the mesh, it needs only to be computed once and allows real-time computation of the subsequent soft tissue deformation. However, as the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix which is computationally expensive.
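
    The payoff of pre-computing the compliance matrix is that every subsequent deformation becomes a single matrix-vector product. The toy sketch below uses a one-dimensional chain of unit springs instead of a tetrahedral soft-tissue mesh; the topology-correction step for cutting is only indicated in a comment, since that is the paper's own contribution.

    ```python
    import numpy as np

    def chain_stiffness(n_nodes, k=1.0):
        """Assemble the stiffness matrix of a 1-D chain of springs fixed at node 0."""
        K = np.zeros((n_nodes, n_nodes))
        for i in range(n_nodes - 1):
            K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
        return K[1:, 1:]                   # drop the fixed node's row and column

    n_nodes = 200
    K = chain_stiffness(n_nodes)

    # Pre-computation (the minutes-to-hours step for a real tetrahedral mesh).
    C = np.linalg.inv(K)                   # compliance matrix

    # Real-time phase: every tool interaction is just a matrix-vector product.
    for tip_force in (0.1, 0.25, 0.5):
        f = np.zeros(n_nodes - 1)
        f[-1] = tip_force                  # force applied at the free end
        u = C @ f                          # displacements of all free nodes
        print(f"tip force {tip_force:.2f} -> tip displacement {u[-1]:.1f}")

    # Cutting changes the mesh topology; the paper corrects the connectivity encoded in the
    # pre-computed data directly, instead of re-assembling K and re-inverting it.
    ```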

  9. Molecular Predictors of 3D Morphogenesis by Breast Cancer Cell Lines in 3D Culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ju; Chang, Hang; Giricz, Orsi

    Correlative analysis of molecular markers with phenotypic signatures is the simplest model for hypothesis generation. In this paper, a panel of 24 breast cell lines was grown in 3D culture, their morphology was imaged through phase contrast microscopy, and computational methods were developed to segment and represent each colony at multiple dimensions. Subsequently, subpopulations from these morphological responses were identified through consensus clustering to reveal three clusters of round, grape-like, and stellate phenotypes. In some cases, cell lines with particular pathobiological phenotypes clustered together (e.g., ERBB2-amplified cell lines sharing the same morphometric properties as the grape-like phenotype). Next, associations with molecular features were realized through (i) differential analysis within each morphological cluster, and (ii) regression analysis across the entire panel of cell lines. In both cases, the dominant genes that are predictive of the morphological signatures were identified. Specifically, PPARγ has been associated with the invasive stellate morphological phenotype, which corresponds to triple-negative pathobiology. PPARγ has been validated through two supporting biological assays.

  10. Combination of multiple model population analysis and mid-infrared technology for the estimation of copper content in Tegillarca granosa

    NASA Astrophysics Data System (ADS)

    Hu, Meng-Han; Chen, Xiao-Jing; Ye, Peng-Chao; Chen, Xi; Shi, Yi-Jian; Zhai, Guang-Tao; Yang, Xiao-Kang

    2016-11-01

    The aim of this study was to use mid-infrared spectroscopy coupled with multiple model population analysis based on Monte Carlo-uninformative variable elimination (MC-UVE) for rapidly estimating the copper content of Tegillarca granosa. Copper-specific wavelengths were first extracted from the whole spectra, and subsequently, a least squares-support vector machine was used to develop the prediction models. Compared with the prediction model based on the full set of wavelengths, models that used 100 wavelengths selected by multiple MC-UVE, without and with the bin operation, showed comparable performances, with Rp (root mean square error of prediction) of 0.97 (14.60 mg/kg) and 0.94 (20.85 mg/kg) versus 0.96 (17.27 mg/kg), as well as ratios of percent deviation (number of wavelengths) of 2.77 (407) and 1.84 (45) versus 2.32 (1762). The obtained results demonstrated that the mid-infrared technique could be used for estimating copper content in T. granosa. In addition, the proposed multiple model population analysis can eliminate uninformative, weakly informative and interfering wavelengths effectively, substantially reducing model complexity and computation time.
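
    The MC-UVE step ranks each wavelength by the stability of its regression coefficient across many Monte Carlo sub-models (mean divided by standard deviation). The numpy sketch below illustrates that ranking on simulated spectra with ridge-regularised linear sub-models, in place of the mid-infrared data and LS-SVM modeling used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_samples, n_wavelengths, n_runs = 200, 100, 200

    # Simulated spectra: only wavelengths 40-49 carry information about the analyte.
    X = rng.normal(size=(n_samples, n_wavelengths))
    y = X[:, 40:50].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

    coefs = np.zeros((n_runs, n_wavelengths))
    for r in range(n_runs):
        idx = rng.choice(n_samples, size=int(0.8 * n_samples), replace=False)
        # Ridge-regularised least squares on each Monte Carlo subset.
        A = X[idx].T @ X[idx] + 1e-2 * np.eye(n_wavelengths)
        coefs[r] = np.linalg.solve(A, X[idx].T @ y[idx])

    stability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)   # MC-UVE stability index
    selected = np.sort(np.argsort(stability)[-10:])              # keep the 10 most stable wavelengths
    print(selected)                                              # should recover wavelengths 40-49
    ```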

  11. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE PAGES

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe; ...

    2017-04-07

    In this study, the increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.
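
    Standard FBA, the steady-state baseline that uFBA relaxes, is a linear program over a stoichiometric matrix. The toy network and scipy-based solve below illustrate only that baseline; the uFBA extension (replacing selected steady-state constraints with measured concentration derivatives) is indicated in a comment and is not implemented here.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network:  R1: -> A,  R2: A -> B,  R3: B -> (biomass)
    S = np.array([
        [1.0, -1.0,  0.0],   # metabolite A
        [0.0,  1.0, -1.0],   # metabolite B
    ])
    bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]   # flux bounds; uptake capped at 10

    # FBA: maximize the biomass flux R3 subject to the steady-state constraint S v = 0.
    c = np.array([0.0, 0.0, -1.0])                         # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal biomass flux:", res.x[2])

    # uFBA (conceptually): for metabolites with measured time-course concentrations, the
    # corresponding rows of "S v = 0" become "S_i v = d[c_i]/dt" estimated from the
    # quantitative metabolomics data, while unmeasured metabolites stay at steady state.
    ```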

  12. Temperature induced complementary switching in titanium oxide resistive random access memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, D., E-mail: dpanda@nist.edu; Department of Electronics Engineering and Institute of Electronics, National Chiao Tung University, Hsinchu 30010, Taiwan; Simanjuntak, F. M.

    2016-07-15

    On the way towards high memory density and computer performance, a considerable development in energy efficiency represents the foremost aspiration in future information technology. A complementary resistive switch consists of two antiserial resistive switching memory (RRAM) elements and allows for the construction of large passive crossbar arrays by solving the sneak path problem in combination with a drastic reduction of the power consumption. Here we present a titanium oxide based complementary RRAM (CRRAM) device with a Pt top and TiN bottom electrode. A subsequent post-metal annealing at 400°C induces CRRAM. A forming voltage of 4.3 V is required for this device to initiate the switching process. The same device also exhibits bipolar switching at lower compliance current, Ic < 50 μA. The CRRAM device has high reliability. The formation of an intermediate titanium oxynitride layer is confirmed from the cross-sectional HRTEM analysis. The origin of the complementary switching mechanism is discussed on the basis of AES and HRTEM analyses and a schematic diagram. This paper provides valuable data along with analysis on the origin of CRRAM for application in nanoscale devices.

  13. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe

    In this study, the increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.

  14. Strong wave/mean-flow coupling in baroclinic acoustic streaming

    NASA Astrophysics Data System (ADS)

    Chini, Greg; Michel, Guillaume

    2017-11-01

    Recently, Chini et al. demonstrated the potential for large-amplitude acoustic streaming in compressible channel flows subjected to strong background cross-channel density variations. In contrast with classic Rayleigh streaming, standing acoustic waves of O (ɛ) amplitude acquire vorticity owing to baroclinic torques acting throughout the domain rather than via viscous torques acting in Stokes boundary layers. More significantly, these baroclinically-driven streaming flows have a magnitude that also is O (ɛ) , i.e. comparable to that of the sound waves. In the present study, the consequent potential for fully two-way coupling between the waves and streaming flows is investigated using a novel WKBJ analysis. The analysis confirms that the wave-driven streaming flows are sufficiently strong to modify the background density gradient, thereby modifying the leading-order acoustic wave structure. Simulations of the wave/mean-flow system enabled by the WKBJ analysis are performed to illustrate the nature of the two-way coupling, which contrasts sharply with classic Rayleigh streaming, for which the waves can first be determined and the streaming flows subsequently computed.

  15. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method offers superior overall performance, with a computational cost significantly lower than that of other methods. The proposed method can be regarded as a valid new general-purpose alternative for feature extraction in various tasks in spectral data analysis.
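
    The abstract does not spell out the layer-wise analytical training rule, so the sketch below shows one common scheme of that flavour, an extreme-learning-machine-style autoencoder whose decoding weights are obtained in closed form by least squares, purely as an illustration. The layer sizes and the random placeholder "spectra" are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder_layer(X, n_hidden):
    """One analytically trained autoencoder layer (ELM-AE style sketch).

    A random projection is fixed, the decoding weights are obtained in closed
    form by least squares, and the transpose of those weights is reused as a
    data-adapted encoder producing the features for the next layer.
    """
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden representation
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # closed-form decoder weights
    return np.tanh(X @ beta.T)                    # encoded features for next layer

# Illustrative "spectra": 200 samples x 500 flux bins (random placeholder data).
spectra = rng.standard_normal((200, 500))

# Stack a few layers to obtain increasingly abstract features.
features = spectra
for n_hidden in (128, 64, 16):
    features = elm_autoencoder_layer(features, n_hidden)

print("final feature shape:", features.shape)     # (200, 16)
```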

  16. Persistent medial foot pain in an adolescent athlete.

    PubMed

    Hensley, Craig P; Reischl, Stephen F

    2013-03-01

    The patient was a 15-year-old adolescent male who was referred to a physical therapist for a chief complaint of worsening right medial foot pain. Given the worsening nature of the patient's right medial foot pain, palpatory findings, and a prior recommendation for computed tomography from a radiologist, the patient was referred to his physician. Subsequent computed tomography imaging of the right foot revealed a nondisplaced fracture through the dorsal-medial aspect of the navicular.

  17. Unstructured mesh generation and adaptivity

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.

  18. Feasibility study of automatic control of crew comfort in the shuttle Extravehicular Mobility Unit. [liquid cooled garment regulator

    NASA Technical Reports Server (NTRS)

    Cook, D. W.

    1977-01-01

    Computer simulation is used to demonstrate that crewman comfort can be assured by using automatic control of the inlet temperature of the coolant into the liquid cooled garment when input to the controller consists of measurements of the garment inlet temperature and the garment outlet temperature difference. Subsequent tests using a facsimile of the control logic developed in the computer program confirmed the feasibility of such a design scheme.
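
    A minimal sketch of the kind of feedback logic described, assuming a simple proportional law that nudges the coolant inlet setpoint so that the heat removed (inferred from the inlet/outlet temperature difference) tracks a comfort target; all gains, limits, and flow numbers below are placeholders rather than actual EMU values.

```python
def lcg_inlet_setpoint(t_inlet, t_outlet, q_target=300.0, k_p=0.02,
                       t_min=7.0, t_max=30.0, mdot=1.1, cp=4186.0):
    """Toy proportional controller for a liquid cooled garment (illustrative).

    Heat removed by the garment is estimated from the inlet/outlet temperature
    difference and the (assumed) coolant flow rate in kg/min; the inlet
    setpoint is adjusted so that the removed heat tracks q_target (watts).
    """
    q_removed = mdot * cp * (t_outlet - t_inlet) / 60.0   # rough heat removal, W
    error = q_target - q_removed
    new_setpoint = t_inlet - k_p * error          # need more cooling -> colder inlet
    return min(max(new_setpoint, t_min), t_max)   # clamp to hardware limits

# Example call: removing slightly more heat than the target warms the inlet.
print(lcg_inlet_setpoint(t_inlet=15.0, t_outlet=20.0))
```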

  19. A numerical study of incompressible juncture flows

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Rogers, S. E.; Kaul, U. K.; Chang, J. L. C.

    1986-01-01

    The laminar, steady juncture flow around single or multiple posts mounted between two flat plates is simulated using the three dimensional incompressible Navier-Stokes code, INS3D. The three dimensional separation of the boundary layer and subsequent formation and development of the horseshoe vortex is computed. The computed flow compares favorably with the experimental observation. The recent numerical study to understand and quantify the juncture flow relevant to the Space Shuttle main engine power head is summarized.

  20. Structural analysis of a frangible nut used on the NASA Space Shuttle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metzinger, K.E.

    A structural analysis methodology has been developed for the NASA 2.5-inch frangible nut used on the Space Shuttle. Two of these nuts are used to secure the External Tank to the aft end of the Orbiter. Both nuts must completely fracture before the Orbiter can safely separate from the External Tank. Ideally, detonation of only one of the two explosive boosters contained in each nut should be sufficient to completely break the nut. However, after an uncontrolled change in the Inconel 718 material processing, recent tests indicate that in certain circumstances both boosters may be required. This report details the material characterization and subsequent structural analyses of nuts manufactured from two lots of Inconel 718. The nuts from the HSX lot were observed to consistently separate with only one booster, while the nuts from the HBT lot never completely fractured with a single booster. The material characterization requires only tensile test data and the determination of a tearing parameter based on a computer simulation of a tensile test. Subsequent structural analyses using the PRONTO2D finite element code correctly predict the differing response of nuts fabricated from these two lots. This agreement is important because it demonstrates that this technique can be used to screen lots of Inconel 718 before manufacturing frangible nuts from them. To put this new capability into practice, Sandia personnel have transferred this technology to the Pyrotechnics Group at NASA-JSC.

  1. Applying a CAD-generated imaging marker to assess short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin

    2018-02-01

    Whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms remains controversial due to higher false-positive detection rates. The objective of this study is to investigate and test a new hypothesis that CAD-generated false-positives, in particular the bilateral summation of false-positives, constitute a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD scheme, including detection seeds, the total number of false-positive regions, the average detection score, and the sum of detection scores in the CC and MLO view images. The features computed from the two bilateral images of the left and right breasts were then combined for each of the CC and MLO views. In order to predict the likelihood of each testing case being positive in the subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC=0.65+/-0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p<0.01). Thus, the study showed that CAD-generated false-positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.
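
    The study design (case-level CAD-derived features, a logistic regression model, leave-one-case-out cross-validation, AUC scoring) maps onto a short scikit-learn sketch. The snippet below uses synthetic stand-in features and labels, so it only illustrates the evaluation protocol, not the reported results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Placeholder case-level features, e.g. bilateral sums of CAD false-positive
# counts and detection scores for the CC and MLO views (synthetic stand-ins).
n_cases = 200
X = rng.standard_normal((n_cases, 4))
y = rng.integers(0, 2, n_cases)        # 1 = cancer at next screening (synthetic)

# Leave-one-case-out cross-validated risk scores, as in the study design.
model = LogisticRegression(max_iter=1000)
scores = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                           method="predict_proba")[:, 1]

print("LOOCV AUC:", roc_auc_score(y, scores))
```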

  2. Inconsistency in 9 mm bullets: correlation of jacket thickness to post-impact geometry measured with non-destructive X-ray computed tomography.

    PubMed

    Thornby, John; Landheer, Dirk; Williams, Tim; Barnes-Warden, Jane; Fenne, Paul; Norman, Danielle G; Attridge, Alex; Williams, Mark A

    2014-01-01

    Fundamental to any ballistic armour standard is the reference projectile to be defeated. Typically, for certification purposes, a consistent and symmetrical bullet geometry is assumed; however, variations in bullet jacket dimensions can have far reaching consequences. Traditionally, characteristics and internal dimensions have been analysed by physically sectioning bullets, an approach which is of restricted scope and which precludes subsequent ballistic assessment. The use of a non-destructive X-ray computed tomography (CT) method has been demonstrated and validated (Kumar et al., 2011 [15]); the authors now apply this technique to correlate bullet impact response with jacket thickness variations. A set of 20 bullets (9 mm DM11) was selected for comparison and an image-based analysis method was employed to map jacket thickness and determine the centre of gravity of each specimen. Both intra- and inter-bullet variations were investigated, with thickness variations of the order of 200 μm commonly found along the length of all bullets and angular variations of up to 50 μm in some. The bullets were subsequently impacted against a rigid flat plate under controlled conditions (observed on a high-speed video camera) and the resulting deformed projectiles were re-analysed. The results of the experiments demonstrate a marked difference in ballistic performance between bullets from different manufacturers and an asymmetric thinning of the jacket is observed in regions of pre-impact weakness. The conclusions are relevant for future soft armour standards and provide important quantitative data for numerical model correlation and development. The implications of the findings of the work on the reliability and repeatability of the industry standard V50 ballistic test are also discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Endothelialization of over- and undersized flow-diverter stents at covered vessel side branches: An in vivo and in silico study.

    PubMed

    Berg, Philipp; Iosif, Christina; Ponsonnard, Sebastien; Yardin, Catherine; Janiga, Gábor; Mounayer, Charbel

    2016-01-04

    Although flow-diverting devices are promising treatment options for intracranial aneurysms, jailed side branches might occlude, leading to insufficient blood supply. In particular, differences in the local stent strut compression may have a drastic influence on subsequent endothelialization. To investigate the outcome of different treatment scenarios, over- and undersized stent deployments were realized experimentally and computationally. Two Pipeline Embolization Devices were placed in the right common carotid artery of large white swine, crossing the right ascending pharyngeal artery. DSA and PC-MRI measurements were acquired pre- and post-stenting and after three months. To evaluate the stent strut endothelialization and the corresponding ostium patency, the swine were sacrificed and scanning electron microscopy measurements were carried out. A more detailed analysis of the near-stent hemodynamics was enabled by a realistic virtual stenting in combination with highly resolved Computational Fluid Dynamics simulations using case-specific boundary conditions. The oversizing resulted in an elongated stent deployment with more open stent pores, while for the undersized case a shorter deployment with more condensed pores was present. In consequence, the side branch of the first case remained patent after three months and the latter almost fully occluded. The virtual investigation confirmed the experimental findings by identifying differences between the individual velocities as well as stent shear stresses at the distal part of the ostia. The choice of flow-diverting device and the subsequent deployment strategy strongly influences the patency of jailed side branches. Therefore, careful treatment planning is required to guarantee sufficient blood supply in the brain territories supplied by those branches. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. PAI-OFF: A new proposal for online flood forecasting in flash flood prone catchments

    NASA Astrophysics Data System (ADS)

    Schmitz, G. H.; Cullmann, J.

    2008-10-01

    The Process Modelling and Artificial Intelligence for Online Flood Forecasting (PAI-OFF) methodology combines the reliability of physically based, hydrologic/hydraulic modelling with the operational advantages of artificial intelligence. These operational advantages are extremely low computation times and straightforward operation. The basic principle of the methodology is to portray process models by means of ANN. We propose to train ANN flood forecasting models with synthetic data that reflects the possible range of storm events. To this end, establishing PAI-OFF requires first setting up a physically based hydrologic model of the considered catchment and - optionally, if backwater effects have a significant impact on the flow regime - a hydrodynamic flood routing model of the river reach in question. Both models are subsequently used for simulating all meaningful and flood relevant storm scenarios which are obtained from a catchment specific meteorological data analysis. This provides a database of corresponding input/output vectors which is then completed by generally available hydrological and meteorological data for characterizing the catchment state prior to each storm event. This database subsequently serves for training both a polynomial neural network (PoNN) - portraying the rainfall-runoff process - and a multilayer neural network (MLFN), which mirrors the hydrodynamic flood wave propagation in the river. These two ANN models replace the hydrological and hydrodynamic model in the operational mode. After presenting the theory, we apply PAI-OFF - essentially consisting of the coupled "hydrologic" PoNN and "hydrodynamic" MLFN - to the Freiberger Mulde catchment in the Erzgebirge (Ore-mountains) in East Germany (3000 km²). Both the demonstrated computational efficiency and the prediction reliability underline the potential of the new PAI-OFF methodology for online flood forecasting.
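
    The core idea of PAI-OFF, training fast ANN surrogates on a database of synthetic storm scenarios simulated by a process model, can be sketched as follows. The toy "process model", the input ranges, and the use of a generic multilayer perceptron (rather than the paper's polynomial and multilayer networks) are all simplifying assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def process_model(storm_depth, storm_duration, soil_moisture):
    """Stand-in for the physically based catchment model (toy relationship)."""
    return 0.8 * storm_depth * soil_moisture / (1.0 + storm_duration)

# Synthetic training database spanning the meaningful range of storm scenarios.
n = 5000
X = np.column_stack([
    rng.uniform(5, 150, n),     # storm depth [mm]
    rng.uniform(1, 48, n),      # storm duration [h]
    rng.uniform(0.1, 1.0, n),   # antecedent soil moisture [-]
])
y = process_model(*X.T)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
).fit(X_train, y_train)
print("surrogate R^2 on held-out scenarios:", surrogate.score(X_test, y_test))
```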

  5. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  6. Energy Navigation: Simulation Evaluation and Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Williams, David H.; Oseguera-Lohr, Rosa M.; Lewis, Elliot T.

    2011-01-01

    This paper presents results from two simulation studies investigating the use of advanced flight-deck-based energy navigation (ENAV) and conventional transport-category vertical navigation (VNAV) for conducting a descent through a busy terminal area, using Continuous Descent Arrival (CDA) procedures. This research was part of the Low Noise Flight Procedures (LNFP) element within the Quiet Aircraft Technology (QAT) Project, and the subsequent Airspace Super Density Operations (ASDO) research focus area of the Airspace Project. A piloted simulation study addressed development of flight guidance, and supporting pilot and Air Traffic Control (ATC) procedures for high density terminal operations. The procedures and charts were designed to be easy to understand, and to make it easy for the crew to make changes via the Flight Management Computer Control-Display Unit (FMC-CDU) to accommodate changes from ATC.

  7. Extraction of brewer's yeasts using different methods of cell disruption for practical biodiesel production.

    PubMed

    Řezanka, Tomáš; Matoulková, Dagmar; Kolouchová, Irena; Masák, Jan; Viden, Ivan; Sigler, Karel

    2015-05-01

    The methods of preparation of fatty acids from brewer's yeast and their use in the production of biofuels and in different branches of industry are described. Isolation of fatty acids from cell lipids includes cell disintegration (e.g., with liquid nitrogen, KOH, NaOH, petroleum ether, nitrogenous basic compounds, etc.) and subsequent processing of extracted lipids, including analysis of fatty acids and computation of biodiesel properties such as viscosity, density, cloud point, and cetane number. Methyl esters obtained from brewer's waste yeast are well suited for the production of biodiesel. All 49 samples (7 breweries and 7 methods) meet the requirements for biodiesel quality in both the composition of fatty acids and the properties of the biofuel required by the US and EU standards.

  8. Investigating Event Memory in Children with Autism Spectrum Disorder: Effects of a Computer-Mediated Interview.

    PubMed

    Hsu, Che-Wei; Teoh, Yee-San

    2017-02-01

    The present study aimed to examine the effects of a novel avatar interviewing aid during memory interviews with children with autism spectrum disorder (ASD). Thirty children were recruited for our study (Age: M = 7.60, SD = 0.68), half with ASD (13 boys; 2 girls) and the other half being neurotypical (13 boys; 2 girls). Children participated in a target event and were subsequently interviewed a week later by either an avatar interviewer or a human. The participants were also asked six misleading questions aimed at examining their suggestibility. Bayesian analysis showed some increase in memory performance for both groups of children interviewed by the avatar interviewer, and this effect was more pronounced for children with ASD. These results have encouraging implications for future applications.

  9. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue condition monitoring of springs and proposes a new image-processing-based test method for the telescopic characteristics of the circuit breaker operating mechanism spring. A high-speed camera is used to capture image sequences of the spring movement when the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted, laying a foundation for subsequent spring force analysis and matching state evaluation. Simulation tests performed at the experimental site show that this image-analysis method avoids the complicated installation of traditional mechanical sensors and enables online monitoring and status assessment of the circuit breaker spring.
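
    A plausible reading of the image-matching step is normalized cross-correlation template matching applied frame by frame; the OpenCV sketch below tracks a marked patch through an image sequence and differentiates the resulting displacement to obtain a speed-time curve. It is an illustrative baseline with a synthetic moving-marker sequence, not the authors' algorithm.

```python
import cv2
import numpy as np

def track_spring_displacement(frames, template, fps):
    """Track a marked region across grayscale frames by template matching.

    Returns time, displacement (pixels) and velocity (pixels/s).
    """
    positions = []
    for frame in frames:
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)      # best-match top-left corner
        positions.append(max_loc[1])               # vertical coordinate
    positions = np.asarray(positions, dtype=float)
    displacement = positions - positions[0]
    velocity = np.gradient(displacement) * fps
    t = np.arange(len(positions)) / fps
    return t, displacement, velocity

# Minimal synthetic demo: a bright marker moving downwards frame by frame.
frames = []
for k in range(10):
    img = np.tile(np.arange(100, dtype=np.uint8), (200, 1))   # textured background
    img[40 + 3 * k:60 + 3 * k, 40:60] = 255                   # bright marker
    frames.append(img)
template = frames[0][30:70, 30:70].copy()

t, disp, vel = track_spring_displacement(frames, template, fps=2000.0)
print(disp)   # approximately 0, 3, 6, ... pixels
```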

  10. A systematization of spectral data on the methanol molecule

    NASA Astrophysics Data System (ADS)

    Akhlyostin, A. Yu.; Voronina, S. S.; Lavrentiev, N. A.; Privezentsev, A. I.; Rodimova, O. B.; Fazliev, A. Z.

    2015-11-01

    Problems underlying a systematization of spectral data on the methanol molecule are formulated. Data on the energy levels and vacuum wavenumbers acquired from the published literature are presented in the form of information sources imported into the W@DIS information system. Sets of quantum numbers and labels used to describe the CH3OH molecular states are analyzed. The set of labels is different from universally accepted sets. A system of importing the data sources into W@DIS is outlined. The structure of databases characterizing transitions in an isolated CH3OH molecule is introduced and a digital library of the relevant published literature is discussed. A brief description is given of an imported data quality analysis and representation of the results obtained in the form of ontologies for subsequent computer processing.

  11. On the topology of chromatin fibres

    PubMed Central

    Barbi, Maria; Mozziconacci, Julien; Victor, Jean-Marc; Wong, Hua; Lavelle, Christophe

    2012-01-01

    The ability of cells to pack, use and duplicate DNA remains one of the most fascinating questions in biology. To understand DNA organization and dynamics, it is important to consider the physical and topological constraints acting on it. In the eukaryotic cell nucleus, DNA is organized by proteins acting as spools on which DNA can be wrapped. These proteins can subsequently interact and form a structure called the chromatin fibre. Using a simple geometric model, we propose a general method for computing topological properties (twist, writhe and linking number) of the DNA embedded in those fibres. The relevance of the method is reviewed through the analysis of magnetic tweezers single molecule experiments that revealed unexpected properties of the chromatin fibre. Possible biological implications of these results are discussed. PMID:24098838
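
    The topological bookkeeping behind such calculations is the White-Fuller decomposition of the linking number into twist and writhe; stated generically (and not as the paper's specific fibre model), with the writhe given by the Gauss double integral over the DNA axis curve C:

```latex
% Linking number decomposition (White-Fuller) and the Gauss writhe integral
Lk = Tw + Wr, \qquad
Wr = \frac{1}{4\pi}\oint_{C}\!\oint_{C}
     \frac{\bigl(d\mathbf{r}_1 \times d\mathbf{r}_2\bigr)\cdot(\mathbf{r}_1-\mathbf{r}_2)}
          {\lvert \mathbf{r}_1-\mathbf{r}_2 \rvert^{3}} .
```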

  12. Improved 3-D turbomachinery CFD algorithm

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1988-01-01

    The building blocks of a computer algorithm developed for the time-accurate flow analysis of rotating machines are described. The flow model is a finite volume method utilizing a high resolution approximate Riemann solver for interface flux definitions. This block LU implicit numerical scheme possesses apparent unconditional stability. Multi-block composite gridding is used to orderly partition the field into a specified arrangement. Block interfaces, including dynamic interfaces, are treated such as to mimic interior block communication. Special attention is given to the reduction of in-core memory requirements by placing the burden on secondary storage media. Broad applicability is implied, although the results presented are restricted to that of an even blade count configuration. Several other configurations are presently under investigation, the results of which will appear in subsequent publications.

  13. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.
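
    The paper's automated boundary-determination strategy is not spelled out in the abstract; as a point of reference, the snippet below shows a conventional threshold-and-label baseline with scikit-image (using a bundled sample image as a stand-in for fluorescence data), the kind of simple pipeline such methods aim to improve upon.

```python
from skimage import data, filters, measure, morphology

# Illustrative image (scikit-image sample data as a stand-in for fluorescence).
image = data.coins()

# Automatic threshold, small-object cleanup, then labelled cell regions.
mask = image > filters.threshold_otsu(image)
mask = morphology.remove_small_objects(mask, min_size=64)
labels = measure.label(mask)

# Cell boundaries summarized by simple per-region shape descriptors.
for region in measure.regionprops(labels):
    print(f"cell {region.label}: area={region.area}, perimeter={region.perimeter:.1f}")
```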

  14. Initial verification and validation of RAZORBACK - A research reactor transient analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2015-09-01

    This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.

  15. Ares I-X Post Flight Ignition Overpressure Review

    NASA Technical Reports Server (NTRS)

    Alvord, David A.

    2010-01-01

    Ignition Overpressure (IOP) is an unsteady fluid flow and acoustic phenomena caused by the rapid expansion of gas from the rocket nozzle within a ducted launching space resulting in an initially higher amplitude pressure wave. This wave is potentially dangerous to the structural integrity of the vehicle. An in-depth look at the IOP environments resulting from the Ares I-X Solid Rocket Booster configuration showed high correlation between the pre-flight predictions and post-flight analysis results. Correlation between the chamber pressure and IOP transients showed successful acoustic mitigation, containing the strongest IOP waves below the Mobile Launch Pad deck. The flight data allowed subsequent verification and validation of Ares I-X unsteady fluid ducted launcher predictions, computational fluid dynamic models, and strong correlation with historical Shuttle data.

  16. PGMS: a case study of collecting PDA-based geo-tagged malaria-related survey data.

    PubMed

    Zhou, Ying; Lobo, Neil F; Wolkon, Adam; Gimnig, John E; Malishee, Alpha; Stevenson, Jennifer; Sulistyawati; Collins, Frank H; Madey, Greg

    2014-09-01

    Using mobile devices, such as personal digital assistants (PDAs), smartphones, tablet computers, etc., to electronically collect malaria-related field data is the way forward for field questionnaires. This case study seeks to design a generic survey framework, the PDA-based geo-tagged malaria-related data collection tool (PGMS), that can be used not only for large-scale community-level geo-tagged electronic malaria-related surveys, but also for a wide variety of electronic data collections for other infectious diseases. The framework includes two parts: the database designed for subsequent cross-sectional data analysis and the customized programs for the six study sites (two in Kenya, three in Indonesia, and one in Tanzania). In addition to the framework development, we also present the methods we used when configuring and deploying the PDAs to 1) reduce data entry errors, 2) conserve battery power, 3) field install the programs onto dozens of handheld devices, 4) translate electronic questionnaires into local languages, 5) prevent data loss, and 6) transfer data from PDAs to computers for future analysis and storage. Since 2008, PGMS has successfully completed numerous surveys that recorded 10,871 compounds and households, 52,126 persons, and 17,100 bed nets from the six sites. These numbers are still growing. © The American Society of Tropical Medicine and Hygiene.

  17. Studies of Flame Structure in Microgravity

    NASA Technical Reports Server (NTRS)

    Law, C. K.; Sung, C. J.; Zhu, D. L.

    1997-01-01

    The present research endeavor is concerned with gaining fundamental understanding of the configuration, structure, and dynamics of laminar premixed and diffusion flames under conditions of negligible effects of gravity. Of particular interest is the potential to establish and hence study the properties of spherically- and cylindrically-symmetric flames and their response to external forces not related to gravity. For example, in an earlier experimental study of the burner-stabilized cylindrical premixed flames, the possibility of flame stabilization through flow divergence was established, while the resulting one-dimensional, adiabatic, stretchless flame also allowed an accurate means of determining the laminar flame speeds of combustible mixtures. We have recently extended our studies of the flame structure in microgravity along the following directions: (1) Analysis of the dynamics of spherical premixed flames; (2) Analysis of the spreading of cylindrical diffusion flames; (3) Experimental observation of an interesting dual luminous zone structure of a steady-state, microbuoyancy, spherical diffusion flame of air burning in a hydrogen/methane mixture environment, and its subsequent quantification through computational simulation with detailed chemistry and transport; (4) Experimental quantification of the unsteady growth of a spherical diffusion flame; and (5) Computational simulation of stretched, diffusionally-imbalanced premixed flames near and beyond the conventional limits of flammability, and the substantiation of the concept of extended limits of flammability. Motivation and results of these investigations are individually discussed.

  18. Increased detection of Barrett’s esophagus and esophageal dysplasia with adjunctive use of wide-area transepithelial sample with three-dimensional computer-assisted analysis (WATS)

    PubMed Central

    Gross, Seth A; Smith, Michael S; Kaul, Vivek

    2017-01-01

    Background Barrett’s esophagus (BE) and esophageal dysplasia (ED) are frequently missed during screening and surveillance esophagoscopy because of sampling error associated with four-quadrant random forceps biopsy (FB). Aim The aim of this article is to determine if wide-area transepithelial sampling with three-dimensional computer-assisted analysis (WATS) used adjunctively with FB can increase the detection of BE and ED. Methods In this multicenter prospective trial, patients screened for suspected BE and those with known BE undergoing surveillance were enrolled. Patients at 25 community-based practices underwent WATS adjunctively to targeted FB and random four-quadrant FB. Results Of 4203 patients, 594 were diagnosed with BE by FB alone, and 493 additional cases were detected by adding WATS, increasing the overall detection of BE by 83% (493/594, 95% CI 74%–93%). Low-grade dysplasia (LGD) was diagnosed in 26 patients by FB alone, and 23 additional cases were detected by adding WATS, increasing the detection of LGD by 88.5% (23/26, 95% CI 48%–160%). Conclusions Adjunctive use of WATS to FB significantly improves the detection of both BE and ED. Sampling error, an inherent limitation associated with screening and surveillance, can be improved with WATS allowing better informed decisions to be made about the management and subsequent treatment of these patients. PMID:29881608

  19. Investigation of cellular detonation structure formation via linear stability theory and 2D and 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Borisov, S. P.; Kudryavtsev, A. N.

    2017-10-01

    Linear and nonlinear stages of the instability of a plane detonation wave (DW) and the subsequent process of formation of cellular detonation structure are investigated. A simple model with one-step irreversible chemical reaction is used. The linear analysis is employed to predict the DW front structure at the early stages of its formation. An emerging eigenvalue problem is solved with a global method using a Chebyshev pseudospectral method and the LAPACK software library. A local iterative shooting procedure is used for eigenvalue refinement. Numerical simulations of the propagation of a DW in plane and rectangular channels are performed with a fifth-order shock-capturing WENO scheme. A special computational domain shifting method is implemented in order to keep the DW within the domain. It is shown that the linear analysis gives certain predictions about the DW structure that are in agreement with the numerical simulations of early stages of DW propagation. However, at later stages, a merger of detonation cells occurs so that their number is approximately halved. Computations of DW propagation in a square channel reveal two different types of spatial structure of the DW front, "rectangular" and "diagonal" types. A spontaneous transition from the rectangular to diagonal type of structure is observed during propagation of the DW.
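
    For readers unfamiliar with the global (matrix) approach to such eigenvalue problems, the sketch below builds a Chebyshev differentiation matrix and computes the spectrum of a simple diffusion operator with Dirichlet boundary conditions. It is a generic model problem standing in for the linearized reactive Euler operator of the paper, and the local iterative shooting refinement is not shown.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and grid on [-1, 1] (Trefethen's recipe)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model global stability problem: second-derivative (diffusion) operator with
# Dirichlet boundary conditions, solved as a dense eigenvalue problem.
n = 64
D, x = cheb(n)
L = (D @ D)[1:-1, 1:-1]          # strip boundary rows/columns
eigvals = np.linalg.eigvals(L)
print("least-damped eigenvalues:", np.sort(eigvals.real)[-3:])
```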

  20. Aneurysm miRNA Signature Differs, Depending on Disease Localization and Morphology

    PubMed Central

    Busch, Albert; Busch, Martin; Scholz, Claus-Jürgen; Kellersmann, Richard; Otto, Christoph; Chernogubova, Ekaterina; Maegdefessel, Lars; Zernecke, Alma; Lorenz, Udo

    2016-01-01

    Limited comprehension of aneurysm pathology has led to inconclusive results from clinical trials. miRNAs are key regulators of post-translational gene modification and are useful tools in elucidating key features of aneurysm pathogenesis in distinct entities of abdominal and popliteal aneurysms. Here, surgically harvested specimens from 19 abdominal aortic aneurysm (AAA) and 8 popliteal artery aneurysm (PAA) patients were analyzed for miRNA expression and histologically classified regarding extracellular matrix (ECM) remodeling and inflammation. DIANA-based computational target prediction and pathway enrichment analysis verified our results, as well as previous ones. miRNA-362, -19b-1, -194, -769, -21 and -550 were significantly down-regulated in AAA samples depending on degree of inflammation. Similar or inverse regulation was found for miR-769, 19b-1 and miR-550, -21, whereas miR-194 and -362 were unaltered in PAA. In situ hybridization verified higher expression of miR-550 and -21 in PAA compared to AAA and computational analysis for target genes and pathway enrichment affirmed signal transduction, cell-cell-interaction and cell degradation pathways, in line with previous results. Despite the vague role of miRNAs for potential diagnostic and treatment purposes, the number of candidates from tissue signature studies is increasing. Tissue morphology influences subsequent research, yet comparison of distinct entities of aneurysm disease can unravel core pathways. PMID:26771601

  1. Volumetric characterization of human patellar cartilage matrix on phase contrast x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Abidin, Anas Z.; Nagarajan, Mahesh B.; Checefsky, Walter A.; Coan, Paola; Diemoz, Paul C.; Hobbs, Susan K.; Huber, Markus B.; Wismüller, Axel

    2015-03-01

    Phase contrast X-ray computed tomography (PCI-CT) has recently emerged as a novel imaging technique that allows visualization of cartilage soft tissue, subsequent examination of chondrocyte patterns, and their correlation to osteoarthritis. Previous studies have shown that 2D texture features are effective at distinguishing between healthy and osteoarthritic regions of interest annotated in the radial zone of cartilage matrix on PCI-CT images. In this study, we further extend the texture analysis to 3D and investigate the ability of volumetric texture features to characterize chondrocyte patterns in the cartilage matrix for purposes of classification. Here, we extracted volumetric texture features derived from Minkowski Functionals and gray-level co-occurrence matrices (GLCM) from 496 volumes of interest (VOI) annotated on PCI-CT images of human patellar cartilage specimens. The extracted features were then used in a machine-learning task involving support vector regression to classify VOIs as healthy or osteoarthritic. Classification performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The best classification performance was observed with the GLCM features correlation (AUC = 0.83 +/- 0.06) and homogeneity (AUC = 0.82 +/- 0.07), which significantly outperformed all Minkowski Functionals (p < 0.05). These results suggest that such quantitative analysis of chondrocyte patterns in human patellar cartilage matrix involving GLCM-derived statistical features can distinguish between healthy and osteoarthritic tissue with high accuracy.
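
    A simplified 2D analogue of the texture pipeline (GLCM correlation and homogeneity features followed by a cross-validated support vector classifier and AUC scoring) is sketched below with scikit-image and scikit-learn. The random patches and labels are synthetic placeholders, and the study's 3D features and Minkowski Functionals are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def glcm_features(patch):
    """GLCM correlation and homogeneity for one 8-bit image patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, "correlation").mean(),
            graycoprops(glcm, "homogeneity").mean()]

# Synthetic stand-ins for annotated cartilage patches (real data would be
# slices or volumes of interest from the PCI-CT images).
patches = rng.integers(0, 256, size=(100, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=100)            # 0 = healthy, 1 = osteoarthritic

X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf", probability=True, random_state=0)
scores = cross_val_predict(clf, X, labels, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(labels, scores))
```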

  2. Higher-Order Theory for Functionally Graded Materials

    NASA Technical Reports Server (NTRS)

    Aboudi, J.; Pindera, M. J.; Arnold, Steven M.

    2001-01-01

    Functionally graded materials (FGM's) are a new generation of engineered materials wherein the microstructural details are spatially varied through nonuniform distribution of the reinforcement phase(s). Engineers accomplish this by using reinforcements with different properties, sizes, and shapes, as well as by interchanging the roles of the reinforcement and matrix phases in a continuous manner (ref. 1). The result is a microstructure that produces continuously or discretely changing thermal and mechanical properties at the macroscopic or continuum scale. This new concept of engineering the material's microstructure marks the beginning of a revolution both in the materials science and mechanics of materials areas since it allows one, for the first time, to fully integrate the material and structural considerations into the final design of structural components. Functionally graded materials are ideal candidates for applications involving severe thermal gradients, ranging from thermal structures in advanced aircraft and aerospace engines to computer circuit boards. Owing to the many variables that control the design of functionally graded microstructures, full exploitation of the FGM's potential requires the development of appropriate modeling strategies for their response to combined thermomechanical loads. Previously, most computational strategies for the response of FGM's did not explicitly couple the material's heterogeneous microstructure with the structural global analysis. Rather, local effective or macroscopic properties at a given point within the FGM were first obtained through homogenization based on a chosen micromechanics scheme and then subsequently used in a global thermomechanical analysis.

  3. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    PubMed

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. It is often, however, that only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab, and is provided as supplementary information . hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and are in the public domain in the US.
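
    A heavily simplified sketch of the alternating IP/LP idea on a toy irreversible network is given below using PuLP: the IP picks a smallest reaction-deletion set that disables every mode found so far, and the LP then searches for a flux distribution that avoids those reactions; when the LP fails, the deletion set is reported as a cut set. The network, the normalization via flux bounds, and the omission of reversibility handling and elementarity guarantees are all simplifying assumptions relative to the published AILP algorithm.

```python
import pulp

# Toy irreversible network (metabolites x reactions), illustrative only.
S = [[ 1, -1, -1,  0,  0],
     [ 0,  1,  0, -1,  0],
     [ 0,  0,  1,  0, -1]]
n_rxn = len(S[0])
target = 0                     # reaction whose pathways we enumerate

def solve_lp(deleted):
    """LP step: find a flux distribution that avoids the deleted reactions."""
    lp = pulp.LpProblem("EM_candidate", pulp.LpMaximize)
    v = [pulp.LpVariable(f"v{j}", 0, 1) for j in range(n_rxn)]
    lp += v[target]                                   # objective: target flux
    for row in S:
        lp += pulp.lpSum(row[j] * v[j] for j in range(n_rxn)) == 0
    for j in deleted:
        lp += v[j] == 0
    lp.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[lp.status] != "Optimal" or v[target].value() < 1e-6:
        return None
    return frozenset(j for j in range(n_rxn) if v[j].value() > 1e-6)

def solve_ip(found_modes):
    """IP step: smallest reaction-deletion set hitting every known mode."""
    ip = pulp.LpProblem("MCS_candidate", pulp.LpMinimize)
    y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n_rxn)]
    ip += pulp.lpSum(y)
    for mode in found_modes:
        ip += pulp.lpSum(y[j] for j in mode) >= 1
    ip += y[target] == 0           # never delete the target reaction itself
    ip.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_rxn) if y[j].value() > 0.5]

modes, cut_sets = [], []
while True:
    deleted = solve_ip(modes) if modes else []
    mode = solve_lp(deleted)
    if mode is None:               # no flux possible: deletion set is a cut set
        cut_sets.append(deleted)
        break
    modes.append(mode)

print("modes (reaction supports):", [sorted(m) for m in modes])
print("minimal cut set:", cut_sets)
```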

  4. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    USGS Publications Warehouse

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs: however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.

  5. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses produced by the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.

  6. RIEMS: a software pipeline for sensitive and comprehensive taxonomic classification of reads from metagenomics datasets.

    PubMed

    Scheuch, Matthias; Höper, Dirk; Beer, Martin

    2015-03-03

    Fuelled by the advent and subsequent development of next generation sequencing technologies, metagenomics became a powerful tool for the analysis of microbial communities both scientifically and diagnostically. The biggest challenge is the extraction of relevant information from the huge sequence datasets generated for metagenomics studies. Although a plethora of tools are available, data analysis is still a bottleneck. To overcome the bottleneck of data analysis, we developed an automated computational workflow called RIEMS - Reliable Information Extraction from Metagenomic Sequence datasets. RIEMS assigns every individual read sequence within a dataset taxonomically by cascading different sequence analyses with decreasing stringency of the assignments using various software applications. After completion of the analyses, the results are summarised in a clearly structured result protocol organised taxonomically. The high accuracy and performance of RIEMS analyses were proven in comparison with other tools for metagenomics data analysis using simulated sequencing read datasets. RIEMS has the potential to fill the gap that still exists with regard to data analysis for metagenomics studies. The usefulness and power of RIEMS for the analysis of genuine sequencing datasets was demonstrated with an early version of RIEMS in 2011 when it was used to detect the orthobunyavirus sequences leading to the discovery of Schmallenberg virus.

  7. 48 CFR 31.105 - Construction and architect-engineer contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., include unallowable interest costs, or use improper cost of money rates or computations. Contracting... contracts, including cost-reimbursement subcontracts thereunder; (2) Negotiating indirect cost rates; (3... appropriate they serve to express the parties' understanding and avoid possible subsequent disputes or...

  8. Investigation of a novel approach to scoring Giemsa-stained malaria-infected thin blood films.

    PubMed

    Proudfoot, Owen; Drew, Nathan; Scholzen, Anja; Xiang, Sue; Plebanski, Magdalena

    2008-04-21

    Daily assessment of the percentage of erythrocytes that are infected ('percent-parasitaemia') across a time-course is a necessary step in many experimental studies of malaria, but represents a time-consuming and unpopular task among researchers. The most common method is extensive microscopic examination of Giemsa-stained thin blood-films. This study explored a method for the assessment of percent-parasitaemia that does not require extended periods of microscopy and results in a descriptive and permanent record of parasitaemia data that is highly amenable to subsequent 'data-mining'. Digital photography was utilized in conjunction with a basic purpose-written computer programme to test the viability of the concept. Partial automation of the determination of percent parasitaemia was then explored, resulting in the successful customization of commercially available broad-spectrum image analysis software towards this aim. Lastly, automated discrimination between infected and uninfected RBCs based on analysis of digital parameters of individual cell images was explored in an effort to completely automate the calculation of an accurate percent-parasitaemia.

  9. An improved method of measuring heart rate using a webcam

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Ouyang, Jianfei; Yan, Yonggang

    2014-09-01

    Measuring heart rate traditionally requires special equipment and physical contact with the subject. Reliable non-contact and low-cost measurements are highly desirable for convenient and comfortable physiological self-assessment. Previous work has shown that consumer-grade cameras can provide useful signals for remote heart rate measurements. In this paper a simple and robust method of measuring the heart rate using a low-cost webcam is proposed. Blood volume pulse is extracted by proper Region of Interest (ROI) and color channel selection from image sequences of human faces without complex computation. Heart rate is subsequently quantified by spectrum analysis. The method is successfully applied under natural lighting conditions. Results of experiments show that it takes less time, is much simpler, and has similar accuracy to the previously published and widely used method of Independent Component Analysis (ICA). Benefiting from its non-contact operation, convenience, and low cost, it holds great promise for the popularization of home healthcare and can further be applied to biomedical research.
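
    The measurement recipe (average a colour channel over a facial ROI in each frame, then locate the dominant frequency in the physiological band) can be sketched in a few lines of NumPy. The band limits, frame rate, and synthetic test signal below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def heart_rate_from_roi(channel_means, fps):
    """Estimate heart rate (bpm) from the per-frame mean of a colour channel
    over a face ROI -- a common remote-PPG recipe, not the paper's exact pipeline.
    """
    x = np.asarray(channel_means, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 3.0)   # plausible pulse range: 45-180 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Synthetic 30 s signal: a 72 bpm pulse plus noise, sampled at 30 frames/s.
rng = np.random.default_rng(5)
fps = 30.0
t = np.arange(0, 30, 1 / fps)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(t.size)
print("estimated heart rate:", heart_rate_from_roi(signal, fps), "bpm")
```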

  10. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  11. Zaba: a novel miniature transposable element present in genomes of legume plants.

    PubMed

    Macas, J; Neumann, P; Pozárková, D

    2003-08-01

    A novel family of miniature transposable elements, named Zaba, was identified in pea (Pisum sativum) and subsequently also in other legume species using computer analysis of their DNA sequences. Zaba elements are 141-190 bp long, generate 10-bp target site duplications, and their terminal inverted repeats make up most of the sequence. Zaba elements thus resemble class 3 foldback transposons. The elements are only moderately repetitive in pea (tens to hundreds of copies per haploid genome), but they are present in up to thousands of copies in the genomes of several Medicago and Vicia species. More detailed analysis of the elements from pea, including isolation of new sequences from a genomic library, revealed that a fraction of these elements are truncated, and that their last transposition probably did not occur recently. A search for Zaba sequences in EST databases showed that at least some elements are transcribed, most probably due to their association with genic regions.

  12. Resilience of branching and massive corals to wave loading under sea level rise--a coupled computational fluid dynamics-structural analysis.

    PubMed

    Baldock, Tom E; Karampour, Hassan; Sleep, Rachael; Vyltla, Anisha; Albermani, Faris; Golshani, Aliasghar; Callaghan, David P; Roff, George; Mumby, Peter J

    2014-09-15

    Measurements of coral structural strength are coupled with a fluid dynamics-structural analysis to investigate the resilience of coral to wave loading under sea level rise and a typical Great Barrier Reef lagoon wave climate. The measured structural properties were used to determine the wave conditions and flow velocities that lead to structural failure. Hydrodynamic modelling was subsequently used to investigate the type of the bathymetry where coral is most vulnerable to breakage under cyclonic wave conditions, and how sea level rise (SLR) changes this vulnerability. Massive corals are determined not to be vulnerable to wave induced structural damage, whereas branching corals are susceptible at wave induced orbital velocities exceeding 0.5m/s. Model results from a large suite of idealised bathymetry suggest that SLR of 1m or a loss of skeleton strength of order 25% significantly increases the area of reef flat where branching corals are exposed to damaging wave induced flows. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Phage phenomics: Physiological approaches to characterize novel viral proteins

    ScienceCinema

    Sanchez, Savannah E. [San Diego State Univ., San Diego, CA (United States); Cuevas, Daniel A. [San Diego State Univ., San Diego, CA (United States); Rostron, Jason E. [San Diego State Univ., San Diego, CA (United States); Liang, Tiffany Y. [San Diego State Univ., San Diego, CA (United States); Pivaroff, Cullen G. [San Diego State Univ., San Diego, CA (United States); Haynes, Matthew R. [San Diego State Univ., San Diego, CA (United States); Nulton, Jim [San Diego State Univ., San Diego, CA (United States); Felts, Ben [San Diego State Univ., San Diego, CA (United States); Bailey, Barbara A. [San Diego State Univ., San Diego, CA (United States); Salamon, Peter [San Diego State Univ., San Diego, CA (United States); Edwards, Robert A. [San Diego State Univ., San Diego, CA (United States); Argonne National Lab. (ANL), Argonne, IL (United States); Burgin, Alex B. [Broad Institute, Cambridge, MA (United States); Segall, Anca M. [San Diego State Univ., San Diego, CA (United States); Rohwer, Forest [San Diego State Univ., San Diego, CA (United States)

    2018-06-21

    Current investigations into phage-host interactions are dependent on extrapolating knowledge from (meta)genomes. Interestingly, 60-95% of all phage sequences share no homology to currently annotated proteins. As a result, a large proportion of phage genes are annotated as hypothetical. This reality heavily affects the annotation of both structural and auxiliary metabolic genes. Here we present phenomic methods designed to capture the physiological response(s) of a selected host during expression of one of these unknown phage genes. Multi-phenotype Assay Plates (MAPs) are used to monitor the diversity of host substrate utilization and subsequent biomass formation, while metabolomics provides by-product analysis by monitoring metabolite abundance and diversity. Both tools are used simultaneously to provide a phenotypic profile associated with expression of a single putative phage open reading frame (ORF). Thus, representative results for both methods are compared, highlighting the phenotypic profile differences of a host carrying either putative structural or metabolic phage genes. In addition, the visualization techniques and high throughput computational pipelines that facilitated experimental analysis are presented.

  14. Quantitative structure-activity relationships by neural networks and inductive logic programming. I. The inhibition of dihydrofolate reductase by pyrimidines

    NASA Astrophysics Data System (ADS)

    Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.

    1994-08-01

    Neural networks and inductive logic programming (ILP) have been compared to linear regression for modelling the QSAR of the inhibition of E. coli dihydrofolate reductase (DHFR) by 2,4-diamino-5-(substituted benzyl)pyrimidines, and, in the subsequent paper [Hirst, J.D., King, R.D. and Sternberg, M.J.E., J. Comput.-Aided Mol. Design, 8 (1994) 421], the inhibition of rodent DHFR by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazines. Cross-validation trials provide a statistically rigorous assessment of the predictive capabilities of the methods, with training and testing data selected randomly and all the methods developed using identical training data. For the ILP analysis, molecules are represented by attributes other than Hansch parameters. Neural networks and ILP perform better than linear regression using the attribute representation, but the difference is not statistically significant. The major benefit from the ILP analysis is the formulation of understandable rules relating the activity of the inhibitors to their chemical structure.
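
    A present-day re-implementation of this style of comparison is straightforward with off-the-shelf tools. The sketch below uses hypothetical stand-in descriptors and activities (not the published DHFR data) and scores a linear model and a small neural network on identical cross-validation folds; the ILP component has no direct analogue here and is omitted.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import KFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical stand-in data: rows are inhibitors, columns are substituent
      # descriptors, y is a simulated activity with a mild nonlinear component.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(74, 9))
      y = X @ rng.normal(size=9) + 0.3 * np.tanh(X[:, 0] * X[:, 1]) + rng.normal(0, 0.1, 74)

      cv = KFold(n_splits=5, shuffle=True, random_state=0)   # identical folds for both models
      models = {
          "linear regression": LinearRegression(),
          "neural network": make_pipeline(StandardScaler(),
                                          MLPRegressor(hidden_layer_sizes=(8,),
                                                       max_iter=5000, random_state=0)),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
          print(f"{name:18s} mean cross-validated R^2 = {scores.mean():.2f}")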

  15. Thermomagnetic instabilities in a vertical layer of ferrofluid: nonlinear analysis away from a critical point

    NASA Astrophysics Data System (ADS)

    Dey, Pinkee; Suslov, Sergey A.

    2016-12-01

    A finite amplitude instability has been analysed to discover the exact mechanism leading to the appearance of stationary magnetoconvection patterns in a vertical layer of a non-conducting ferrofluid heated from the side and placed in an external magnetic field perpendicular to the walls. The physical results have been obtained using a version of a weakly nonlinear analysis that is based on the disturbance amplitude expansion. It enables a low-dimensional reduction of a full nonlinear problem in supercritical regimes away from a bifurcation point. The details of the reduction are given in comparison with traditional small-parameter expansions. It is also demonstrated that Squire's transformation can be introduced for higher-order nonlinear terms, thus reducing the full three-dimensional problem to its equivalent two-dimensional counterpart and enabling significant computational savings. The full three-dimensional instability patterns are subsequently recovered using the inverse transforms. The analysed stationary thermomagnetic instability is shown to occur as a result of a supercritical pitchfork bifurcation.
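
    For orientation, a generic amplitude-expansion reduction of this kind (a schematic form, not the paper's specific equations) writes the disturbance as a fundamental mode plus corrections ordered by its own amplitude, with the amplitude governed by a Landau-type equation whose coefficients are evaluated at the actual, possibly strongly supercritical, parameter values rather than at the bifurcation point:

      \[
        \mathbf{q}'(x,z,t) \;=\; A(t)\,\mathbf{q}_1(x)\,e^{\mathrm{i}\alpha z} + \mathrm{c.c.}
          \;+\; \text{higher-order corrections in } |A|,
        \qquad
        \frac{\mathrm{d}A}{\mathrm{d}t} \;=\; \sigma A + K_1 |A|^2 A + K_2 |A|^4 A + \cdots,
      \]

    where \(\sigma\) is the linear growth rate of the fundamental mode with wavenumber \(\alpha\) and \(K_1, K_2, \dots\) are Landau-type coefficients determined from the successive higher-order problems.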

  16. Introduction and Highlights of the Workshop

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Venneri, Samuel L.

    1997-01-01

    Four generations of CAD/CAM systems can be identified, corresponding to changes in both modeling functionality and software architecture. The systems evolved from 2D drafting and wireframes to solid modeling, then to parametric/variational modelers, and finally to the current simulation-embedded systems. Recent developments have enabled design engineers to perform many of the complex analysis tasks typically performed by analysis experts. Some of the characteristics of the current and emerging CAD/CAM/CAE systems are described in subsequent presentations. The focus of the workshop is on the potential of CAD/CAM/CAE systems for use in simulating the entire mission and life cycle of future aerospace systems, and on the developments needed to realize this potential. First, the major features of the emerging computing, communication and networking environment are outlined; second, the characteristics and design drivers of future aerospace systems are identified; third, the concept of the intelligent synthesis environment being planned by NASA, the UVA ACT Center and JPL is presented; and fourth, the objectives and format of the workshop are outlined.

  17. RI 1170 advanced strapdown gyro

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The major components of the RI 1170 gyroscope are described. A detailed functional description of the electronics, including block diagrams and photographs of output waveshapes within the loop electronics, is presented. An electronic data flow diagram is included. Those gyro subassemblies that were originally planned and subsequently changed or modified are discussed in detail. Variations to the original design included the capacitive pickoffs, torquer flexleads, magnetic suspension, gas bearings, electronic design, and packaging. The selection of components and the changes from the original design are discussed. Device failures experienced throughout the program are reported, and design corrections to eliminate the failure modes are noted. Major design deficiencies, such as those of the MSE electronics, are described in detail. Modifications made to the gas bearing parts and design improvements to the wheel are noted. Changes to the gas bearing prints are included, as well as a computer-based mathematical analysis of the 1170 gas bearing wheel. The mean free-path effects on gas bearing performance are summarized.

  18. Neuroimaging with functional near infrared spectroscopy: From formation to interpretation

    NASA Astrophysics Data System (ADS)

    Herrera-Vega, Javier; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe

    2017-09-01

    Functional Near Infrared Spectroscopy (fNIRS) is gaining momentum as a functional neuroimaging modality to investigate the cerebral hemodynamics subsequent to neural metabolism. Like other neuroimaging modalities, it is a tool for neuroscience to understand brain system function at the behavioural and cognitive levels. To extract useful knowledge from functional neuroimages it is critical to understand the series of transformations applied during the process of information retrieval and how they bound the interpretation. This process starts with the irradiation of the head tissues with infrared light to obtain the raw neuroimage, and proceeds with computational and statistical analyses that reveal hidden associations between pixel intensities and the neural activity they encode, ending with the explanation of some particular aspect of brain function. Extensive literature addresses each individual step of this process separately. This paper overviews the complete transformation sequence, through image formation, reconstruction and analysis, to provide insight into the final functional interpretation.
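
    One concrete link in that transformation chain is the conversion of measured optical-density changes into haemoglobin concentration changes via the modified Beer-Lambert law. The Python sketch below uses two wavelengths with illustrative extinction coefficients and differential pathlength factors (real analyses take these from standard tables and the probe geometry), so it should be read as a schematic of the step rather than a validated analysis routine.

      import numpy as np

      # Illustrative extinction coefficients [1/(mM*cm)] at 760 nm and 850 nm;
      # the values here are approximate stand-ins for the tabulated ones.
      EPS = np.array([[1.4866, 3.8437],    # 760 nm: [HbO, HbR]
                      [2.5264, 1.7986]])   # 850 nm: [HbO, HbR]

      def mbll(delta_od, distance_cm=3.0, dpf=(6.0, 6.0)):
          """Modified Beer-Lambert law: optical-density changes at two wavelengths
          -> concentration changes of oxy- and deoxy-haemoglobin (mM)."""
          L = EPS * (distance_cm * np.asarray(dpf)[:, None])   # effective path lengths
          return np.linalg.solve(L, np.asarray(delta_od))

      # Hypothetical measurement: OD falls at 760 nm and rises at 850 nm,
      # consistent with increased HbO and decreased HbR during activation.
      d_hbo, d_hbr = mbll(delta_od=[-0.002, 0.004])
      print(f"dHbO = {d_hbo * 1000:+.3f} uM, dHbR = {d_hbr * 1000:+.3f} uM")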

  19. Process yield improvements with process control terminal for varian serial ion implanters

    NASA Astrophysics Data System (ADS)

    Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken

    Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, i.e., wrong dose or species, double implants, and missed implants. Process Control Terminals (PCTs) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing events. This paper describes those misprocessing events and their subsequent reduction with the use of PCTs. Reliable and simple process control with serial-process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced, and those that do occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact of these capabilities is increased productivity and hence higher device yield.
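
    As a loose illustration of what a user-selectable interlock amounts to in software, the sketch below checks a hypothetical machine state against a hypothetical recipe and returns any violations to be logged. The parameter names and tolerances are invented for the example and do not describe the actual Varian PCT implementation.

      from dataclasses import dataclass

      @dataclass
      class Recipe:
          """Hypothetical implant recipe as a process engineer might define it."""
          species: str
          energy_kev: float
          dose_cm2: float

      @dataclass
      class MachineState:
          species: str
          energy_kev: float
          dose_cm2: float
          lot_already_implanted: bool

      def check_interlocks(recipe: Recipe, state: MachineState) -> list[str]:
          """Return interlock violations; an empty list means the implant may start."""
          errors = []
          if state.species != recipe.species:
              errors.append(f"wrong species: {state.species} (recipe: {recipe.species})")
          if abs(state.energy_kev - recipe.energy_kev) > 0.01 * recipe.energy_kev:
              errors.append("beam energy outside 1% tolerance")
          if abs(state.dose_cm2 - recipe.dose_cm2) > 0.02 * recipe.dose_cm2:
              errors.append("dose setpoint outside 2% tolerance")
          if state.lot_already_implanted:
              errors.append("double implant: lot already processed with this recipe")
          return errors

      recipe = Recipe("B+", 80.0, 5e14)
      state = MachineState("P+", 80.0, 5e14, lot_already_implanted=False)
      for violation in check_interlocks(recipe, state):
          print("INTERLOCK:", violation)   # logged for later analysis, implant inhibited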

  20. Characterization of the Acoustic Radiation Properties of Laminated and Sandwich Composite Panels in Thermal Environment

    NASA Astrophysics Data System (ADS)

    Sharma, Nitin; Ranjan Mahapatra, Trupti; Panda, Subrata Kumar; Sahu, Pruthwiraj

    2018-03-01

    In this article, the acoustic radiation characteristics of laminated and sandwich composite spherical panels subjected to harmonic point excitation in a thermal environment are investigated. The finite element (FE) simulation model of the vibrating panel structure is developed in ANSYS using ANSYS parametric design language (APDL) code. Initially, the critical buckling temperatures of the considered structures are obtained and the temperature loads are assigned accordingly. Then, the modal analysis of the thermally stressed panels is performed and the thermo-elastic free vibration responses so obtained are validated against benchmark solutions. Subsequently, an indirect boundary element (BE) method is utilized in a coupled FE-BE analysis to compute the sound radiation properties of the panel structure. The agreement of the present sound power responses with results available in the published literature establishes the validity of the proposed scheme. Finally, the scheme is extended to solve several numerical examples to bring out the influence of various parameters on the thermo-acoustic characteristics of laminated composite panels.
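
    A generic form of the final post-processing step in such a coupled FE-BE calculation (a schematic, not necessarily the exact formulation used here) computes the radiated sound power from the surface pressure returned by the BE solver and the normal velocity of the thermally pre-stressed FE model:

      \[
        W \;=\; \tfrac{1}{2}\,\mathrm{Re}\!\int_S p(\mathbf{x})\, v_n^{*}(\mathbf{x})\,\mathrm{d}S
          \;\approx\; \tfrac{1}{2}\,\mathrm{Re}\!\left(\mathbf{v}_n^{\mathrm{H}}\,\mathbf{S}\,\mathbf{p}\right),
        \qquad
        L_W \;=\; 10 \log_{10}\!\left(\frac{W}{10^{-12}\,\mathrm{W}}\right),
      \]

    where \(\mathbf{p}\) and \(\mathbf{v}_n\) collect the element-wise surface pressures and normal velocities, \(\mathbf{S}\) is a diagonal matrix of boundary-element areas, and \(L_W\) is the sound power level usually reported.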
