Sample records for comparing computed results

  1. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    ERIC Educational Resources Information Center

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  2. Heats of Segregation of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.
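
    The record leaves the sign convention implicit; a common definition of the dilute-limit segregation energy is the total-energy difference between the configuration with the solute at the surface and the one with the solute in the bulk,

    \[
    E_{\mathrm{seg}} = E_{\mathrm{solute\ at\ surface}} - E_{\mathrm{solute\ in\ bulk}},
    \]

    so that a negative value favors segregation of the solute to the surface.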

  3. Computer Education for Engineers, Part III.

    ERIC Educational Resources Information Center

    McCullough, Earl S.; Lofy, Frank J.

    1989-01-01

    Reports the results of the third survey of computer use in engineering education, conducted in the fall of 1987, and compares them with 1981 and 1984 results. Summarizes survey data on computer course credits, languages, equipment use, CAD/CAM instruction, faculty access, and computer graphics. (YP)

  4. 3D CFD Quantification of the Performance of a Multi-Megawatt Wind Turbine

    NASA Astrophysics Data System (ADS)

    Laursen, J.; Enevoldsen, P.; Hjort, S.

    2007-07-01

    This paper presents the results of 3D CFD rotor computations of a Siemens SWT-2.3-93 variable speed wind turbine with 45m blades. In the paper CFD is applied to a rotor at stationary wind conditions without wind shear, using the commercial multi-purpose CFD-solvers ANSYS CFX 10.0 and 11.0. When comparing modelled mechanical effects with findings from other models and measurements, good agreement is obtained. Similarly the computed force distributions compare very well, whereas some discrepancies are found when comparing with an in-house BEM model. By applying the reduced axial velocity method the local angle of attack has been derived from the CFD solutions, and from this knowledge and the computed force distributions, local airfoil profile coefficients have been computed and compared to BEM airfoil coefficients. Finally, the transition model of Langtry and Menter is tested on the rotor, and the results are compared with the results from the fully turbulent setup.

  5. Proof test of the computer program BUCKY for plasticity problems

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1994-01-01

    A theoretical equation describing the elastic-plastic deformation of a cantilever beam subject to a constant pressure is developed. The theoretical result is compared numerically to the computer program BUCKY for the case of an elastic-perfectly plastic specimen. It is shown that the theoretical and numerical results compare favorably in the plastic range. Comparisons are made to another research code to further validate the BUCKY results. This paper serves as a quality test for the computer program BUCKY developed at NASA Johnson Space Center.
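
    The record does not reproduce the paper's closed-form solution; for orientation only, elastic-perfectly plastic beam theory for a rectangular cross-section (width \(b\), depth \(h\), yield stress \(\sigma_y\)) bounds the elastic-plastic range between the first-yield and fully plastic moments,

    \[
    M_y = \frac{\sigma_y b h^2}{6}, \qquad M_p = \frac{\sigma_y b h^2}{4},
    \]

    giving the familiar shape factor \(M_p/M_y = 1.5\) for this section.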

  6. Computation of shock wave/target interaction

    NASA Technical Reports Server (NTRS)

    Mark, A.; Kutler, P.

    1983-01-01

    Computational results of shock waves impinging on targets and the ensuing diffraction flowfield are presented. A number of two-dimensional cases are computed with finite difference techniques. The classical case of a shock wave/cylinder interaction is compared with shock tube data and shows the quality of the computations on a pressure-time plot. Similar results are obtained for a shock wave/rectangular body interaction. Here resolution becomes important, and the use of grid clustering techniques tends to show good agreement with experimental data. Computational results are also compared with pressure data resulting from shock impingement experiments for a complicated truck-like geometry. Of significance here are the grid generation and clustering techniques used. For these very complicated bodies, grids are generated by numerically solving a set of elliptic partial differential equations.

  7. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and available experimental results.

  8. FUN3D Analyses in Support of the Second Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Heeg, Jennifer

    2016-01-01

    This paper presents the computational aeroelastic results generated in support of the second Aeroelastic Prediction Workshop for the Benchmark Supercritical Wing (BSCW) configurations and compares them to the experimental data. The computational results are obtained using FUN3D, an unstructured grid Reynolds-Averaged Navier-Stokes solver developed at NASA Langley Research Center. The analysis results include aerodynamic coefficients and surface pressures obtained for steady-state, static aeroelastic equilibrium, and unsteady flow due to a pitching wing or flutter prediction. Frequency response functions of the pressure coefficients with respect to the angular displacement are computed and compared with the experimental data. The effects of spatial and temporal convergence on the computational results are examined.
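
    The abstract does not detail how the frequency response functions are estimated; a standard approach is the H1 estimator, the cross-spectrum of input (angular displacement) and output (pressure coefficient) divided by the input auto-spectrum. The sketch below illustrates this on synthetic signals; the sampling rate, signal forms, and variable names are assumptions, not values from the BSCW tests.

```python
import numpy as np
from scipy.signal import csd, welch

# Hypothetical sampled signals: theta(t) = pitch angle, cp(t) = unsteady
# pressure coefficient at one tap (illustrative stand-ins, not FUN3D output).
fs = 1000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 10 * t)           # 10 Hz pitching motion
cp = 0.3 * np.sin(2 * np.pi * 10 * t - 0.4) + 0.01 * np.random.randn(t.size)

# H1 estimator: cross-spectrum of input and output over input auto-spectrum.
f, Pxy = csd(theta, cp, fs=fs, nperseg=2048)
_, Pxx = welch(theta, fs=fs, nperseg=2048)
H = Pxy / Pxx                                # complex FRF: gain and phase

i = np.argmin(np.abs(f - 10.0))              # response at the forcing frequency
print(f"gain {abs(H[i]):.3f}, phase {np.degrees(np.angle(H[i])):.1f} deg")
```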

  9. Heats of Segregation of BCC Binaries from ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2004-01-01

    We compare dilute-limit heats of segregation for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent LMTO-based parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation, while the ab initio calculations are performed without relaxation. Results are discussed within the context of a segregation model driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  10. Focus on Clinical Research: Cognitive Rehabilitation of Severely Closed-Head-Injured Patients Using Computer-Assisted and Noncomputerized Treatment Techniques.

    ERIC Educational Resources Information Center

    Batchelor, J.; And Others

    1988-01-01

    The study compared computer assisted cognitive retraining of 47 patients with severe closed head injury with comparable noncomputerized treatment techniques. Results on neuropsychological tests did not support the increased effectiveness of the computer assisted cognitive therapy. (DB)

  11. Comparison of Computational Results with a Low-g, Nitrogen Slosh and Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Stewart, Mark; Moder, Jeff

    2015-01-01

    The proposed paper will compare a fluid/thermal simulation, in FLUENT, with a low-g, nitrogen slosh experiment. The French Space Agency, CNES, performed cryogenic nitrogen experiments in several zero gravity aircraft campaigns. The computational results have been compared with high-speed photographic data, pressure data, and temperature data from sensors on the axis of the cylindrically shaped tank. The comparison between these experimental and computational results is generally favorable: the initial temperature stratification is in good agreement, and the two-phase fluid motion is qualitatively captured.

  12. A comparison of computer-generated lift and drag polars for a Wortmann airfoil to flight and wind tunnel results

    NASA Technical Reports Server (NTRS)

    Bowers, A. H.; Sandlin, D. R.

    1984-01-01

    Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared to both wind tunnel and flight results. Excellent correlation is shown to exist between computations and flight results except when separated flow regimes were encountered. Wind tunnel transition locations are shown to agree with computed predictions. Smoothness of the input coordinates to the PROFILE airfoil analysis computer program was found to be essential to obtain accurate comparisons of drag polars or transition location to either the flight or wind tunnel results.

  13. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
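
    The record does not define the metrics themselves; as a minimal illustration of the idea, the sketch below computes a generic normalized RMS misfit between two codes' time series at the same fault location (this is not the specific metric of Barall and Harris, and the signals are synthetic).

```python
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

def rms_misfit(a, b):
    """Normalized RMS difference between two simulation time series:
    0 means identical results; larger values mean stronger disagreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = 0.5 * (rms(a) + rms(b))
    return rms(a - b) / denom if denom > 0 else 0.0

# Hypothetical slip-rate histories from two rupture codes at one fault node.
t = np.linspace(0.0, 5.0, 501)
code_a = np.exp(-((t - 2.00) ** 2) / 0.1)
code_b = np.exp(-((t - 2.05) ** 2) / 0.1)    # slightly time-shifted result
print(f"misfit = {rms_misfit(code_a, code_b):.3f}")
```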

  14. Computer measurement of particle sizes in electron microscope images

    NASA Technical Reports Server (NTRS)

    Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.

    1976-01-01

    Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting within a reasonable error and computer processing time is feasible. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.

  15. Comparability of Computer- and Paper-Administered Multiple-Choice Tests for K-12 Populations: A Synthesis

    ERIC Educational Resources Information Center

    Kingston, Neal M.

    2009-01-01

    There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study) the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…

  16. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to those of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
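
    The abstract reports "approximately a linear speed-up" without the underlying arithmetic; the quantities involved are simply the speed-up \(S_n = T_1/T_n\) and the parallel efficiency \(S_n/n\). A toy computation (timing values invented for illustration, not measurements from the BOINC alignment system):

```python
# Speed-up and parallel efficiency from wall-clock timings; the timing
# values below are invented for illustration.
timings = {1: 3600.0, 2: 1850.0, 4: 940.0, 8: 480.0}   # workers -> seconds

t1 = timings[1]
for n, tn in sorted(timings.items()):
    speedup = t1 / tn
    print(f"{n:2d} workers: speed-up {speedup:4.2f}x, efficiency {speedup / n:.0%}")
```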

  17. The use of conduction model in laser weld profile computation

    NASA Astrophysics Data System (ADS)

    Grabas, Bogusław

    2007-02-01

    Profiles of joints resulting from deep penetration laser beam welding of a flat workpiece of carbon steel were computed. A semi-analytical conduction model solved with the Green's function method was used in the computations. In the model, the moving heat source was attenuated exponentially in accordance with the Beer-Lambert law. Computational results were compared with experimental results.

  18. The comparison of various approach to evaluation erosion risks and design control erosion measures

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri

    2015-04-01

    At present, the Czech Republic has a single methodology for computing and comparing erosion risks, which also includes a method for designing erosion control measures. The methodology is based on the Universal Soil Loss Equation (USLE) and its result, the long-term average annual rate of erosion (G), and is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many problems and damages arise from local erosion episodes. The extent of these events and their impact depend on local precipitation, the current crop growth stage, and soil conditions. Such erosion events can cause damage to agricultural land, municipal property, and hydraulic structures even at locations that appear to be in good condition from the point of view of the long-term average annual erosion rate. An alternative way to compute and compare erosion risks is an event-based (episode) approach. This paper compares various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events had been recorded. The study area is a simple agricultural parcel without barriers that could strongly influence water flow and sediment transport. The computation of erosion risks for all methodologies was based on laboratory analyses of soil samples taken from the study area. Results from the USLE and MUSLE methodologies and from the mathematical model Erosion 3D were compared. Variances in the spatial distribution of the locations with the highest soil erosion were compared and discussed. A further part presents variances in the designed erosion control measures resulting from the different methodologies. The results show that the computed erosion risks vary with the methodology used, which may open a discussion about how to compute and evaluate erosion risks in areas of differing importance.
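
    For reference, the USLE underlying the Czech methodology estimates the long-term average annual soil loss as a product of six empirical factors,

    \[
    A = R \cdot K \cdot L \cdot S \cdot C \cdot P,
    \]

    where \(R\) is rainfall erosivity, \(K\) soil erodibility, \(L\) and \(S\) the slope length and steepness factors, \(C\) cover management, and \(P\) support practice. MUSLE replaces the rainfall-erosivity term with a runoff-based term, which is what makes single-event (episode) evaluation possible.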

  19. A comparison of Wortmann airfoil computer-generated lift and drag polars with flight and wind tunnel results

    NASA Technical Reports Server (NTRS)

    Bowers, A. H.; Sim, A. G.

    1984-01-01

    Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared with both wind tunnel and flight test results. Excellent correlation was shown to exist between computations and flight results except when separated flow regimes were encountered. Smoothness of the input coordinates to the PROFILE computer program was found to be essential to obtain accurate comparisons of drag polars or transition location to either the flight or wind tunnel results.

  20. Mid-term survival analysis of closed wedge high tibial osteotomy: A comparative study of computer-assisted and conventional techniques.

    PubMed

    Bae, Dae Kyung; Song, Sang Jun; Kim, Kang Il; Hur, Dong; Jeong, Ho Yeon

    2016-03-01

    The purpose of the present study was to compare the clinical and radiographic results and survival rates between computer-assisted and conventional closing wedge high tibial osteotomies (HTOs). Data from a consecutive cohort comprised of 75 computer-assisted HTOs and 75 conventional HTOs were retrospectively reviewed. The Knee Society knee and function scores, Hospital for Special Surgery (HSS) score, and femorotibial angle (FTA) were compared between the two groups. Survival rates were also compared, with procedure failure as the endpoint. The knee and function scores at one year postoperatively were slightly better in the computer-assisted group than those in the conventional group (90.1 vs. 86.1 and 82.0 vs. 76.0, respectively). The HSS scores at one year postoperatively were slightly better for the computer-assisted HTOs than for the conventional HTOs (89.5 vs. 81.8). The inlier rate of the postoperative FTA was higher in the computer-assisted group than in the conventional HTO group (88.0% vs. 58.7%), and the mean postoperative FTA was greater in the computer-assisted group than in the conventional HTO group (valgus 9.0° vs. valgus 7.6°, p<0.001). The five- and 10-year survival rates were 97.1% and 89.6%, respectively. No difference was detected in nine-year survival rates (p=0.369) between the two groups, although the clinical and radiographic results were better in the computer-assisted group than in the conventional HTO group. Mid-term survival rates did not differ between computer-assisted and conventional HTOs. A comparative analysis of longer-term survival rates is required to demonstrate the long-term benefit of computer-assisted HTO. Level of evidence: III.

  1. Statistical evaluation of the metallurgical test data in the ORR-PSF-PVS irradiation experiment. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stallmann, F.W.

    1984-08-01

    A statistical analysis of Charpy test results of the two-year Pressure Vessel Simulation metallurgical irradiation experiment was performed. Determinations of transition temperature and upper-shelf energy derived from computer fits compare well with eyeball fits. Uncertainties for all results can be obtained with computer fits. The results were compared with predictions in Regulatory Guide 1.99 and other irradiation damage models.

  2. Cloud computing for comparative genomics

    PubMed Central

    2010-01-01

    Background Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786
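
    The quoted figures allow some back-of-envelope unit costs, computed below directly from the abstract's numbers (the only assumption is taking the "just under 70 hours" bound at face value):

```python
# Unit costs derived from the figures quoted in the abstract.
jobs = 300_000        # RSD-cloud processes
nodes = 100           # EC2 compute nodes
hours = 70.0          # total wall-clock time ("just under 70 hours")
cost_usd = 6_302.0

print(f"cost per job:       ${cost_usd / jobs:.4f}")            # ~$0.021
print(f"jobs per node-hour: {jobs / (nodes * hours):.1f}")      # ~42.9
print(f"cost per node-hour: ${cost_usd / (nodes * hours):.2f}") # ~$0.90
```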

  3. A novel patient-specific model to compute coronary fractional flow reserve.

    PubMed

    Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo

    2014-09-01

    The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%.
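
    The record takes the FFR definition as given; clinically it is the ratio of mean distal coronary pressure to mean aortic pressure under hyperemia,

    \[
    \mathrm{FFR} = \frac{\bar{P}_d}{\bar{P}_a},
    \]

    with values below roughly 0.75-0.80 commonly treated as functionally significant. A CT-based simulation such as this one aims to reproduce that ratio non-invasively from the computed pressure field.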

  4. IFCPT S-Duct Grid-Adapted FUN3D Computations for the Third Propulsion Aerodynamics Workshop

    NASA Technical Reports Server (NTRS)

    Davis, Zach S.; Park, M. A.

    2017-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code, FUN3D, to the 3rd AIAA Propulsion Aerodynamics Workshop are described for the diffusing IFCPT S-Duct. Using workshop-supplied grids, results for the baseline S-Duct, baseline S-Duct with Aerodynamic Interface Plane (AIP) rake hardware, and baseline S-Duct with flow control devices are compared with experimental data and results computed with output-based, off-body grid adaptation in FUN3D. Due to the absence of influential geometry components, total pressure recovery is overpredicted on the baseline S-Duct and S-Duct with flow control vanes when compared to experimental values. An estimate for the exact value of total pressure recovery is derived for these cases given an infinitely refined mesh. When results from output-based mesh adaptation are compared with those computed on workshop-supplied grids, a considerable improvement in predicting total pressure recovery is observed. By including more representative geometry, output-based mesh adaptation compares very favorably with experimental data in terms of predicting the total pressure recovery cost-function; whereas, results computed using the workshop-supplied grids are underpredicted.

  5. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
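
    The abstract does not give the identified model; as a sketch of the general idea, an ARMA-style reduced-order model is just a linear difference equation whose coefficients are fit to CFD responses. The coefficients and gust shape below are illustrative assumptions, not the FUN3D-identified values.

```python
import numpy as np

def arma_rom(u, a, b):
    """y[n] = sum_i a[i] * y[n-1-i] + sum_j b[j] * u[n-j]: a generic
    discrete-time ARMA-style model driven by an exogenous gust input u."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        ar = sum(a[i] * y[n - 1 - i] for i in range(len(a)) if n - 1 - i >= 0)
        ma = sum(b[j] * u[n - j] for j in range(len(b)) if n - j >= 0)
        y[n] = ar + ma
    return y

# One-minus-cosine gust input, as in the verification cases.
n = np.arange(400)
u = np.where(n < 100, 0.5 * (1 - np.cos(2 * np.pi * n / 100)), 0.0)
y = arma_rom(u, a=[1.6, -0.65], b=[0.02, 0.01])  # toy stable coefficients
print(f"peak response: {y.max():.4f}")
```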

  6. Calculation of inviscid flow over shuttle-like vehicles at high angles of attack and comparisons with experimental data

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1983-01-01

    A computer code HALIS, designed to compute the three dimensional flow about shuttle like configurations at angles of attack greater than 25 deg, is described. Results from HALIS are compared where possible with an existing flow field code; such comparisons show excellent agreement. Also, HALIS results are compared with experimental pressure distributions on shuttle models over a wide range of angle of attack. These comparisons are excellent. It is demonstrated that the HALIS code can incorporate equilibrium air chemistry in flow field computations.

  7. Super-resolution using a light inception layer in convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mou, Qinyang; Guo, Jun

    2018-04-01

    Recently, several models based on CNN architecture have achieved great results on the Single Image Super-Resolution (SISR) problem. In this paper, we propose an image super-resolution (SR) method using a light inception layer in a convolutional network (LICN). Due to the strong representation ability of our well-designed inception layer, which can learn richer representations with fewer parameters, we can build our model with a shallow architecture that reduces the effect of the vanishing gradients problem and saves computational costs. Our model strikes a balance between computational speed and the quality of the result. Compared with state-of-the-art results, we produce comparable or better results with faster computational speed.

  8. USSR Report: Cybernetics, Computers and Automation Technology. No. 69.

    DTIC Science & Technology

    1983-05-06

    Fragmentary excerpts from the report record: computers in multiprocessor and multistation design, control, and scientific research automation systems; the results of comparing the efficiency of… (Podvizhnaya, Scientific Research Institute of Control Computers, Severodonetsk); the most significant change in the design of the SM-2M compared to… (UPRAVLYAYUSHCHIYE SISTEMY I MASHINY, Nov-Dec 82); Kiev Automated Control System, Design Features and Prospects for Development (V. A…)

  9. Comparing the social skills of students addicted to computer games with normal students.

    PubMed

    Zamani, Eshrat; Kheradmand, Ali; Cheshmi, Maliheh; Abedi, Ahmad; Hedayati, Nasim

    2010-01-01

    This study aimed to investigate and compare the social skills of students addicted to computer games with those of normal students. The dependent variable in the present study is social skills. The study population included all the students in the second grade of public secondary school in the city of Isfahan in the educational year of 2009-2010. The sample size included 564 students selected using the cluster random sampling method. Data collection was conducted using the Questionnaire of Addiction to Computer Games and the Social Skills Questionnaire (the Teenage Inventory of Social Skills, or TISS). The results of the study showed that, generally, there was a significant difference between the social skills of students addicted to computer games and normal students. In addition, the results indicated that normal students had a higher level of social skills in comparison with students addicted to computer games. As the study results showed, addiction to computer games may affect the quality and quantity of social skills. In other words, the higher the addiction to computer games, the lower the social skills. Individuals addicted to computer games have fewer social skills.

  10. Comparing the Social Skills of Students Addicted to Computer Games with Normal Students

    PubMed Central

    Zamani, Eshrat; Kheradmand, Ali; Cheshmi, Maliheh; Abedi, Ahmad; Hedayati, Nasim

    2010-01-01

    Background This study aimed to investigate and compare the social skills of students addicted to computer games with those of normal students. The dependent variable in the present study is social skills. Methods The study population included all the students in the second grade of public secondary school in the city of Isfahan in the educational year of 2009-2010. The sample size included 564 students selected using the cluster random sampling method. Data collection was conducted using the Questionnaire of Addiction to Computer Games and the Social Skills Questionnaire (the Teenage Inventory of Social Skills, or TISS). Findings The results of the study showed that, generally, there was a significant difference between the social skills of students addicted to computer games and normal students. In addition, the results indicated that normal students had a higher level of social skills in comparison with students addicted to computer games. Conclusion As the study results showed, addiction to computer games may affect the quality and quantity of social skills. In other words, the higher the addiction to computer games, the lower the social skills. Individuals addicted to computer games have fewer social skills. PMID:24494102

  11. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
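
    The parallelization the article relies on is "embarrassingly parallel": permutation replicates are independent, so batches can be farmed to separate workers and their counts summed. A minimal sketch with a two-sample permutation test (synthetic data; a cluster or cloud scheduler would replace the local process pool):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def perm_batch(args):
    """One independent batch of permutation replicates for a two-sample
    mean-difference test; returns how many replicates meet or exceed the
    observed statistic."""
    seed, n_perm, x, y = args
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    obs = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= obs
    return int(count)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.normal(0.0, 1, 50), rng.normal(0.5, 1, 50)  # two sample groups
    batches = [(seed, 2500, x, y) for seed in range(4)]    # 4 independent workers
    with ProcessPoolExecutor(max_workers=4) as pool:
        exceed = sum(pool.map(perm_batch, batches))
    print(f"permutation p-value ~ {exceed / 10_000:.4f}")
```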

  12. Medical students’ attitudes and perspectives regarding novel computer-based practical spot tests compared to traditional practical spot tests

    PubMed Central

    Wijerathne, Buddhika; Rathnayake, Geetha

    2013-01-01

    Background Most universities currently practice traditional practical spot tests to evaluate students. However, traditional methods have several disadvantages. Computer-based examination techniques are becoming more popular among medical educators worldwide. Therefore incorporating the computer interface in practical spot testing is a novel concept that may minimize the shortcomings of traditional methods. Assessing students’ attitudes and perspectives is vital in understanding how students perceive the novel method. Methods One hundred and sixty medical students were randomly allocated to either a computer-based spot test (n=80) or a traditional spot test (n=80). The students rated their attitudes and perspectives regarding the spot test method soon after the test. The results were described comparatively. Results Students had higher positive attitudes towards the computer-based practical spot test compared to the traditional spot test. Their recommendations to introduce the novel practical spot test method for future exams and to other universities were statistically significantly higher. Conclusions The computer-based practical spot test is viewed as more acceptable to students than the traditional spot test. PMID:26451213

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
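
    The "current method" named in the record, convolution with first-derivative Gaussian filters at several scales, is easy to sketch; the scales, spectrum, and noise level below are illustrative assumptions, not the HYDICE settings, and the wavelet variant is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def fingerprint(spectrum, scales=(1, 2, 4, 8, 16)):
    """Multiscale fingerprint: first-derivative-of-Gaussian responses,
    i.e., the baseline method the wavelet algorithms are compared against."""
    return np.stack([gaussian_filter1d(spectrum, s, order=1) for s in scales])

# Hypothetical reflectance spectrum with two absorption features.
bands = np.linspace(0.4, 2.5, 210)
spectrum = (1.0 - 0.3 * np.exp(-((bands - 1.0) ** 2) / 0.002)
                - 0.2 * np.exp(-((bands - 2.2) ** 2) / 0.005))

fp_a = fingerprint(spectrum)
fp_b = fingerprint(spectrum + 0.002 * np.random.randn(bands.size))
# Euclidean distance between fingerprints: the comparison statistic reported.
print("Euclidean distance:", float(np.linalg.norm(fp_a - fp_b)))
```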

  14. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than the requirements for the CFD solution.

  15. Management of health care expenditure by soft computing methodology

    NASA Astrophysics Data System (ADS)

    Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad

    2017-01-01

    In this study, health care expenditure was managed using a soft computing methodology. The main goal was to predict gross domestic product (GDP) from several factors of health care expenditure. Soft computing methodologies were applied since GDP prediction is a very complex task. The performance of the proposed predictors was confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities that avoid local-minimum issues.
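
    The record does not name the competing predictors or the expenditure factors; as a generic sketch of the SVR setup it describes, the snippet below fits an RBF-kernel SVR to synthetic stand-in data and cross-validates it (feature meanings and hyperparameters are assumptions, not the study's choices).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows are observations, columns are hypothetical health
# care expenditure factors; the target plays the role of GDP.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 4))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0.0, 0.2, 120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```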

  16. Preliminary Computational Analysis of the (HIRENASD) Configuration in Preparation for the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Florance, Jennifer P.; Heeg, Jennifer; Wieseman, Carol D.; Perry, Boyd P.

    2011-01-01

    This paper presents preliminary computational aeroelastic analysis results generated in preparation for the first Aeroelastic Prediction Workshop (AePW). These results were produced using FUN3D software developed at NASA Langley and are compared against the experimental data generated during the HIgh REynolds Number Aero-Structural Dynamics (HIRENASD) Project. The HIRENASD wind-tunnel model was tested in the European Transonic Windtunnel in 2006 by Aachen University's Department of Mechanics with funding from the German Research Foundation. The computational effort discussed here was performed (1) to obtain a preliminary assessment of the ability of the FUN3D code to accurately compute physical quantities experimentally measured on the HIRENASD model and (2) to translate the lessons learned from the FUN3D analysis of HIRENASD into a set of initial guidelines for the first AePW, which includes test cases for the HIRENASD model and its experimental data set. This paper compares the computational and experimental results obtained at Mach 0.8 for a Reynolds number of 7 million based on chord, corresponding to the HIRENASD test conditions No. 132 and No. 159. Aerodynamic loads and static aeroelastic displacements are compared at two levels of the grid resolution. Harmonic perturbation numerical results are compared with the experimental data using the magnitude and phase relationship between pressure coefficients and displacement. A dynamic aeroelastic numerical calculation is presented at one wind-tunnel condition in the form of the time history of the generalized displacements. Additional FUN3D validation results are also presented for the AGARD 445.6 wing data set. This wing was tested in the Transonic Dynamics Tunnel and is commonly used in the preliminary benchmarking of computational aeroelastic software.

  17. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models.

    PubMed

    Beard, Brian B; Kainz, Wolfgang

    2004-10-13

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.
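
    The quantity these dosimetry comparisons revolve around is the specific absorption rate, whose standard local definition is

    \[
    \mathrm{SAR} = \frac{\sigma \, |E|^2}{\rho},
    \]

    where \(\sigma\) is the tissue conductivity, \(E\) the RMS electric field, and \(\rho\) the mass density; in practice it is mass-averaged over 1 g or 10 g of tissue, one of the definitional choices that can differ between studies.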

  18. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models

    PubMed Central

    Beard, Brian B; Kainz, Wolfgang

    2004-01-01

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601

  19. Computational Analysis of a Prototype Martian Rotorcraft Experiment

    NASA Technical Reports Server (NTRS)

    Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.

    2002-01-01

    This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.

  20. Retroperitoneal tumour radiotherapy: clinical improvements using kilovoltage cone beam computed tomography.

    PubMed

    Juan-Senabre, Xavier J; Ferrer-Albiach, Carlos; Rodríguez-Cordón, Marta; Santos-Serra, Agustín; López-Tarjuelo, Juan; Calzada-Feliu, Salvador

    2009-04-01

    We present the clinical case of a patient diagnosed with a retroperitoneal sarcoma who received preoperative treatment with daily verification via computed tomography obtained with a kilovoltage cone beam. We compare the benefit of this treatment with that of conventional treatment without image guidance, reporting quantitative results.

  1. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures.

    PubMed

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results.

  2. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures

    PubMed Central

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results. PMID:29765315

  3. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer assisted design/computer assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer based-systems with the CAD/CAM packages evaluated. (CW)

  4. Computational Modeling for the Flow Over a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun

    1999-01-01

    The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.

  5. Validation of numerical models for flow simulation in labyrinth seals

    NASA Astrophysics Data System (ADS)

    Frączek, D.; Wróblewski, W.

    2016-10-01

    CFD results were compared with the results of experiments for the flow through the labyrinth seal. RANS turbulence models (k-epsilon, k-omega, SST and SST-SAS) were selected for the study. Steady and transient results were analyzed. ANSYS CFX was used for numerical computation. The analysis included flow through a sealing section with a honeycomb land. Leakage flows and velocity profiles in the seal were compared. In addition to the comparison of computational models, the divergence between modeling and experimental results was determined. Tips for modeling these problems were formulated.

  6. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment.

    PubMed

    Boevé, Anja J; Meijer, Rob R; Albers, Casper J; Beetsma, Yta; Bosker, Roel J

    2015-01-01

    The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will yield results similar to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration.

  7. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
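
    The abstract does not show how a hydrologic inverse problem becomes annealer-ready; the general recipe is to encode the binary unknowns (high/low permeability per cell) as a QUBO, the form a quantum annealer minimizes. The toy sketch below builds an illustrative QUBO and solves it by exhaustive search standing in for the annealer; all coefficients are invented, not from the Los Alamos site model.

```python
import itertools
import numpy as np

# Toy QUBO: x[i] = 1 marks cell i as high permeability, 0 as low. Diagonal
# terms play the role of per-cell data misfit; negative off-diagonal terms
# crudely reward neighboring cells both taking the high value.
n = 6
Q = np.zeros((n, n))
np.fill_diagonal(Q, [-1.0, -0.5, 0.2, 0.4, -0.3, 0.1])
for i in range(n - 1):
    Q[i, i + 1] = -0.6

def qubo_energy(x, Q):
    # For binary x, x @ Q @ x sums the diagonal (linear) and coupling terms.
    return float(x @ Q @ x)

# Exhaustive search stands in for the annealer on this tiny problem.
best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n)),
           key=lambda x: qubo_energy(x, Q))
print("best configuration:", best, "energy:", qubo_energy(best, Q))
```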

  8. Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts.

    PubMed

    Lee, S C; Lee, E T; Kingsley, R M; Wang, Y; Russell, D; Klein, R; Warn, A

    2001-04-01

    To investigate whether a computer vision system is comparable with humans in detecting early retinal lesions of diabetic retinopathy using color fundus photographs. A computer system has been developed using image processing and pattern recognition techniques to detect early lesions of diabetic retinopathy (hemorrhages and microaneurysms, hard exudates, and cotton-wool spots). Color fundus photographs obtained from American Indians in Oklahoma were used in developing and testing the system. A set of 369 color fundus slides were used to train the computer system using 3 diagnostic categories: lesions present, questionable, or absent (Y/Q/N). A different set of 428 slides were used to test and evaluate the system, and its diagnostic results were compared with those of 2 human experts: the grader at the University of Wisconsin Fundus Photograph Reading Center (Madison) and a general ophthalmologist. The experiments included comparisons using 3 (Y/Q/N) and 2 diagnostic categories (Y/N) (questionable cases excluded in the latter). In the training phase, the agreement rates, sensitivity, and specificity in detecting the 3 lesions between the retinal specialist and the computer system were all above 90%. The kappa statistics were high (0.75-0.97), indicating excellent agreement between the specialist and the computer system. In the testing phase, the results obtained between the computer system and human experts were consistent with those of the training phase, and they were comparable with those between the human experts. The performance of the computer vision system in diagnosing early retinal lesions was comparable with that of human experts. Therefore, this mobile, electronically easily accessible, and noninvasive computer system could become a mass screening tool and a clinical aid in diagnosing early lesions of diabetic retinopathy.

  9. DSS 14 64-meter antenna. Computed RF pathlength changes under gravity loadings

    NASA Technical Reports Server (NTRS)

    Katow, M. S.

    1981-01-01

    Using a computer model of the reflector structure and its supporting assembly of the 64-m antenna rotating about the elevation axis, the radio frequency (RF) pathlength changes resulting from gravity loadings were computed. A check on the computed values was made by comparing the computed foci offsets with actual field readings of the Z or axial focusing required for elevation angle changes.

  10. A Computational Study of an Oscillating VR-12 Airfoil with a Gurney Flap

    NASA Technical Reports Server (NTRS)

    Rhee, Myung

    2004-01-01

    Computations of the flow over an oscillating airfoil with a Gurney flap are performed using a Reynolds-averaged Navier-Stokes code and compared with recent experimental data. The experimental results were generated for several sizes of Gurney flap; the computations focus mainly on one configuration. The baseline airfoil without a Gurney flap is computed and compared with the experiments in both steady and unsteady cases for the purpose of initial testing of the code performance. The computations are carried out with different turbulence models, and effects of grid refinement are examined in both steady and unsteady cases, in addition to the assessment of solver effects. The results of the comparisons of steady lift and drag computations indicate that the code is reasonably accurate in the attached flow of the steady condition but largely overpredicts the lift and underpredicts the drag in the higher-angle steady flow.

  11. Computational flow predictions for hypersonic drag devices

    NASA Technical Reports Server (NTRS)

    Tokarcik, Susan A.; Venkatapathy, Ethiraj

    1993-01-01

    The effectiveness of two types of hypersonic decelerators is examined: mechanically deployable flares and inflatable ballutes. Computational fluid dynamics (CFD) is used to predict the flowfield around a solid rocket motor (SRM) with a deployed decelerator. The computations are performed with an ideal gas solver using an effective specific heat ratio of 1.15. The results from the ideal gas solver are compared to computational results from a thermochemical nonequilibrium solver. The surface pressure coefficient, the drag, and the extent of the compression corner separation zone predicted by the ideal gas solver compare well with those predicted by the nonequilibrium solver. The ideal gas solver is computationally inexpensive and is shown to be well suited for preliminary design studies. The computed solutions are used to determine the size and shape of the decelerator that are required to achieve a drag coefficient of 5. Heat transfer rates to the SRM and the decelerators are predicted to estimate the amount of thermal protection required.

  12. Navier-Stokes Computations With One-Equation Turbulence Model for Flows Along Concave Wall Surfaces

    NASA Technical Reports Server (NTRS)

    Wang, Chi R.

    2005-01-01

    This report presents the use of a time-marching three-dimensional compressible Navier-Stokes equation numerical solver with a one-equation turbulence model to simulate the flow fields developed along concave wall surfaces without and with a downstream extension flat wall surface. The 3-D Navier-Stokes numerical solver came from the NASA Glenn-HT code. The one-equation turbulence model was derived from the Spalart and Allmaras model. The computational approach was first calibrated with the computations of the velocity and Reynolds shear stress profiles of a steady flat plate boundary layer flow. The computational approach was then used to simulate developing boundary layer flows along concave wall surfaces without and with a downstream extension wall. The author investigated the computational results of surface friction factors, near surface velocity components, near wall temperatures, and a turbulent shear stress component in terms of turbulence modeling, computational mesh configurations, inlet turbulence level, and time iteration step. The computational results were compared with existing measurements of skin friction factors, velocity components, and shear stresses of the developing boundary layer flows. With a fine computational mesh and a one-equation model, the computational approach could predict accurately the skin friction factors, near surface velocity and temperature, and shear stress within the flows. The computed velocity components and shear stresses also showed the vortices effect on the velocity variations over a concave wall. The computed eddy viscosities at the near wall locations were also compared with the results from a two-equation turbulence modeling technique. The inlet turbulence length scale was found to have little effect on the eddy viscosities at locations near the concave wall surface. The eddy viscosities, from the one-equation and two-equation modeling, were comparable at most stream-wise stations. The present one-equation turbulence model is an effective approach for turbulence modeling in the near solid wall surface region of flow over a concave wall.

  13. Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin

    1993-01-01

    The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for projectiles. The FDNS code is used to compute such a flow and the results are compared to benchmark-quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack α = 0. Grid-independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.

  14. Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    PubMed Central

    Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.

    2010-01-01

    Background Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
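
    The runtime-ordering strategy described above can be sketched as longest-jobs-first scheduling onto a fixed pool of cloud workers. The cost model and job sizes below are stand-ins for illustration, not Roundup's actual model:

        # Sketch of runtime-aware job ordering: predict each comparison's
        # runtime, then assign the longest jobs first so equally sized cloud
        # workers finish at nearly the same time.  The cost model
        # (a * size1 * size2) and the job sizes are stand-ins, not Roundup's.
        import heapq

        def predict_runtime(size_a, size_b, a=5e-13):
            return a * size_a * size_b            # hypothetical runtime, hours

        def schedule(jobs, n_workers):
            """Greedy longest-processing-time assignment to n_workers."""
            workers = [(0.0, i, []) for i in range(n_workers)]
            heapq.heapify(workers)
            for name, hours in sorted(jobs, key=lambda j: -j[1]):
                load, i, assigned = heapq.heappop(workers)  # least-loaded worker
                assigned.append(name)
                heapq.heappush(workers, (load + hours, i, assigned))
            return sorted(workers)

        jobs = [(f"cmp{i}", predict_runtime(3e6 * (i % 5 + 1), 2e6))
                for i in range(12)]
        for load, i, assigned in schedule(jobs, 3):
            print(f"worker {i}: {load:5.1f} h  {assigned}")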

  15. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing makes parallel computing come into people’s lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
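
    To make the MapReduce programming model discussed above concrete, here is a toy word-count sketch with explicit map, shuffle, and reduce phases; the use of Python's multiprocessing pool is purely illustrative:

        # Toy MapReduce word count: map and reduce run in a process pool with
        # an explicit shuffle (group-by-key) in between -- the core of the
        # model the paper compares with OpenMP and MPI.
        from collections import defaultdict
        from multiprocessing import Pool

        def map_phase(line):                  # mapper: line -> (word, 1) pairs
            return [(word, 1) for word in line.split()]

        def reduce_phase(item):               # reducer: (word, counts) -> total
            word, counts = item
            return word, sum(counts)

        if __name__ == "__main__":
            lines = ["cloud computing", "parallel computing", "grid computing"]
            with Pool(2) as pool:
                groups = defaultdict(list)    # shuffle: group values by key
                for pairs in pool.map(map_phase, lines):
                    for word, count in pairs:
                        groups[word].append(count)
                print(dict(pool.map(reduce_phase, list(groups.items()))))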

  16. Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabha, H.; Marleau, G.

    2012-07-01

    For computations of fluxes, we have used Carlvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with clusters of fuel pins. This has been implemented in the NXT module of the code DRAGON. The flux distribution in clusters of pins has been computed by using this code. For testing, the results are compared when possible with the EXCELT module of the code DRAGON. Tracks computed in the NXT module are plotted using MATLAB; these plots are also presented here. Results are presented with an increasing number of lines to show the convergence of these results. We have numerically computed volumes, surface areas and the percentage errors in these computations. These results show that 2D results converge faster than 3D results. The accuracy of the computation of fluxes up to the second decimal is achieved with fewer lines. (authors)

  17. Thermodynamic and transport properties of nitrogen fluid: Molecular theory and computer simulations

    NASA Astrophysics Data System (ADS)

    Eskandari Nasrabad, A.; Laghaei, R.

    2018-04-01

    Computer simulations and various theories are applied to compute the thermodynamic and transport properties of nitrogen fluid. To model the nitrogen interaction, an existing potential in the literature is modified to obtain a close agreement between the simulation results and experimental data for the orthobaric densities. We use the Generic van der Waals theory to calculate the mean free volume and apply the results within the modified Cohen-Turnbull relation to obtain the self-diffusion coefficient. Compared to experimental data, excellent results are obtained via computer simulations for the orthobaric densities, the vapor pressure, the equation of state, and the shear viscosity. We analyze the results of the theory and computer simulations for the various thermophysical properties.

  18. A Full Navier-Stokes Analysis of Subsonic Diffuser of a Bifurcated 70/30 Supersonic Inlet for High Speed Civil Transport Application

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A full Navier-Stokes analysis was performed to evaluate the performance of the subsonic diffuser of a NASA Lewis Research Center 70/30 mixed-compression bifurcated supersonic inlet for high speed civil transport application. The PARC3D code was used in the present study. The computations were also performed when approximately 2.5 percent of the engine mass flow was allowed to bypass through the engine bypass doors. The computational results were compared with the available experimental data which consisted of detailed Mach number and total pressure distribution along the entire length of the subsonic diffuser. The total pressure recovery, flow distortion, and crossflow velocity at the engine face were also calculated. The computed surface ramp and cowl pressure distributions were compared with experiments. Overall, the computational results compared well with experimental data. The present CFD analysis demonstrated that the bypass flow improves the total pressure recovery and lessens flow distortions at the engine face.

  19. Analyses of ACPL thermal/fluid conditioning system

    NASA Technical Reports Server (NTRS)

    Stephen, L. A.; Usher, L. H.

    1976-01-01

    Results of engineering analyses are reported. Initial computations were made using a modified control transfer function, where system performance was characterized parametrically using an analytical model. The analytical model was revised to represent the latest expansion chamber fluid manifold design, and system performance predictions were made. Parameters which were independently varied in these computations are listed. The system predictions used to characterize performance are primarily transient computer plots comparing the deviation between the average chamber temperature and the chamber temperature requirement. Additional computer plots were prepared. Results of parametric computations with the latest fluid manifold design are included.

  20. Do Mathematicians Integrate Computer Algebra Systems in University Teaching? Comparing a Literature Review to an International Survey Study

    ERIC Educational Resources Information Center

    Marshall, Neil; Buteau, Chantal; Jarvis, Daniel H.; Lavicza, Zsolt

    2012-01-01

    We present a comparative study of a literature review of 326 selected contributions (Buteau, Marshall, Jarvis & Lavicza, 2010) and an international (US, UK, Hungary) survey of mathematicians (Lavicza, 2008) regarding the use of Computer Algebra Systems (CAS) in post-secondary mathematics education. The comparison results are organized with respect…

  1. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment

    PubMed Central

    Boevé, Anja J.; Meijer, Rob R.; Albers, Casper J.; Beetsma, Yta; Bosker, Roel J.

    2015-01-01

    The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need of extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will produce results similar to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration. PMID:26641632

  2. Inviscid Flow Computations of the Orbital Sciences X-34 Over a Mach Number Range of 1.25 to 6.0

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    2001-01-01

    This report documents the results of an inviscid computational study conducted on the Orbital Sciences X-34 vehicle to compute its inviscid longitudinal aerodynamic characteristics over a Mach number range of 1.25 to 6.0. The unstructured grid software FELISA was used, and the aerodynamic characteristics were computed at Mach numbers 1.25, 1.6, 2.5, 4.0, 4.63, and 6.0, and over an angle of attack range of -4 to 32 degrees. These results were compared with available aerodynamic data from wind tunnel tests on X-34 models. The comparison showed excellent agreement in C(sub N). The computed pitching moment compared well at Mach numbers 2.5 and higher, and at angles of attack of up to 12 deg. The agreement was not good at higher angles of attack, possibly due to viscous effects. At lower Mach numbers there were significant differences between computed and measured C(sub m) values; this could not be explained. Since the present computations are inviscid, the computed C(sub A) was consistently lower than the measured values, as expected.

  3. [The comparison of the expansion of polyps according to the Ki-67 and computed tomography scores].

    PubMed

    Aydin, Sedat; Sanli, Arif; Tezer, Ilter; Hardal, Umit; Barişik, Nagehan Ozdemir

    2009-01-01

    The disease extension in nasal polyps was compared by using the mitotic activity rates and the computed tomography scores. This study was conducted on 19 nasal polyposis patients (8 males, 11 females; mean age 40.0+/-13.7 years; range 20 to 63 years). The preoperative computed tomography records of the patients were evaluated according to the Lund-Mackay grading system. The polyp tissues of the same patients were stained with the Ki-67 antigen for immunohistochemical evaluation. The correlation between the radiologic results and the Ki-67 values was evaluated by means of Spearman's correlation test. The mean computed tomography score was 14.3+/-4.7 (range 7-24). The mean Ki-67 score resulting from the immunohistochemical staining was calculated as 24.3+/-18.5 (range 3.3-73.5%). A significant correlation was determined between the Ki-67 values and the computed tomography scores (Spearman's correlation coefficient: 0.677; p<0.001). As the mitotic activity rate of nasal polyps increases, both the volume of the polyps and the computed tomography scores increase as a result of the blockage of the sinus ostia by the increased polyp volume.
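
    The Spearman rank correlation used above is straightforward to reproduce; in the sketch below the paired Ki-67 and CT values are invented for illustration, not the study's data:

        # Spearman rank correlation between Ki-67 scores and Lund-Mackay CT
        # scores; the paired values are illustrative, not the study's data.
        from scipy import stats

        ki67 = [3.3, 12.0, 18.5, 24.3, 40.1, 73.5]
        ct_score = [7, 10, 13, 14, 19, 24]
        rho, p = stats.spearmanr(ki67, ct_score)
        print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")  # monotone -> rho = 1.0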

  4. Context as Support for Learning Computer Organization

    ERIC Educational Resources Information Center

    Tew, Allison Elliott; Dorn, Brian; Leahy, William D., Jr.; Guzdial, Mark

    2008-01-01

    The ubiquity of personal computational devices in the lives of today's students presents a meaningful context for courses in computer organization beyond the general-purpose or imaginary processors routinely used. This article presents results of a comparative study examining student performance in a conventional organization course and in one…

  5. Evaluating the Effectiveness of an Interactive Multimedia Computer-based Patient Education Program in Cardiac Rehabilitation.

    ERIC Educational Resources Information Center

    Jenny, Ng Yuen Yee; Fai, Tam Sing

    2001-01-01

    A study compared 48 cardiac patients who used an interactive multimedia computer-assisted patient education program and 48 taught by tutorial. The computer-assisted instructional method resulted in significantly better knowledge about exercise and self-management of chronic diseases. (Contains 29 references.) (JOW)

  6. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

    NASA Astrophysics Data System (ADS)

    van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

    2014-11-01

    In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme providing second-order density fluxes and capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

  7. Bringing MapReduce Closer To Data With Active Drives

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.

    2017-12-01

    Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance and the decrease in processor size and power requirements, combined with the increase in data-intensive computing, have created a push to move computation as close to the data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, were proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk, allowing computations to be performed on them without the need to transfer the data to a computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general-purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general-purpose computing with results comparable to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping compute speeds comparable to Hadoop.
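
    Gridding, one of the algorithms mentioned above, fits the Active Drive model well because each drive can reduce its local points to small per-cell aggregates before anything crosses the bus. A hedged sketch of the idea (drive contents and cell size are illustrative):

        # Sketch of gridding as map/reduce: each "drive" reduces its local
        # points to per-cell (sum, count) pairs, so only a tiny aggregate --
        # not the raw data -- leaves the drive.  Contents are illustrative.
        from collections import defaultdict

        def map_on_drive(points, cell=1.0):
            """Runs where the data lives: partial (sum, count) per grid cell."""
            partial = defaultdict(lambda: [0.0, 0])
            for x, y, value in points:
                key = (int(x // cell), int(y // cell))
                partial[key][0] += value
                partial[key][1] += 1
            return dict(partial)

        def reduce_on_host(partials):
            """Merges the small per-drive aggregates into a gridded mean."""
            total = defaultdict(lambda: [0.0, 0])
            for partial in partials:
                for key, (s, n) in partial.items():
                    total[key][0] += s
                    total[key][1] += n
            return {key: s / n for key, (s, n) in total.items()}

        drive1 = [(0.2, 0.3, 1.0), (0.7, 0.1, 3.0), (1.5, 0.5, 2.0)]
        drive2 = [(0.9, 0.8, 5.0), (1.1, 0.2, 4.0)]
        print(reduce_on_host([map_on_drive(d) for d in [drive1, drive2]]))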

  8. Subsonic Analysis of 0.04-Scale F-16XL Models Using an Unstructured Euler Code

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    1996-01-01

    The subsonic flow field about an F-16XL airplane model configuration was investigated with an inviscid unstructured grid technique. The computed surface pressures were compared to wind-tunnel test results at Mach 0.148 for a range of angles of attack from 0 deg to 20 deg. To evaluate the effect of grid dependency on the solution, a grid study was performed in which fine, medium, and coarse grid meshes were generated. The off-surface vortical flow field was locally adapted and showed improved correlation to the wind-tunnel data when compared to the nonadapted flow field. Computational results are also compared to experimental five-hole pressure probe data. A detailed analysis of the off-body computed pressure contours, velocity vectors, and particle traces is presented and discussed.

  9. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  10. Inviscid Flow Computations of the Shuttle Orbiter for Mach 10 and 15 and Angle of Attack 40 to 60 Degrees

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Sutton, Kenneth (Technical Monitor)

    2001-01-01

    This report documents the results of a computational study done to compute the inviscid longitudinal aerodynamic characteristics of the Space Shuttle Orbiter for Mach numbers 10 and 15 at angles of attack of 40, 50, 55, and 60 degrees. These computations were done to provide limited aerodynamic data in support of the Orbiter contingency abort task. The Orbiter had all the control surfaces in the undeflected position. The unstructured grid software FELISA was used for these computations with the equilibrium air option. Normal and axial force coefficients and pitching moment coefficients were computed. The hinge moment coefficients of the body flap and the inboard and outboard elevons were also computed. These results were compared with Orbiter Air Data Book (OADB) data and those computed using GASP. The comparison with the GASP results showed very good agreement in Cm and Ca at all the points. The computed axial force coefficients were smaller than those computed by GASP. There were noticeable differences between the present results and those in the OADB at angles of attack greater than 50 degrees.

  11. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    PubMed Central

    Wen, Qiaoyan; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function. PMID:24982949

  12. Two-cloud-servers-assisted secure outsourcing multiparty computation.

    PubMed

    Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function.

  13. Estimations of global warming potentials from computational chemistry calculations for CH(2)F(2) and other fluorinated methyl species verified by comparison to experiment.

    PubMed

    Blowers, Paul; Hollingshead, Kyle

    2009-05-21

    In this work, the global warming potential (GWP) of methylene fluoride (CH(2)F(2)), or HFC-32, is estimated through computational chemistry methods. We find our computational chemistry approach reproduces well all phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for both partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to superior results to all other previous heat of reaction estimates and most barrier height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered rotor treatment where appropriate led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials compared to experiment. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH(4)); fluoromethane (CH(3)F), or HFC-41; and fluoroform (CHF(3)), or HFC-23], where estimations also compare favorably to experimental values.
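
    For reference, the global warming potential over a horizon H follows from the radiative efficiency and atmospheric lifetime via the standard single-exponential-decay integral. The sketch below uses assumed, approximate values for HFC-32 and the CO2 reference, not the paper's computed results:

        # Sketch of the standard GWP integral for a gas with single-exponential
        # decay: AGWP_x(H) = a_x * tau_x * (1 - exp(-H / tau_x)), normalized by
        # the CO2 reference.  All numbers below are assumed, approximate values
        # for illustration, not the paper's computed results.
        import math

        def agwp(radiative_efficiency, tau, horizon):
            """Absolute GWP (W m^-2 yr kg^-1) for exponential decay."""
            return radiative_efficiency * tau * (1.0 - math.exp(-horizon / tau))

        AGWP_CO2_100YR = 9.2e-14   # assumed CO2 reference, W m^-2 yr kg^-1
        A_HFC32 = 1.2e-11          # assumed radiative efficiency, W m^-2 kg^-1
        TAU_HFC32 = 5.2            # assumed atmospheric lifetime, yr

        gwp_100 = agwp(A_HFC32, TAU_HFC32, 100.0) / AGWP_CO2_100YR
        print(f"GWP(100 yr) of CH2F2 ~ {gwp_100:.0f}")   # on the order of 7e2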

  14. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.

    Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
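
    The strip-theory portion of HydroDyn mentioned above rests on Morison's equation, which splits the inline load on a slender member into inertia and drag terms. A minimal sketch (the diameter, coefficients, and wave kinematics are illustrative, not the OC4 platform's values):

        # Minimal sketch of Morison's equation for the inline force per unit
        # length on a fixed vertical cylinder: inertia term + drag term.
        # Coefficients and flow are illustrative, not the OC4 values.
        import math

        RHO = 1025.0          # sea water density, kg/m^3
        D = 6.5               # member diameter, m (illustrative)
        CM, CD = 2.0, 0.6     # inertia and drag coefficients (illustrative)

        def morison_force_per_length(u, dudt):
            inertia = RHO * CM * math.pi * D**2 / 4.0 * dudt
            drag = 0.5 * RHO * CD * D * u * abs(u)
            return inertia + drag

        # Regular-wave kinematics at one depth: u = U sin(wt)
        U, OMEGA = 1.5, 0.6
        for t in (0.0, 2.5, 5.0):
            u = U * math.sin(OMEGA * t)
            dudt = U * OMEGA * math.cos(OMEGA * t)
            print(f"t={t:4.1f} s  f={morison_force_per_length(u, dudt)/1e3:8.2f} kN/m")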

  15. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western codes," VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computations. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computations of static states with the KORAT-3D and NEU codes and comparison with results of computations with the NESTLE code (USA); these computations were performed in the geometry and with the neutron constants provided by the American party; (3) 3D computations of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.

  16. Experimental and Computational Study of Sonic and Supersonic Jet Plumes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; Naughton, J. W.; Fletcher, D. G.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    Studies of sonic and supersonic jet plumes are relevant to understanding such phenomena as jet noise, plume signatures, and rocket base heating and radiation. Jet plumes are simple to simulate and yet have complex flow structures such as Mach disks, triple points, shear layers, barrel shocks, shock/shear-layer interaction, etc. Experimental and computational simulations of sonic and supersonic jet plumes have been performed for under- and over-expanded, axisymmetric plume conditions. The computational simulations compare very well with the experimental schlieren pictures. Experimental data such as temperature measurements with hot-wire probes are yet to be obtained and will be compared with computed values. Extensive analysis of the computational simulations presents a clear picture of how the complex flow structure develops and the conditions under which self-similar flow structures evolve. From the computations, the plume structure can be further classified into many sub-groups. In the proposed paper, detailed results from the experimental and computational simulations for single, axisymmetric, under- and over-expanded, sonic and supersonic plumes will be compared and the fluid dynamic aspects of the flow structures will be discussed.

  17. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is assessed. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  18. Comparison of hand and semiautomatic tracing methods for creating maxillofacial artificial organs using sequences of computed tomography (CT) and cone beam computed tomography (CBCT) images.

    PubMed

    Szabo, Bence T; Aksoy, Seçil; Repassy, Gabor; Csomo, Krisztian; Dobo-Nagy, Csaba; Orhan, Kaan

    2017-06-09

    The aim of this study was to compare the paranasal sinus volumes obtained by manual and semiautomatic imaging software programs using both CT and CBCT imaging. 121 computed tomography (CT) and 119 cone beam computed tomography (CBCT) examinations were selected from the databases of the authors' institutes. The Digital Imaging and Communications in Medicine (DICOM) images were imported into 3-dimensional imaging software, in which hand mode and semiautomatic tracing methods were used to measure the volumes of both maxillary sinuses and the sphenoid sinus. The determined volumetric means were compared to previously published averages. Isometric CBCT-based volume determination results were closer to the real volume conditions, whereas the non-isometric CT-based volume measurements defined coherently lower volumes. By comparing the 2 volume measurement modes, the values gained from hand mode were closer to the literature data. Furthermore, CBCT-based image measurement results corresponded to the known averages. Our results suggest that CBCT images provide reliable volumetric information that can be depended on for artificial organ construction, and which may aid the guidance of the operator prior to or during the intervention.

  19. Three Dimensional Aerodynamic Analysis of a High-Lift Transport Configuration

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1993-01-01

    Two computational methods, a surface panel method and an Euler method employing unstructured grid methodology, were used to analyze a subsonic transport aircraft in cruise and high-lift conditions. The computational results were compared with two separate sets of flight data obtained for the cruise and high-lift configurations. For the cruise configuration, the surface pressures obtained by the panel method and the Euler method agreed fairly well with results from flight test. However, for the high-lift configuration considerable differences were observed when the computational surface pressures were compared with the results from high-lift flight test. On the lower surface of all the elements with the exception of the slat, both the panel and Euler methods predicted pressures which were in good agreement with flight data. On the upper surface of all the elements the panel method predicted slightly higher suction compared to the Euler method. On the upper surface of the slat, pressure coefficients obtained by both the Euler and panel methods did not agree with the results of the flight tests. A sensitivity study of the upward deflection of the slat from the 40 deg. flap setting suggested that the differences in the slat deflection between the computational model and the flight configuration could be one of the sources of this discrepancy. The computation time for the implicit version of the Euler code was about 1/3 the time taken by the explicit version though the implicit code required 3 times the memory taken by the explicit version.

  20. Computational Model Tracking Primary Electrons, Secondary Electrons, and Ions in the Discharge Chamber of an Ion Engine

    NASA Technical Reports Server (NTRS)

    Mahalingam, Sudhakar; Menart, James A.

    2005-01-01

    Computational modeling of the plasma located in the discharge chamber of an ion engine is an important activity so that the development and design of the next generation of ion engines may be enhanced. In this work a computational tool called XOOPIC is used to model the primary electrons, secondary electrons, and ions inside the discharge chamber. The details of this computational tool are discussed in this paper. Preliminary results from XOOPIC are presented. The results presented include particle number density distributions for the primary electrons, the secondary electrons, and the ions. In addition, the total number of each particle type in the discharge chamber as a function of time, electric potential maps, and magnetic field maps are presented. A primary electron number density plot from PRIMA is given in this paper so that the results of XOOPIC can be compared to it. PRIMA is a computer code that the present investigators have used in much of their previous work and that provides results that compare well to experimental results. PRIMA only models the primary electrons in the discharge chamber. Modeling ions and secondary electrons, as well as the primary electrons, will greatly increase our ability to predict different characteristics of the plasma discharge used in an ion engine.

  1. Comparison of Computational Results with a Low-g, Nitrogen Slosh and Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E.; Moder, Jeffrey P.

    2015-01-01

    This paper compares a fluid/thermal simulation, in Fluent, with a low-g, nitrogen slosh and boiling experiment. In 2010, the French Space Agency, CNES, performed cryogenic nitrogen experiments in a low-g aircraft campaign. From one parabolic flight, a low-g interval was simulated that focuses on low-g motion of nitrogen liquid and vapor with significant condensation, evaporation, and boiling. The computational results are compared with high-speed video, pressure data, heat transfer, and temperature data from sensors on the axis of the cylindrically shaped tank. These experimental and computational results compare favorably. The initial temperature stratification is in good agreement, and the two-phase fluid motion is qualitatively captured. Temperature data is matched except that the temperature sensors are unable to capture fast temperature transients when the sensors move from wet to dry (liquid to vapor) operation. Pressure evolution is approximately captured, but condensation and evaporation rate modeling and prediction need further theoretical analysis.

  2. Reversible Data Hiding Based on DNA Computing

    PubMed Central

    Xie, Yingjie

    2017-01-01

    Biocomputing, especially DNA computing, has seen great development and is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by the algorithm of histogram modification, which is a classical algorithm for reversible data hiding, we combine it with DNA computing to realize this algorithm based on biological technology. Compared with previous results, our experimental results significantly improve the embedding rate (ER). Furthermore, the peak signal-to-noise ratios (PSNR) of some test images are also improved. Experimental results show that the algorithm is suitable for protecting the copyright of the cover image in DNA-based information security. PMID:28280504
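
    The histogram-modification scheme the authors build on embeds bits at the image histogram's peak after shifting the bins between the peak and an empty bin. A compact grayscale sketch of that classical scheme (the paper's DNA-encoding layer is omitted; the sketch assumes an empty bin exists above the peak and that the payload length is tracked separately):

        # Compact sketch of histogram-shift reversible embedding, the classical
        # scheme the paper builds on; the DNA-encoding layer is omitted.
        import numpy as np

        def embed(pixels, bits):
            h = np.bincount(pixels, minlength=256)
            peak = int(h.argmax())                        # capacity = h[peak]
            zero = peak + 1 + int(h[peak + 1:].argmin())  # empty bin above peak
            out, payload = pixels.copy(), iter(bits)
            out[(out > peak) & (out < zero)] += 1         # shift to make room
            for i in np.flatnonzero(pixels == peak):
                b = next(payload, None)
                if b is None:
                    break
                out[i] = peak + b                         # 0 stays, 1 -> peak+1
            return out, peak, zero

        def extract(stego, peak, zero):
            bits = [int(v) - peak for v in stego if peak <= v <= peak + 1]
            restored = stego.copy()
            restored[(stego > peak) & (stego <= zero)] -= 1   # undo everything
            return bits, restored

        img = np.array([5, 7, 7, 6, 7, 8, 9, 7, 6, 5], dtype=np.int64)
        stego, peak, zero = embed(img, [1, 0, 1, 1])
        bits, restored = extract(stego, peak, zero)
        print(bits, bool((restored == img).all()))        # [1, 0, 1, 1] True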

  3. Physician Utilization of a Hospital Information System: A Computer Simulation Model

    PubMed Central

    Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.

    1988-01-01

    The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.

  4. Computer-Based Instruction and Health Professions Education: A Meta-Analysis of Outcomes.

    ERIC Educational Resources Information Center

    Cohen, Peter A.; Dacanay, Lakshmi S.

    1992-01-01

    The meta-analytic techniques of G. V. Glass were used to statistically integrate findings from 47 comparative studies on computer-based instruction (CBI) in health professions education. A clear majority of the studies favored CBI over conventional methods of instruction. Results show higher-order applications of computers to be especially…

  5. An Intervention Study on Mental Computation for Second Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Huang, Ke-Lun

    2014-01-01

    The authors compared the mental computation performance and mental strategies used by an experimental Grade 2 class and a control Grade 2 class before and after instructional intervention. Results indicate that students in the experimental group had better performance on mental computation. The use of mental strategies (counting, separation,…

  6. CBESW: Sequence Alignment on the Playstation 3

    PubMed Central

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-01-01

    Background The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation® 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. Results For large datasets, our implementation on the PlayStation® 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. Conclusion The results from our experiments demonstrate that the PlayStation® 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications. PMID:18798993
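
    For reference, the dynamic-programming recurrence that accelerated ports such as the one above compute is short; a plain-Python Smith-Waterman scoring sketch with a linear gap penalty (scoring values follow the common textbook example, not the paper's benchmark settings):

        # Reference Smith-Waterman local-alignment score (linear gap penalty):
        # the recurrence that the PlayStation 3 implementation accelerates.
        def smith_waterman(a, b, match=3, mismatch=-3, gap=-2):
            rows, cols = len(a) + 1, len(b) + 1
            h = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                              else mismatch)
                    h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                    best = max(best, h[i][j])
            return best

        print(smith_waterman("TGTTACGG", "GGTTGACTA"))  # -> 13 (GTT-AC/GTTGAC)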

  7. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    PubMed

    Yoo, Won-Gyu

    2015-01-01

    [Purpose] This study examined the effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and the CONFORMat system. [Results] The fingertip contact pressure was increased in the high typing speed group compared with the low and medium typing speed groups. The fingertip acceleration was increased in the high typing speed group compared with the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  8. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6, and 84p if using MCNP-5.

  9. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 of that for semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could measure lobar volumes more precisely than the conventional number of segments method. Because semi-automatic and automatic computer-aided diagnosis were complementary, in clinical use it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418

  10. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.

    PubMed

    Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso

    2018-04-22

    This study provides current evidence about cross-section production in the theoretical and experimental results of neutron-induced reactions on a uranium isotope over the projectile energy range of 1-100 MeV, in order to improve the reliability of nuclear simulation. In such fission reactions of 235U within nuclear reactors, a large amount of energy is released, enough to help satisfy worldwide energy needs without the polluting processes of other sources. The main objective of this work is to convey related knowledge of the neutron-induced fission reactions on 235U by describing, analyzing and interpreting the theoretical cross sections obtained from the computer code COMPLET and comparing them with the experimental data obtained from EXFOR. The cross-section values of 235U(n,2n)234U, 235U(n,3n)233U, 235U(n,γ)236U, and 235U(n,f) were obtained using the computer code COMPLET, and the corresponding experimental values were retrieved from EXFOR (IAEA). The theoretical results were compared with the experimental data taken from the EXFOR Data Bank. The computer code COMPLET was used for the analysis with the same set of input parameters, and the graphs were plotted with the help of spreadsheet and Origin-8 software. The quantification of uncertainties stemming from both the experimental data and the computer code calculation plays a significant role in the final evaluated results. The calculated total cross sections were compared with the experimental data from EXFOR in the literature, and good agreement was found between the experimental and theoretical data. This comparison of the calculated data is analyzed and interpreted with tabular and graphical descriptions, and the results are briefly discussed within the text of this research work.

  11. Results from the direct combination of satellite and gravimetric data. [orbit analysis and gravity anomalies

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1974-01-01

    Results have been obtained for the solution of 184 15-deg equal-area blocks directly from the analysis of satellite orbits, and from a combination of the satellite results with terrestrial gravity material. This test computation, made to verify the method, used 17,632 optical observations from ten satellites in 29 arcs averaging seven days in length. Analyses of the satellite results were made by comparing the solved-for anomalies with the terrestrial anomaly set, and by developing the solved-for anomalies into potential coefficients which were compared to the GEM 3 set of potential coefficients to degree 12. These comparisons indicated improvement in each solution as more arcs were added. The programs used in this solution can easily be used to solve for smaller blocks and to handle additional data types. The only limitation will be computer core availability and computer time.

  12. Results of Computer Based Training.

    ERIC Educational Resources Information Center

    1978

    This report compares the projected savings of using computer-based training to conduct training for newly hired pilots to the results of that application. New Hire training, one of a number of programs conducted continuously at the United Airlines Flight Operations Training Center, is designed to assure that any newly hired pilot will be able to…

  13. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  14. Design and development of an automated, portable and handheld tablet personal computer-based data acquisition system for monitoring electromyography signals during rehabilitation.

    PubMed

    Ahamed, Nizam U; Sundaraj, Kenneth; Poo, Tarn S

    2013-03-01

    This article describes the design of a robust, inexpensive, easy-to-use, small, and portable online electromyography acquisition system for monitoring electromyography signals during rehabilitation. This single-channel (one-muscle) system was connected via the universal serial bus port to a programmable Windows operating system handheld tablet personal computer for storage and analysis of the data by the end user. The raw electromyography signals were amplified in order to convert them to an observable scale. The inherent 50 Hz noise (Malaysia) from power-line electromagnetic interference was then eliminated using a single-hybrid IC notch filter. These signals were sampled by a signal processing module and converted into 24-bit digital data. An algorithm was developed and programmed to transmit the digital data to the computer, where it was reassembled and displayed using software. Finally, the device was furnished with a graphical user interface to display the streaming muscle strength signal online on the handheld tablet personal computer. This battery-operated system was tested on the biceps brachii muscles of 20 healthy subjects, and the results were compared to those obtained with a commercial single-channel (one-muscle) electromyography acquisition system. For activities involving muscle contractions, the results obtained using the developed device were found to be comparable (based on a comparison of various statistical parameters) between male and female subjects to those obtained from a commercially available physiological signal monitoring system. In addition, the key advantage of this developed system over conventional desktop personal computer-based acquisition systems is its portability due to the use of a tablet personal computer, on which the results are accessible graphically as well as stored in text (comma-separated value) form.
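
    The 50 Hz mains rejection described above is commonly realized as an IIR notch filter; the device's hardware notch can be approximated in software as sketched below (the sampling rate, Q factor, and synthetic signal are assumptions for illustration):

        # Sketch of the 50 Hz mains-interference notch described above; the
        # sampling rate, Q factor, and synthetic EMG-like signal are assumed.
        import numpy as np
        from scipy.signal import filtfilt, iirnotch

        FS = 1000.0                                  # assumed sampling rate, Hz
        b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)      # notch centred on mains

        t = np.arange(0, 1.0, 1.0 / FS)
        rng = np.random.default_rng(0)
        emg = 0.1 * rng.standard_normal(t.size)      # stand-in muscle signal
        mains = 0.5 * np.sin(2 * np.pi * 50.0 * t)   # power-line interference
        clean = filtfilt(b, a, emg + mains)          # zero-phase filtering

        print(f"variance before: {np.var(emg + mains):.4f}  "
              f"after: {np.var(clean):.4f}")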

  15. Computer System Performance Measurement Techniques for ARTS III Computer Systems

    DOT National Transportation Integrated Search

    1973-12-01

    The potential contribution of direct system measurement in the evolving ARTS 3 Program is discussed and software performance measurement techniques are comparatively assessed in terms of credibility of results, ease of implementation, volume of data,...

  16. Assessment of CFD Estimation of Aerodynamic Characteristics of Basic Reusable Rocket Configurations

    NASA Astrophysics Data System (ADS)

    Fujimoto, Keiichiro; Fujii, Kozo

    Flow-fields around basic SSTO-rocket configurations are numerically simulated by Reynolds-averaged Navier-Stokes (RANS) computations. Simulations of the Apollo-like configuration are carried out first; the results are compared with NASA experiments, and the prediction ability of the RANS simulation is discussed. The angle of attack of the freestream ranges from 0° to 180° and the freestream Mach number ranges from 0.7 to 2.0. Computed aerodynamic coefficients for the Apollo-like configuration agree well with the experiments over a wide range of flow conditions. The flow simulations around the slender Apollo-type configuration are carried out next and the results are compared with the experiments; the computed aerodynamic coefficients again agree well. The flow-fields are dominated by three-dimensional massively separated flow, which must be captured for accurate aerodynamic prediction. Grid refinement effects on the computed aerodynamic coefficients are investigated comprehensively.

  17. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  18. Numerical simulations of the flow with the prescribed displacement of the airfoil and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Řidký, V.; Šidlof, P.; Vlček, V.

    2013-04-01

    The work is devoted to comparing measured data with the results of numerical simulations. A mathematical model without a turbulence model was used for incompressible flow. The experiment observed the behavior of a NACA0015 airfoil in an airflow. The numerical solution was obtained with the OpenFOAM computational package, an open-source code based on the finite volume method. In the numerical solution, the displacement of the airfoil is prescribed to correspond to the experiment. The velocity at a point close to the airfoil surface is compared with the experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about one million orthogonal hexahedral elements, with the time step limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period and the average over all periods are then compared with the experiment.

  19. Imaging of the midpalatal suture in a porcine model: flat-panel volume computed tomography compared with multislice computed tomography.

    PubMed

    Hahn, Wolfram; Fricke-Zech, Susanne; Fialka-Fricke, Julia; Dullin, Christian; Zapf, Antonia; Gruber, Rudolf; Sennhenn-Kirchner, Sabine; Kubein-Meesenburg, Dietmar; Sadat-Khonsari, Reza

    2009-09-01

    An investigation was conducted to compare the image quality of a prototype flat-panel volume computed tomography (fpVCT) system and multislice computed tomography (MSCT) for suture structures. Bone samples were taken from the midpalatal suture of 5 young (16 weeks) and 5 old (200 weeks) Sus scrofa domestica and fixed in formalin solution. An fpVCT prototype and an MSCT scanner were used to obtain images of the specimens. The facial reformations were assessed by 4 observers using a rating scale from 1 (excellent) to 5 (poor) for the weighted criterion of visualization of the suture structure. A linear mixed model was used for statistical analysis. Results with P < .05 were considered statistically significant. The visualization of the suture of young specimens was significantly better than that of older animals (P < .001). The visualization of the suture with fpVCT was significantly better than that with MSCT (P < .001). Compared with MSCT, fpVCT produces superior results in the visualization of the midpalatal suture in a Sus scrofa domestica model.

  20. Estimation of unsteady lift on a pitching airfoil from wake velocity surveys

    NASA Technical Reports Server (NTRS)

    Zaman, K. B. M. Q.; Panda, J.; Rumsey, C. L.

    1993-01-01

    The results of a joint experimental and computational study on the flowfield over a periodically pitched NACA0012 airfoil, and the resultant lift variation, are reported in this paper. The lift variation over a cycle of oscillation, and hence the lift hysteresis loop, is estimated from the velocity distribution in the wake measured or computed for successive phases of the cycle. Experimentally, the estimated lift hysteresis loops are compared with available data from the literature as well as with limited force balance measurements. Computationally, the estimated lift variations are compared with the corresponding variation obtained from the surface pressure distribution. Four analytical formulations for the lift estimation from wake surveys are considered and relative successes of the four are discussed.

  1. Objective structured clinical examination "Death Certificate" station - Computer-based versus conventional exam format.

    PubMed

    Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S

    2018-04-01

    One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced. These new approaches include e-learning modules or SkillsLab stations; one way to objectify the resultant learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, particularly with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station using both a computer-based and a conventional format, half starting with the computer-based format and the other half with the conventional approach in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both the overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction of the repeated-measures effect, the results of both examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g. point deductions for assistance from the medical examiner were possible only at the conventional station). Thus, we demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised format for examination that yields results comparable to those from a conventional format exam. Moreover, the examination results also indicate the need to optimize both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  2. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  3. The Application and Impact of Computer-Generated Personalized Nutrition Education: A Review of the Literature.

    ERIC Educational Resources Information Center

    Brug, Johannes; Campbell, Marci; van Assema, Patricia

    1999-01-01

    Describes the process of providing people with computer-tailored nutrition education and reviews the studies on the impact of this type of education. Results indicate that computer-tailored nutrition education is more likely to be read, remembered, and experienced as personally relevant compared to standard materials. It also appears to have a…

  4. A method for computation of inviscid three-dimensional flow over blunt bodies having large embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1981-01-01

    A computational technique for computing the three-dimensional inviscid flow over blunt bodies having large regions of embedded subsonic flow is detailed. Results, which were obtained using the CDC Cyber 203 vector processing computer, are presented for several analytic shapes with some comparison to experimental data. Finally, windward surface pressure computations over the first third of the Space Shuttle vehicle are compared with experimental data for angles of attack between 25 and 45 degrees.

  5. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
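
    As a hedged illustration of the post-processing step such a procedure implies, the sketch below computes the current-dependent (apparent) inductance L(I) = lambda(I)/I from flux-linkage values of the kind a sequence of 2D finite element solves would return; the sample currents and flux linkages here are invented placeholders, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical excitation currents (A) and flux linkages (Wb-turns),
    # as might be extracted from a sequence of 2D FEA solutions.
    current = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])
    flux_linkage = np.array([2.0e-4, 1.0e-3, 1.9e-3, 3.2e-3, 4.1e-3, 4.5e-3])

    # Apparent inductance L = lambda / I: roughly constant in the linear
    # region, falling off as the ferrite saturates.
    L_apparent = flux_linkage / current

    for i, L in zip(current, L_apparent):
        print(f"I = {i:5.2f} A  ->  L = {L * 1e3:.3f} mH")
    ```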

  6. Sonic and Supersonic Jet Plumes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; Naughton, J. W.; Fletcher, D. G.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    Studies of sonic and supersonic jet plumes are relevant to understanding such phenomena as jet noise, plume signatures, and rocket base heating and radiation. Jet plumes are simple to simulate and yet have complex flow structures such as Mach disks, triple points, shear layers, barrel shocks, and shock/shear-layer interactions. Experimental and computational simulations of sonic and supersonic jet plumes have been performed for under- and over-expanded, axisymmetric plume conditions. The computational simulations compare very well with the experimental schlieren observations. Experimental data such as temperature measurements with hot-wire probes are yet to be acquired and will be compared with computed values. Extensive analysis of the computational simulations presents a clear picture of how the complex flow structure develops and the conditions under which self-similar flow structures evolve. From the computations, the plume structure can be further classified into many sub-groups. In the proposed paper, detailed results from the experimental and computational simulations for single, axisymmetric, under- and over-expanded, sonic and supersonic plumes will be compared, and the fluid-dynamic aspects of the flow structures will be discussed.

  7. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy, and charge throughout the battery has been developed. The computation is coupled with Faraday's law, and solutions for the species concentrations, electrical potential, and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational and electrochemical models used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
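
    A minimal sketch of the time-marching use of Faraday's law described above, for a single lumped control volume with invented parameter values (the real model couples this update to species, energy, and charge balances across several control volumes):

    ```python
    # Faraday's law in a lumped control volume: dn/dt = I / (z * F),
    # i.e. the cell current fixes the rate at which sodium reacts.
    F = 96485.0      # Faraday constant, C/mol
    z = 1            # electrons transferred per Na atom
    I = 10.0         # hypothetical constant discharge current, A
    dt = 1.0         # time step, s
    t_end = 3600.0   # simulate one hour of discharge

    n_na = 5.0       # hypothetical initial moles of sodium in the anode

    for step in range(int(t_end / dt)):
        n_na -= I / (z * F) * dt   # time-marching species update

    print(f"Sodium remaining after 1 h: {n_na:.4f} mol")  # ~4.63 mol
    ```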

  8. Automatically-computed prehospital severity scores are equivalent to scores based on medic documentation.

    PubMed

    Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques

    2008-10-01

    Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQI's), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas-under-the-curve (AUC's) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQI's led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically-computed RTS (using the SQI's) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically-computed scores' AUC's were the same as the medic-based scores' AUC's. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically-computed severity scores, and automatically-computed scores using SQI's are equivalent to medic-based scores.
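
    For illustration, the Triage Revised Trauma Score mentioned above codes three vital signs onto a 0-4 scale and sums them; a minimal sketch using the commonly published intervals (treat the exact bin edges as an assumption here, not a clinical reference):

    ```python
    def code_gcs(gcs: int) -> int:
        """Map Glasgow Coma Scale (3-15) to a 0-4 code."""
        if gcs >= 13: return 4
        if gcs >= 9:  return 3
        if gcs >= 6:  return 2
        if gcs >= 4:  return 1
        return 0

    def code_sbp(sbp: float) -> int:
        """Map systolic blood pressure (mmHg) to a 0-4 code."""
        if sbp > 89: return 4
        if sbp > 75: return 3
        if sbp > 49: return 2
        if sbp > 0:  return 1
        return 0

    def code_rr(rr: float) -> int:
        """Map respiratory rate (breaths/min) to a 0-4 code."""
        if 10 <= rr <= 29: return 4
        if rr > 29:        return 3
        if rr >= 6:        return 2
        if rr > 0:         return 1
        return 0

    def triage_rts(gcs: int, sbp: float, rr: float) -> int:
        """Triage RTS: sum of the three coded values (0 = worst, 12 = best)."""
        return code_gcs(gcs) + code_sbp(sbp) + code_rr(rr)

    print(triage_rts(gcs=14, sbp=110, rr=18))  # -> 12 (least severe)
    ```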

  9. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods: validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors with analytical results.

  10. Results of a Research Evaluating Quality of Computer Science Education

    ERIC Educational Resources Information Center

    Záhorec, Ján; Hašková, Alena; Munk, Michal

    2012-01-01

    The paper presents the results of an international research on a comparative assessment of the current status of computer science education at the secondary level (ISCED 3A) in Slovakia, the Czech Republic, and Belgium. Evaluation was carried out based on 14 specific factors gauging the students' point of view. The authors present qualitative…

  11. Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A Stirling engine digital computer model developed at the NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic-drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.

  12. Three-body Coulomb systems using generalized angular-momentum S states

    NASA Technical Reports Server (NTRS)

    Whitten, R. C.; Sims, J. S.

    1974-01-01

    An expansion of the three-body Coulomb potential in generalized angular-momentum eigenfunctions developed earlier by one of the authors is used to compute energy eigenvalues and eigenfunctions of bound S states of three-body Coulomb systems. The results for He, H(-), e(-)e(+)e(-), and pmu(-)p are compared with the results of other computational approaches.

  13. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

    Report describes some path-following techniques for the solution of nonlinear equations and compares them with other methods. Emphasis in the investigation is on multipurpose techniques applicable at more than one stage of the path-following computation; their incorporation results in a concise computer code that is relatively simple to understand, program, and use. Comparison of the techniques with the method of parametric differentiation (MPD) reveals definite advantages for the path-following methods.

  14. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
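
    The operational approach rests on relationships such as Little's law (N = X·R). As a hedged illustration, the sketch below implements classical exact Mean Value Analysis for a single-class closed queueing network, one standard way such performance measures are computed; the specific algorithm is an assumption for illustration, not necessarily the paper's procedure.

    ```python
    def mva(service, visits, n_jobs, think_time=0.0):
        """Exact Mean Value Analysis for a closed, single-class
        product-form queueing network of queueing stations.

        service: mean service time per visit at each station
        visits:  visit ratios for each station
        """
        K = len(service)
        q = [0.0] * K                      # mean queue lengths
        for n in range(1, n_jobs + 1):
            # Residence time at station k seen by an arriving job,
            # weighted by visit ratio: v_k * s_k * (1 + Q_k(n-1))
            r = [visits[k] * service[k] * (1.0 + q[k]) for k in range(K)]
            X = n / (think_time + sum(r))  # throughput via Little's law
            q = [X * r[k] for k in range(K)]
        return X, q

    # Two stations, 5 circulating jobs, no think time
    X, q = mva(service=[0.05, 0.08], visits=[1.0, 1.0], n_jobs=5)
    print(f"throughput={X:.3f} jobs/s, queues={[round(v, 2) for v in q]}")
    ```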

  15. Deciphering the Landauer-Büttiker Transmission Function from Single Molecule Break Junction Experiments

    NASA Astrophysics Data System (ADS)

    Reuter, Matthew; Tschudi, Stephen

    When investigating the electrical response properties of molecules, experiments often measure conductance, whereas computation predicts transmission probabilities. Although Landauer-Büttiker theory relates the two in the limit of coherent scattering through the molecule, a direct comparison between experiment and computation can still be difficult: experimental data (specifically those from break junctions) are statistical, while computational results are deterministic. Many studies compare the most probable experimental conductance with computation, but such an analysis discards almost all of the experimental statistics. In this work we develop tools to decipher the Landauer-Büttiker transmission function directly from experimental statistics and then apply them to enable a fairer comparison between experimental and computational results.

  16. A review of randomized controlled trials comparing the effectiveness of hand held computers with paper methods for data collection

    PubMed Central

    Lane, Shannon J; Heddle, Nancy M; Arnold, Emmy; Walker, Irwin

    2006-01-01

    Background Handheld computers are increasingly favoured over paper and pencil methods to capture data in clinical research. Methods This study systematically identified and reviewed randomized controlled trials (RCTs) that compared the two methods for self-recording and reporting data, and where at least one of the following outcomes was assessed: data accuracy; timeliness of data capture; and adherence to protocols for data collection. Results A comprehensive keyword search of NLM Gateway's database yielded 9 studies fitting the criteria for inclusion. Data extraction was performed and checked by two of the authors. None of the studies included all outcomes. Overall, the results favor handheld computers over paper and pencil for data collection among study participants, but the data are not uniform across the different outcomes. Handheld computers appear superior in timeliness of receipt and data handling (four of four studies) and are preferred by most subjects (three of four studies). On the other hand, only one of the trials adequately compared adherence to instructions for recording and submission of data (handheld computers were superior), and comparisons of accuracy were inconsistent across the five studies that assessed it. Conclusion Handhelds are an effective alternative to paper and pencil modes of data collection; they are faster and were preferred by most users. PMID:16737535

  17. Comparison of the Hamstring Muscle Activity and Flexion-Relaxation Ratio between Asymptomatic Persons and Computer Work-related Low Back Pain Sufferers.

    PubMed

    Kim, Min-Hee; Yoo, Won-Gyu

    2013-05-01

    [Purpose] The purpose of this study was to compare the hamstring muscle (HAM) activities and flexion-relaxation ratios of an asymptomatic group and a computer work-related low back pain (LBP) group. [Subjects] For this study, we recruited 10 asymptomatic computer workers and 10 computer workers with work-related LBP. [Methods] We measured the RMS activity in each phase of trunk flexion (flexion, full-flexion, and re-extension) and calculated the flexion-relaxation (FR) ratio from the muscle activities of the flexion and full-flexion phases. [Results] In the computer work-related LBP group, HAM activity during the full-flexion phase was increased compared to the asymptomatic group, and the FR ratio was also significantly higher. [Conclusion] We suggest that prolonged sitting by computer workers might cause this change in their HAM activity pattern.

  18. Numerical study of a separating and reattaching flow by using Reynolds-stress turbulence closure

    NASA Technical Reports Server (NTRS)

    Amano, R. S.; Goel, P.

    1983-01-01

    The numerical study of a Reynolds-stress turbulence closure for separating, reattaching, recirculating, and redeveloping flow is summarized. Calculations were made with two different closure models for the pressure-strain correlation, and the results were compared with experimental data. These results were also compared with computations made using previously developed one-layer and three-layer treatments of the k-epsilon turbulence model. In general, the computations with the Reynolds-stress model show better results than those with the k-epsilon model; in particular, some improvement was noticed in the redeveloping region of the separating and reattaching flow in a pipe with sudden expansion.

  19. Comparing In-Class and Out-of-Class Computer-Based Tests to Traditional Paper-and-Pencil Tests in Introductory Psychology Courses

    ERIC Educational Resources Information Center

    Frein, Scott T.

    2011-01-01

    This article describes three experiments comparing paper-and-pencil tests (PPTs) to computer-based tests (CBTs) in terms of test method preferences and student performance. In Experiment 1, students took tests using three methods: PPT in class, CBT in class, and CBT at the time and place of their choosing. Results indicate that test method did not…

  20. Free energy computations employing Jarzynski identity and Wang-Landau algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalyan, M. Suman, E-mail: maroju.sk@gmail.com; Murthy, K. P. N.; School of Physics, University of Hyderabad, Hyderabad, Telangana, India – 500046

    We introduce a simple method to compute free energy differences employing the Jarzynski identity in conjunction with the Wang-Landau algorithm. We demonstrate this method on the Ising spin system by comparing the results with those obtained from canonical sampling.
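
    The Jarzynski identity relates nonequilibrium work to the equilibrium free-energy difference, ΔF = -kT ln⟨exp(-W/kT)⟩, with the average taken over repeated realizations of the switching process. A minimal sketch of the estimator on toy work samples follows; the Gaussian work distribution is an assumption chosen because its exact answer is known (ΔF = ⟨W⟩ - σ²/2kT), not anything from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    kT = 1.0

    # Hypothetical work values from many realizations of a switching process.
    W = rng.normal(loc=2.0, scale=1.0, size=200_000)

    # Jarzynski estimator: dF = -kT * ln< exp(-W/kT) >
    dF = -kT * np.log(np.mean(np.exp(-W / kT)))

    # For Gaussian work, theory gives dF = <W> - var(W) / (2 kT) = 1.5
    print(f"Jarzynski estimate: {dF:.3f}  (Gaussian theory: {2.0 - 1.0 / (2 * kT):.3f})")
    ```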

  1. A computing method for sound propagation through a nonuniform jet stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables, requiring the solution of a system of nonlinear algebraic equations; the nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed, with special emphasis on its suitability for numerical solution on fast computers. The technique is very attractive because the resulting equations are linear in the new variables. The computer solution of such a linear system of algebraic equations may be obtained by well-defined, direct means that are conservative of computer time and storage space. Typical examples are illustrated, and computational results are compared with available numerical and experimental data.

  2. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    ERIC Educational Resources Information Center

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  3. Comparison of Diagnostic Accuracy of Radiation Dose-Equivalent Radiography, Multidetector Computed Tomography and Cone Beam Computed Tomography for Fractures of Adult Cadaveric Wrists

    PubMed Central

    Neubauer, Jakob; Benndorf, Matthias; Reidelbach, Carolin; Krauß, Tobias; Lampert, Florian; Zajonc, Horst; Kotter, Elmar; Langer, Mathias; Fiebich, Martin; Goerke, Sebastian M.

    2016-01-01

    Purpose To compare the diagnostic accuracy of radiography, radiography-equivalent-dose multidetector computed tomography (RED-MDCT), and radiography-equivalent-dose cone beam computed tomography (RED-CBCT) for wrist fractures. Methods As study subjects we obtained 10 cadaveric human hands from body donors. Distal radius, distal ulna, and carpal bones (n = 100) were artificially fractured in random order in a controlled experimental setting. We performed radiation-dose-equivalent radiography (settings as in standard clinical care), RED-MDCT in a 320-row MDCT with single-shot mode, and RED-CBCT in a device dedicated to musculoskeletal imaging. Three raters independently evaluated the resulting images for fractures and the level of confidence for each finding. The gold standard was established by consensus reading of a high-dose MDCT. Results Pooled sensitivity was higher in RED-MDCT (0.89) and RED-CBCT (0.81) compared to radiography (0.54) (P < .004). No significant differences were detected concerning the modalities' specificities (P = .98). Raters' confidence was higher in RED-MDCT and RED-CBCT compared to radiography (P < .001). Conclusion The diagnostic accuracy of RED-MDCT and RED-CBCT for wrist fractures proved to be similar and in some parts even higher compared to radiography. Readers are more confident in their reporting with the cross-sectional modalities. Dose-equivalent cross-sectional computed tomography of the wrist could replace plain radiography for fracture diagnosis in the long run. PMID:27788215

  4. Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study

    PubMed Central

    Chen, Jie; Gutmark, Ephraim

    2013-01-01

    The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions: 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared with the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives a better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides a better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907

  5. Investigations for Supersonic Transports at Transonic and Supersonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Owens, Lewis R.; Wahls, Richard A.

    2007-01-01

    Several computational studies were conducted as part of NASA's High Speed Research program. Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the program are given. The effects of grid topology and the representation of the actual wind tunnel model geometry are also investigated. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher-Reynolds-number wind tunnel data with which to compare the computational results. The transonic wind tunnel data were obtained in the National Transonic Facility at NASA Langley, and the supersonic data were obtained in the Boeing Polysonic Wind Tunnel. The computational data were acquired using a state-of-the-art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax model with Degani-Schiff modifications and the Baldwin-Barth model showing the best agreement for the transonic conditions, and the Spalart-Allmaras model showing the best agreement for the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.

  6. A comparative study of the effects of inhibitor stub length on solid rocket motor combustion chamber pressure oscillations: RSRM at T = 80 seconds, preliminary results

    NASA Technical Reports Server (NTRS)

    Chasman, D.; Burnette, D.; Holt, J.; Farr, R.

    1992-01-01

    Results from a continuing, time-accurate computational study of the combustion gas flow inside the Space Shuttle Redesigned Solid Rocket Motor (RSRM) are presented. These computational fluid dynamic (CFD) analyses duplicate unsteady flow effects which interact in the RSRM to produce pressure oscillations, and resulting thrust oscillations, at nominally 15, 30, and 45 Hz. Results of the Navier-Stokes computations made at mean pressure and flow conditions corresponding to 80 seconds after motor ignition both with and without a protruding, rigid inhibitor at the forward joint cavity are presented here.

  7. A Study of Flow Separation in Transonic Flow Using Inviscid and Viscous Computational Fluid Dynamics (CFD) Schemes

    NASA Technical Reports Server (NTRS)

    Rhodes, J. A.; Tiwari, S. N.; Vonlavante, E.

    1988-01-01

    A comparison of flow separation in transonic flows is made using various computational schemes that solve the Euler and Navier-Stokes equations of fluid mechanics. The flows examined are computed for several simple two-dimensional configurations, including a backward-facing step and a bump in a channel. Comparisons of the results obtained using shock-fitting and flux-vector-splitting methods are presented, and the results obtained using the Euler codes are compared with results for the same configurations from a code that solves the Navier-Stokes equations.

  8. Validation of High-Speed Turbulent Boundary Layer and Shock-Boundary Layer Interaction Computations with the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.

    2006-01-01

    The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions with experimental laser Doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.

  9. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e. g. metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  10. Comparison of High-Fidelity Computational Tools for Wing Design of a Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.

    2017-01-01

    A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project with results presented here are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and for the X-57 high-lift wing compare reasonably well. The goal of the X-57 wing and distributed electric propulsion system design, achieving or exceeding the required C(sub L) = 3.95 for stall speed, was confirmed with all of the computational codes.

  11. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
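
    As a hedged illustration of the sparse representation classification (SRC) idea underlying these schemes (not the authors' exact adaptive variant), the sketch below sparse-codes a test vector over a dictionary of training samples and assigns the class with the smallest class-restricted reconstruction residual:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, y, alpha=0.01):
        """Sparse representation classification.

        D:      (d, n) dictionary whose columns are training samples
        labels: (n,) class label of each dictionary column
        y:      (d,) test sample
        """
        # Sparse-code y over the dictionary (l1-regularized least squares).
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
        coder.fit(D, y)
        x = coder.coef_

        # Reconstruct y using only the coefficients of each class in turn;
        # the class with the smallest residual wins.
        best_class, best_res = None, np.inf
        for c in np.unique(labels):
            xc = np.where(labels == c, x, 0.0)
            res = np.linalg.norm(y - D @ xc)
            if res < best_res:
                best_class, best_res = c, res
        return best_class

    # Toy example: two classes with distinct directions plus noise.
    rng = np.random.default_rng(1)
    atoms = [np.array([1.0, 0, 0, 0])] * 5 + [np.array([0, 0, 0, 1.0])] * 5
    D = np.column_stack(atoms) + 0.05 * rng.normal(size=(4, 10))
    labels = np.array([0] * 5 + [1] * 5)
    y = np.array([0.9, 0.05, 0.0, 0.1])
    print(src_classify(D, labels, y))  # -> 0
    ```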

  12. Comparing student performance on paper- and computer-based math curriculum-based measures.

    PubMed

    Hensley, Kiersten; Rankin, Angelica; Hosp, John

    2017-01-01

    As the number of computerized curriculum-based measurement (CBM) tools increases, it is necessary to examine whether or not student performance can generalize across a variety of test administration modes (i.e., paper or computer). The purpose of this study is to compare math fact fluency on paper versus computer for 197 upper elementary students. Students completed identical sets of probes on paper and on the computer, which were then scored for digits correct, problems correct, and accuracy. Results showed a significant difference in performance between the two sets of probes, with higher fluency rates on the paper probes. Because decisions about levels of student support and interventions often rely on measures such as these, more research in this area is needed to examine the potential differences in student performance between paper-based and computer-based CBMs.
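
    Math CBMs of this kind are typically scored as digits correct: each correct digit in a written answer earns credit even when the final answer is wrong. A minimal sketch of that scoring rule (position-by-position comparison from the ones place is one common convention; treat it as an assumption):

    ```python
    def digits_correct(response: str, answer: str) -> int:
        """Count digits in the response that match the correct answer,
        compared position by position from the right (ones place first)."""
        return sum(1 for rd, ad in zip(response[::-1], answer[::-1]) if rd == ad)

    def score_probe(pairs):
        """pairs: list of (student_response, correct_answer) strings."""
        total_digits = sum(digits_correct(r, a) for r, a in pairs)
        problems_correct = sum(r == a for r, a in pairs)
        accuracy = problems_correct / len(pairs)
        return total_digits, problems_correct, accuracy

    print(score_probe([("154", "154"), ("13", "18"), ("220", "210")]))
    # -> (6, 1, 0.333...): 3 + 1 + 2 digits correct, 1 problem fully correct
    ```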

  13. Correlation of HIFiRE-5 Flight Data with Computed Pressure and Heat Transfer (Postprint)

    DTIC Science & Technology

    2015-06-01

    AFRL-RQ-WP-TP-2015-0149. Jewell, Joseph S.; Miller, James H.; Kimmel, Roger L. (U.S. Air Force Research Laboratory). …results with St were compared to flight heat transfer measurements, and transition locations were inferred. Finally, a computational heat conduction…

  14. An Investigation of the Potential for a Computer-based Tutorial Program Covering the Cardiovascular System to Replace Traditional Lectures.

    ERIC Educational Resources Information Center

    Dewhurst, D. G.; Williams, A. D.

    1998-01-01

    Presents the results of a comparative study to evaluate the effectiveness of two interactive computer-based learning (CBL) programs, covering the cardiovascular system, as an alternative to lectures for first year undergraduate students at a United Kingdom University. Discusses results in relation to the design of evaluative studies and the future…

  15. The Relationship Between Computer Experience and Computerized Cognitive Test Performance Among Older Adults

    PubMed Central

    2013-01-01

    Objective. This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634). Method. Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables. Results. Although computer experience had a significant main effect across all cognitive measures, the effect sizes were similar. After controlling for computer attitudes, the relationship between computer experience and UFOV was fully attenuated. Discussion. Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. Because the relationship between computer experience and UFOV was fully attenuated by computer attitudes, this may imply that motivational factors are more influential to UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance, and this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided. PMID:22929395

  16. FUN3D Analyses in Support of the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Heeg, Jennifer; Wieseman, Carol D.; Florance, Jennifer P.

    2013-01-01

    This paper presents the computational aeroelastic results generated in support of the first Aeroelastic Prediction Workshop for the Benchmark Supercritical Wing (BSCW) and the HIgh REynolds Number AeroStructural Dynamics (HIRENASD) configurations and compares them to the experimental data. The computational results are obtained using FUN3D, an unstructured grid Reynolds-averaged Navier-Stokes solver developed at NASA Langley Research Center. The analysis results for both configurations include aerodynamic coefficients and surface pressures obtained for steady-state or static aeroelastic equilibrium (BSCW and HIRENASD, respectively) and for unsteady flow due to a pitching wing (BSCW) or modally-excited wing (HIRENASD). Frequency response functions of the pressure coefficients with respect to displacement are computed and compared with the experimental data. For the BSCW, the shock location is computed aft of the experimentally-located shock position. The pressure distribution upstream of this shock is in excellent agreement with the experimental data, but the pressure downstream of the shock in the separated flow region does not match as well. For HIRENASD, very good agreement between the numerical results and the experimental data is observed at the mid-span wing locations.

  17. Computer versus paper--does it make any difference in test performance?

    PubMed

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based versions of the Berlin formative Progress Test. In this context, it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time; the longer processing time with the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.

  18. Does Medical Students' Preference of Test Format (Computer-based vs. Paper-based) have an Influence on Performance?

    PubMed Central

    2011-01-01

    Background Computer-based examinations (CBE) ensure higher efficiency with respect to producibility and assessment compared to paper-based examinations (PBE). However, students often have objections to CBE and are afraid of getting poorer results in a CBE. The aims of this study were (1) to assess the readiness of students for, and their objections to, a CBE vs. a PBE, (2) to examine acceptance of and satisfaction with a CBE taken on a voluntary basis, and (3) to compare the results of the examinations conducted in the different formats. Methods Fifth-year medical students were introduced to an examination player and were free to choose the format for their test. The reasons behind the choice of format, as well as satisfaction with the choice, were evaluated after the test with a questionnaire. Additionally, the expected and achieved examination results were measured. Results Out of 98 students, 36 voluntarily chose a CBE (37%) and 62 chose a PBE (63%). The two groups did not differ concerning sex, computer experience, achieved examination results, or satisfaction with the chosen format. Reasons for the students' objections to CBE included the possibility for outlines or written notes and the better overview in a paper-based exam, additional noise from the keyboard, and missing habits normally present in a paper-based exam. Students taking the CBE tended to judge their examination to be clearer and more understandable. Moreover, they saw their results as independent of the format. Conclusions Voluntary computer-based examinations lead to test scores equal to those of a paper-based format. PMID:22026970

  19. CSI computer system/remote interface unit acceptance test results

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC)-programmed, space-flight-qualified computer and a flight data acquisition and filtering computer developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open-loop excitation, closed-loop control, safing, RIU digital filtering, and RIU stand-alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground-based systems in performing real-time control-structure experiments.

  20. Vehicle - Bridge interaction, comparison of two computing models

    NASA Astrophysics Data System (ADS)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped-mass model with 1 degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models, and the results are mutually compared and quantitatively evaluated.
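
    As a hedged sketch of the simpler of the two models described above, the following integrates a single-degree-of-freedom bridge model under a moving constant force; all parameter values are invented for illustration, and a real vehicle model would contribute its own degrees of freedom.

    ```python
    import numpy as np

    # SDOF bridge: m*w'' + c*w' + k*w = F(t), forced while the load is on span.
    m, c, k = 2.0e5, 1.0e4, 8.0e6     # hypothetical mass (kg), damping, stiffness
    P, v, L = 1.0e5, 20.0, 30.0       # load (N), speed (m/s), span length (m)

    dt, t_end = 1e-3, 3.0
    w, wd = 0.0, 0.0                  # mid-span deflection and velocity
    peak = 0.0

    for i in range(int(t_end / dt)):
        t = i * dt
        x = v * t                     # load position along the span
        # Modal-type loading: force weighted by the mode shape at the load
        F = P * np.sin(np.pi * x / L) if 0.0 <= x <= L else 0.0
        wdd = (F - c * wd - k * w) / m
        wd += wdd * dt                # semi-implicit Euler integration
        w += wd * dt
        peak = max(peak, abs(w))

    print(f"peak mid-span deflection: {peak * 1e3:.2f} mm")
    ```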

  1. Plasma cell quantification in bone marrow by computer-assisted image analysis.

    PubMed

    Went, P; Mayer, S; Oberholzer, M; Dirnhofer, S

    2006-09-01

    Minor and major criteria for the diagnosis of multiple myeloma according to the WHO classification include different categories of the bone marrow plasma cell count: a shift from the 10-30% group to the > 30% group equals a shift from a minor to a major criterion, while the < 10% group does not contribute to the diagnosis. The plasma cell fraction in the bone marrow is therefore critical for the classification and optimal clinical management of patients with plasma cell dyscrasias. The aim of this study was (i) to establish a digital image analysis system able to quantify bone marrow plasma cells and (ii) to evaluate two quantification techniques in bone marrow trephines, i.e. computer-assisted digital image analysis and conventional light-microscopic evaluation. The two approaches were compared with regard to inter-observer variation of the obtained results. Eighty-seven patients, 28 with multiple myeloma, 29 with monoclonal gammopathy of undetermined significance, and 30 with reactive plasmocytosis, were included in the study. Plasma cells in H&E- and CD138-stained slides were quantified by two investigators using light-microscopic estimation and computer-assisted digital analysis. The sets of results were correlated with rank correlation coefficients. Patients were categorized according to the WHO criteria addressing the plasma cell content of the bone marrow (group 1: 0-10%, group 2: 11-30%, group 3: > 30%), and the results were compared by kappa statistics. The degree of agreement in CD138-stained slides was higher for results obtained using the computer-assisted image analysis system than for light-microscopic evaluation (corr. coeff. = 0.782), as was seen in the intra-individual (corr. coeff. = 0.960) and inter-individual (corr. coeff. = 0.899) correlations. Inter-observer agreement for categorized results (SM/PW: kappa 0.833) was in a high range. Computer-assisted image analysis demonstrated a higher reproducibility of bone marrow plasma cell quantification. This might be of critical importance for diagnosis, clinical management, and prognostics when plasma cell numbers are low, which makes exact quantification difficult.
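
    The kappa statistics mentioned above quantify inter-observer agreement on the categorized counts beyond chance: Cohen's kappa is (p_o - p_e)/(1 - p_e), where p_o is observed agreement and p_e is chance agreement. A hedged sketch for two raters assigning the three WHO plasma-cell categories (the rating counts are invented for illustration):

    ```python
    import numpy as np

    def cohens_kappa(r1, r2, categories):
        """Unweighted Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        p_o = np.mean(r1 == r2)                    # observed agreement
        p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
        return (p_o - p_e) / (1.0 - p_e)

    # WHO-style categories: 1 = 0-10%, 2 = 11-30%, 3 = > 30% plasma cells
    rater_a = [1, 1, 2, 2, 3, 3, 1, 2, 3, 1]
    rater_b = [1, 1, 2, 3, 3, 3, 1, 2, 2, 1]
    print(f"kappa = {cohens_kappa(rater_a, rater_b, [1, 2, 3]):.3f}")  # ~0.697
    ```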

  2. Turbulence Model Comparisons for Supersonic Transports at Transonic and Supersonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. M. B.; Wahls, R. A.

    2003-01-01

    Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the NASA High-Speed Research program are given. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher-Reynolds-number wind tunnel data with which to compare the computational results. The transonic wind tunnel data were obtained in the National Transonic Facility at NASA Langley, and the supersonic data were obtained in the Boeing Polysonic Wind Tunnel. The computational data were acquired using a state-of-the-art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax model with Degani-Schiff modifications and the Baldwin-Barth model showing the best agreement for the transonic conditions, and the Spalart-Allmaras model showing the best agreement for the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.

  3. Numerical computation of space shuttle orbiter flow field

    NASA Technical Reports Server (NTRS)

    Tannehill, John C.

    1988-01-01

    A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.

  4. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long-vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that, in order for pipeline computers to affect the economic feasibility of large nonlinear analyses, it is essential that algorithms be devised to improve the efficiency of element-level computations.

  5. Computational Predictions of the Performance of Wright 'Bent End' Propellers

    NASA Technical Reports Server (NTRS)

    Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)

    2002-01-01

    Computational analyses of two 1911 Wright brothers 'Bent End' wooden propeller reproductions have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the propeller performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (computational fluid dynamics) code based on the Navier-Stokes equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.

  6. Comparison of RCS prediction techniques, computations and measurements

    NASA Astrophysics Data System (ADS)

    Brand, M. G. E.; Vanewijk, L. J.; Klinker, F.; Schippers, H.

    1992-07-01

    Three calculation methods for predicting the radar cross sections (RCS) of three-dimensional objects are evaluated by computing the radar cross sections of a generic wing inlet configuration. The following methods are applied: a three-dimensional high-frequency method, a three-dimensional boundary element method, and a two-dimensional finite-difference time-domain method. The results of the computations are compared with measurement data.

  7. The Bond Dissociation Energies of 1-Butene

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R. (Technical Monitor)

    1994-01-01

    The bond dissociation energies of 1-butene and several calibration systems are computed using the G2(MP2) approach. The agreement between the calibration systems and experiment is very good. The computed values for 1-butene are compared with calibration systems and the agreement between the computed results for 1-butene and the "rule of thumb" values from the smaller systems is remarkably good.
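
    The bookkeeping behind such values is a simple energy difference, D0(A-B) = E(A) + E(B) - E(AB); a toy example with made-up total energies (not the paper's G2(MP2) numbers):

        # Bond dissociation energy from hypothetical total energies (hartree).
        E_AB = -157.298               # parent molecule (made-up value)
        E_A, E_B = -78.588, -78.586   # fragments (made-up values)
        HARTREE_TO_KCAL = 627.5094740631
        D0 = (E_A + E_B - E_AB) * HARTREE_TO_KCAL
        print(f"D0 = {D0:.1f} kcal/mol")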

  8. CBESW: sequence alignment on the Playstation 3.

    PubMed

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-09-17

    The exponential growth of available biological data has caused bioinformatics to move rapidly towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. For large datasets, our implementation on the PlayStation 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. The results from our experiments demonstrate that the PlayStation 3 console can be used as an efficient low-cost computational platform for high-performance sequence alignment applications.
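
    For reference, the Smith-Waterman recurrence that such accelerators parallelize fits in a few lines; this plain, unoptimized sketch with a linear gap penalty is for illustration only, not the paper's Cell implementation:

        # Smith-Waterman local alignment score (linear gap penalty).
        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i, j] = max(0,                    # local alignment floor
                                  H[i - 1, j - 1] + s,  # match / mismatch
                                  H[i - 1, j] + gap,    # deletion
                                  H[i, j - 1] + gap)    # insertion
                    best = max(best, H[i, j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))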

  9. An experimental and computational investigation of flow in a radial inlet of an industrial pipeline centrifugal compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flathers, M.B.; Bache, G.E.; Rainsberger, R.

    1996-04-01

    The flow field of a complex three-dimensional radial inlet for an industrial pipeline centrifugal compressor has been experimentally determined on a half-scale model. Based on the experimental results, inlet guide vanes have been designed to correct pressure and swirl angle distribution deficiencies. The unvaned and vaned inlets are analyzed with a commercially available fully three-dimensional viscous Navier-Stokes code. Since experimental results were available prior to the numerical study, the unvaned analysis is considered a postdiction while the vaned analysis is considered a prediction. The computational results of the unvaned inlet have been compared to the previously obtained experimental results. The experimental method utilized for the unvaned inlet is repeated for the vaned inlet and the data have been used to verify the computational results. The paper will discuss experimental, design, and computational procedures, grid generation, boundary conditions, and experimental versus computational methods. Agreement between experimental and computational results is very good, both in prediction and postdiction modes. The results of this investigation indicate that CFD offers a measurable advantage in design, schedule, and cost and can be applied to complex, three-dimensional radial inlets.

  10. Computational Investigation of Fluidic Counterflow Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Deere, Karen A.

    1999-01-01

    A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.

  11. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  12. Numerical computation of linear instability of detonations

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry; Kasimov, Aslan

    2017-11-01

    We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
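
    The Dynamic Mode Decomposition step named above turns a sequence of snapshots into eigenvalues whose real and imaginary parts give growth rates and frequencies. A minimal sketch of exact DMD on synthetic data (illustrative only, not the authors' code):

        # Exact DMD sketch: eigenvalues of the best-fit linear map X2 ≈ A @ X1.
        import numpy as np

        def dmd_eigenvalues(X, dt, rank=None):
            X1, X2 = X[:, :-1], X[:, 1:]                  # time-shifted snapshot pairs
            U, s, Vh = np.linalg.svd(X1, full_matrices=False)
            if rank is not None:                          # optional low-rank truncation
                U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
            Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
            mu = np.linalg.eigvals(Atilde)                # discrete-time eigenvalues
            return np.log(mu) / dt                        # Re: growth rate, Im: frequency

        # Synthetic snapshots carrying one decaying oscillation, exp((-0.1 + 5i) t)
        t = np.arange(0, 10, 0.01)
        xg = np.linspace(0.0, 1.0, 64)[:, None]
        decay = np.exp(-0.1 * t)
        X = (np.sin(np.pi * xg) * (decay * np.cos(5 * t))
             - np.sin(2 * np.pi * xg) * (decay * np.sin(5 * t)))
        print(dmd_eigenvalues(X, dt=0.01, rank=2))        # approx. -0.1 ± 5i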

  13. On Computations of Duct Acoustics with Near Cut-Off Frequency

    NASA Technical Reports Server (NTRS)

    Dong, Thomas Z.; Povinelli, Louis A.

    1997-01-01

    The cut-off is a unique feature associated with duct acoustics due to the presence of duct walls. A study of this cut-off effect on the computations of duct acoustics is performed in the present work. The results show that the computation of duct acoustic modes near cut-off requires higher numerical resolutions than others to avoid being numerically cut off. Duct acoustic problems in Category 2 are solved by the DRP finite difference scheme with the selective artificial damping method and results are presented and compared to reference solutions.

  14. A comparison of two central difference schemes for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.

    1990-01-01

    Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: an explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.

  15. Effects of computer monitor-emitted radiation on oxidant/antioxidant balance in cornea and lens from rats

    PubMed Central

    Namuslu, Mehmet; Devrim, Erdinç; Durak, İlker

    2009-01-01

    Purpose: This study aims to investigate the possible effects of computer monitor-emitted radiation on the oxidant/antioxidant balance in corneal and lens tissues and to observe any protective effects of vitamin C (vit C). Methods: Four groups (PC monitor, PC monitor plus vitamin C, vitamin C, and control), each consisting of ten Wistar rats, were studied. The study lasted for three weeks. Vitamin C was administered in oral doses of 250 mg/kg/day. The computer and computer plus vitamin C groups were exposed to computer monitors while the other groups were not. Malondialdehyde (MDA) levels and superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), and catalase (CAT) activities were measured in corneal and lens tissues of the rats. Results: In corneal tissue, MDA levels and CAT activity were found to increase in the computer group compared with the control group. In the computer plus vitamin C group, MDA level, SOD, and GSH-Px activities were higher and CAT activity lower than those in the computer and control groups. Regarding lens tissue, in the computer group, MDA levels and GSH-Px activity were found to increase, as compared to the control and computer plus vitamin C groups, and SOD activity was higher than that of the control group. In the computer plus vitamin C group, SOD activity was found to be higher and CAT activity to be lower than those in the control group. Conclusion: The results of this study suggest that computer-monitor radiation leads to oxidative stress in the corneal and lens tissues, and that vitamin C may prevent oxidative effects in the lens. PMID:19960068

  16. Comparison of techniques for approximating ocean bottom topography in a wave-refraction computer model

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1975-01-01

    A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.
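
    Of the techniques compared, quadratic least squares is the simplest: fit a six-coefficient surface z(x, y) to scattered depth soundings. A hedged sketch on synthetic data (the variable names and test surface are illustrative):

        # Quadratic least-squares surface fit to scattered depths.
        import numpy as np

        def fit_quadratic_surface(x, y, z):
            # Columns of the design matrix: 1, x, y, x^2, xy, y^2
            A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
            coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
            return coeffs

        rng = np.random.default_rng(2)
        x, y = rng.random(50), rng.random(50)
        z = 5 - 2 * x + 0.5 * y + x**2 + rng.normal(0, 0.01, 50)  # synthetic bathymetry
        print(fit_quadratic_surface(x, y, z).round(2))  # approx. [5, -2, 0.5, 1, 0, 0]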

  17. Identified state-space prediction model for aero-optical wavefronts

    NASA Astrophysics Data System (ADS)

    Faghihi, Azin; Tesch, Jonathan; Gibson, Steve

    2013-07-01

    A state-space disturbance model and associated prediction filter for aero-optical wavefronts are described. The model is computed by system identification from a sequence of wavefronts measured in an airborne laboratory. Estimates of the statistics and flow velocity of the wavefront data are shown and can be computed from the matrices in the state-space model without returning to the original data. Numerical results compare velocity values and power spectra computed from the identified state-space model with those computed from the aero-optical data.

  18. Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.

    PubMed

    Kiran Kumar, G R; Reddy, M Ramasubba

    2018-06-08

    Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require the estimation of the background electroencephalogram (EEG) noise components. Even though this leads to improved performance in low signal-to-noise ratio (SNR) conditions, it makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA) due to the additional computational cost. In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without involving extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing the temporal information, and it does not generalize SSVEP based on rigid templates. Data from ten test subjects were used to evaluate the proposed method, and statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy compared to standard CCA and MEC in low SNR conditions, and detection accuracy on par with MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI).
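
    The baseline CCA detector referred to above correlates multichannel EEG with sinusoidal references at each candidate stimulus frequency; a minimal sketch on synthetic data follows (window length, harmonics, and channel count are illustrative choices, not the paper's):

        # CCA-based SSVEP detection sketch on synthetic EEG.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        fs, dur = 250, 2.0                     # sampling rate (Hz), window (s)
        t = np.arange(int(fs * dur)) / fs

        def reference(freq, n_harmonics=2):
            # Sine/cosine reference set at the stimulus frequency and harmonics
            return np.column_stack(
                [f(2 * np.pi * h * freq * t)
                 for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)])

        def cca_score(eeg, freq):
            u, v = CCA(n_components=1).fit_transform(eeg, reference(freq))
            return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

        rng = np.random.default_rng(0)
        eeg = (0.5 * np.sin(2 * np.pi * 12 * t)[:, None]     # 12 Hz SSVEP component
               + rng.standard_normal((t.size, 4)))           # 4 noisy channels
        scores = {f: cca_score(eeg, f) for f in (10, 12, 15)}
        print(max(scores, key=scores.get), scores)           # 12 Hz should score highest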

  19. Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2017-06-13

    We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.

  20. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capabilities and high memory bandwidth compared to central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that need high-volume calculations, an interest reflected in the concepts of "general purpose computation on graphics processing units (GPGPU)" and "stream processing". Graphics processors are powerful yet cheap and affordable hardware, and so have become an alternative to conventional processors: chips that were once fixed-function graphics hardware have been transformed into modern, powerful, programmable processors. The biggest problem is that graphics processing units use a programming model unlike current programming methods, so efficient GPU programming requires re-coding the existing algorithm with the limitations and structure of the graphics hardware in mind; multi-core processors cannot be programmed effectively with traditional event-driven methods. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, providing results quickly and accurately, whereas CPUs, which perform one computation at a time according to the flow control, are slower in comparison. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA; sample images of various sizes were processed and the results compared with those of the CPU program. The GPGPU method is especially useful for repeating the same computations on highly dense data, finding the solution quickly.
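
    The data-parallel mapping described above, one GPU thread per output element, can be sketched in the CUDA dialect of Python's numba package. This is a hedged illustration only: it needs an NVIDIA GPU with numba installed, and the affine resampling kernel is a hypothetical stand-in for the study's projective rectification, not its actual code.

        # Illustrative GPGPU sketch: one thread computes one output pixel.
        # Requires an NVIDIA GPU and the numba package; the affine map is a
        # hypothetical stand-in for the paper's projective rectification.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def affine_resample(src, a, b, c, d, e, f, dst):
            y, x = cuda.grid(2)                 # absolute thread indices
            if y < dst.shape[0] and x < dst.shape[1]:
                sx = int(a * x + b * y + c)     # source column for this pixel
                sy = int(d * x + e * y + f)     # source row for this pixel
                if 0 <= sy < src.shape[0] and 0 <= sx < src.shape[1]:
                    dst[y, x] = src[sy, sx]     # nearest-neighbour resampling

        src = np.random.rand(1024, 1024).astype(np.float32)
        dst = np.zeros_like(src)
        threads = (16, 16)                      # threads per block
        blocks = ((dst.shape[0] + 15) // 16, (dst.shape[1] + 15) // 16)
        affine_resample[blocks, threads](src, 1.0, 0.1, 0.0, 0.1, 1.0, 0.0, dst)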

  1. Combat Simulation Using Breach Computer Language

    DTIC Science & Technology

    1979-09-01

    ... simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The ... duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of the BREACH ...

  2. Hollow cathodes as electron emitting plasma contactors Theory and computer modeling

    NASA Technical Reports Server (NTRS)

    Davis, V. A.; Katz, I.; Mandell, M. J.; Parks, D. E.

    1987-01-01

    Several researchers have suggested using hollow cathodes as plasma contactors for electrodynamic tethers, particularly to prevent the Shuttle Orbiter from charging to large negative potentials. Previous studies have shown that fluid models with anomalous scattering can describe the electron transport in hollow cathode generated plasmas. An improved theory of the hollow cathode plasmas is developed and computational results using the theory are compared with laboratory experiments. Numerical predictions for a hollow cathode plasma source of the type considered for use on the Shuttle are presented, as are three-dimensional NASCAP/LEO calculations of the emitted ion trajectories and the resulting potentials in the vicinity of the Orbiter. The computer calculations show that the hollow cathode plasma source makes vastly superior contact with the ionospheric plasma compared with either an electron gun or passive ion collection by the Orbiter.

  3. Implementation and validation of a wake model for low-speed forward flight

    NASA Technical Reports Server (NTRS)

    Komerath, Narayanan M.; Schreiber, Olivier A.

    1987-01-01

    The computer implementation and calculations of the induced velocities produced by a wake model consisting of a trailing vortex system defined from a prescribed time-averaged downwash distribution are detailed. Induced velocities are computed by approximating each spiral turn by a pair of large straight vortex segments positioned at critical points relative to where the induced velocity is required. A remainder term for the rest of the spiral is added. This approach results in decreased computation time compared to classical models where each spiral turn is broken down into small straight vortex segments. The model includes features such as harmonic variation of circulation, downwash outside of the blade and/or outside the tip path plane, blade bound vorticity induced velocity with harmonic variation of circulation, and time averaging. The influence of various options and parameters on the results is investigated, and results are compared to experimental field measurements, with which reasonable agreement is obtained. The capabilities of the model as well as its extension possibilities are studied. The performance of the model in predicting the recently acquired NASA Langley inflow data base for a four-bladed rotor is compared to that of the Scully free wake code, a well-established program which requires much greater computational resources. It is found that the two codes predict the experimental data with essentially the same accuracy and show the same trends.

  4. A comparative study and validation of upwind and central-difference Navier-Stokes codes for high-speed flows

    NASA Technical Reports Server (NTRS)

    Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.

    1988-01-01

    A comparative study was made using 4 different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp, the flow through a hypersonic inlet, and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar recently developed implicit upwind differencing technology, while the fourth uses a well established explicit method. The computed results were compared with experimental data where available.

  5. An Evaluation of Computer-Aided Instruction in an Introductory Biostatistics Course.

    ERIC Educational Resources Information Center

    Forsythe, Alan B.; Freed, James R.

    1979-01-01

    Evaluates the effectiveness of computer assisted instruction for teaching biostatistics to first year students at the UCLA School of Dentistry. Results do not demonstrate the superiority of CAI but do suggest that CAI compares favorably to conventional lecture and programed instruction methods. (RAO)

  6. NASA Aerodynamics Program Annual Report 1991

    DTIC Science & Technology

    1992-04-01

    results have been compared ... A correlation effort on an AH-1G Cobra helicopter model with wind tunnel data ... has been completed. ... The computational studies have shown the trapped vortex to be a viable ... Preliminary water-channel and wind-tunnel tests have shown the ...

  7. Computer-Based Instruction in Dietetics Education.

    ERIC Educational Resources Information Center

    Schroeder, Lois; Kent, Phyllis

    1982-01-01

    Details the development and system design of a computer-based instruction (CBI) program designed to provide tutorial training in diet modification as part of renal therapy and provides the results of a study that compared the effectiveness of the CBI program with the traditional lecture/laboratory method. (EAO)

  8. Does medical students' preference of test format (computer-based vs. paper-based) have an influence on performance?

    PubMed

    Hochlehnert, Achim; Brass, Konstantin; Moeltner, Andreas; Juenger, Jana

    2011-10-25

    Computer-based examinations (CBE) ensure higher efficiency with respect to producibility and assessment compared to paper-based examinations (PBE). However, students often have objections against CBE and are afraid of getting poorer results in a CBE. The aims of this study were (1) to assess the readiness of students for, and their objections to, a CBE vs. PBE, (2) to examine the acceptance of and satisfaction with the CBE on a voluntary basis, and (3) to compare the results of the examinations conducted in the different formats. Fifth-year medical students were introduced to an examination player and were free to choose the format for their test. The reason behind the choice of format as well as the satisfaction with the choice was evaluated after the test with a questionnaire. Additionally, the expected and achieved examination results were measured. Out of 98 students, 36 voluntarily chose a CBE (37%) and 62 students chose a PBE (63%). The two groups did not differ concerning sex, computer experience, achieved examination results, or satisfaction with the chosen format. Reasons for the students' objections against CBE included the possibility of making outlines or written notes, a better overview, additional noise from the keyboard, and missing habits normally present in a paper-based exam. The students with the CBE tended to judge their examination to be clearer and more understandable, and they saw their results as independent of the format. Voluntary computer-based examinations led to test scores equal to those of a paper-based format.

  9. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
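
    The AD-versus-divided-differences comparison lends itself to a tiny illustration with jax on a scalar stand-in function; the paper differentiates an entire Navier-Stokes solver, so the function and step size below are purely hypothetical:

        # AD vs. divided differences on a toy function (not the flow solver).
        import jax
        import jax.numpy as jnp

        def lift(d2):
            # Hypothetical response of a force coefficient to a damping coefficient.
            return jnp.sin(d2) * jnp.exp(-0.5 * d2 ** 2)

        d2 = 0.7
        ad = jax.grad(lift)(d2)                        # exact derivative via AD
        h = 1e-4
        dd = (lift(d2 + h) - lift(d2 - h)) / (2 * h)   # central divided difference
        print(ad, dd)   # AD is exact to machine precision; DD carries O(h**2) error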

  10. Computed secondary-particle energy spectra following nonelastic neutron interactions with C-12 for E(n) between 15 and 60 MeV: Comparisons of results from two calculational methods

    NASA Astrophysics Data System (ADS)

    Dickens, J. K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, d(sigma)/dE, following nonelastic neutron interactions with C-12 for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.

  11. The identification of the variation of atherosclerosis plaques by invasive and non-invasive methods

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.

    1982-01-01

    Computer-enhanced visualization of coronary arteries and lesions within them is discussed, comparing invasive and noninvasive methods. Trial design factors in computer lesion assessment are briefly discussed, and the use of the computer edge-tracking technique in that assessment is described. The results of a small pilot study conducted on serial cineangiograms of men with premature atherosclerosis are presented. A canine study to determine the feasibility of quantifying atherosclerosis from intravenous carotid angiograms is discussed. Comparative error for arterial and venous injection in the canines is determined, and the mode of processing the films to achieve better visualization is described. The application of the computer edge-tracking technique to an ultrasound image of the human carotid artery is also shown and briefly discussed.

  12. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
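
    A minimal serial sketch of the pair counting such codes accelerate, in the Landy-Szalay form with scipy's KD-tree, makes the estimator concrete; the paper's code does this in parallel C with HEALPix-based jackknife and bootstrap resampling, which this sketch omits:

        # Landy-Szalay two-point correlation estimate via KD-tree pair counts.
        import numpy as np
        from scipy.spatial import cKDTree

        def landy_szalay(data, rand, r_bins):
            nd, nr = len(data), len(rand)
            td, tr = cKDTree(data), cKDTree(rand)
            # Cumulative pair counts at each radius, differenced into shells
            dd = np.diff(td.count_neighbors(td, r_bins)) / (nd * (nd - 1))
            rr = np.diff(tr.count_neighbors(tr, r_bins)) / (nr * (nr - 1))
            dr = np.diff(td.count_neighbors(tr, r_bins)) / (nd * nr)
            return (dd - 2 * dr + rr) / rr      # xi(r) = (DD - 2DR + RR) / RR

        rng = np.random.default_rng(1)
        data = rng.random((2000, 3))            # stand-in "galaxy" positions
        rand = rng.random((4000, 3))            # unclustered random catalogue
        print(landy_szalay(data, rand, np.linspace(0.01, 0.2, 8)))  # near 0 for uniform data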

  13. Scapular notching in reverse shoulder arthroplasty: validation of a computer impingement model.

    PubMed

    Roche, Christopher P; Marczuk, Yann; Wright, Thomas W; Flurin, Pierre-Henri; Grey, Sean G; Jones, Richard B; Routman, Howard D; Gilot, Gregory J; Zuckerman, Joseph D

    2013-01-01

    The purpose of this study is to validate a reverse shoulder computer impingement model and quantify the impact of implant position on scapular impingement by comparing it to that of a radiographic analysis of 256 patients who received the same prosthesis and were followed postoperatively for an average of 22.2 months. A geometric computer analysis quantified anterior and posterior scapular impingement as the humerus was internally and externally rotated at varying levels of abduction and adduction relative to a fixed scapula at defined glenoid implant positions. These impingement results were compared to the radiographic study of 256 patients who were analyzed for notching, glenoid baseplate position, and glenosphere overhang. The computer model predicted no impingement at 0° humeral abduction in the scapular plane for the 38 mm, 42 mm, and 46 mm devices when the glenoid baseplate cage peg is positioned 18.6 mm, 20.4 mm, and 22.7 mm from the inferior glenoid rim (of the reamed glenoid) or when glenosphere overhang of 4.6 mm, 4.7 mm, and 4.5 mm was obtained with each size glenosphere, respectively. When compared to the radiographic analysis, the computer model correctly predicted impingement based upon glenoid baseplate position in 18 of 26 patients with scapular notching and based upon glenosphere overhang in 15 of 26 patients with scapular notching. Reverse shoulder implant positioning plays an important role in scapular notching. The results of this study demonstrate that the computer impingement model can effectively predict impingement based upon implant positioning in a majority of patients who developed scapular notching clinically. This computer analysis provides guidance to surgeons on implant positions that reduce scapular notching, a well-documented complication of reverse shoulder arthroplasty.

  14. Software Design Strategies for Multidisciplinary Computational Fluid Dynamics

    DTIC Science & Technology

    2012-07-01

    on the left-hand side of Figure 3. The resulting unstructured grid system does a good job of representing the flowfield locally around the solid ... Laboratory [16-19]. It uses Cartesian block-structured grids, which lead to a substantially more efficient computational execution compared to the ... including blade sectional lift and pitching moment. These Helios-computed airloads show good agreement with the experimental data. Many of the ...

  15. Unbiased and automated identification of a circulating tumour cell definition that associates with overall survival.

    PubMed

    Ligthart, Sjoerd T; Coumans, Frank A W; Attard, Gerhardt; Cassidy, Amy Mulick; de Bono, Johann S; Terstappen, Leon W M M

    2011-01-01

    Circulating tumour cells (CTC) in patients with metastatic carcinomas are associated with poor survival and can be used to guide therapy. Classification of CTC, however, remains subjective, as they are morphologically heterogeneous. We acquired digital images, using the CellSearch™ system, from the blood of 185 castration-resistant prostate cancer (CRPC) patients and 68 healthy subjects to define CTC by computer algorithms. Patient survival data was used as the training parameter for the computer to define CTC. The computer-generated CTC definition was validated on a separate CRPC dataset comprising 100 patients. The optimal definition of the computer-defined CTC (aCTC) was stricter than the manual CellSearch CTC (mCTC) definition, and as a consequence aCTC were less frequent. The computer-generated CTC definition resulted in hazard ratios (HRs) of 2.8 for baseline and 3.9 for follow-up samples, which is comparable to the mCTC definition (baseline HR 2.9, follow-up HR 4.5). Validation resulted in HRs at baseline/follow-up of 3.9/5.4 for the computer definition and 4.8/5.8 for the manual definition. In conclusion, we have defined and validated CTC by clinical outcome using a perfectly reproducing automated algorithm.

  16. Health Literacy Assessment of the STOFHLA: Paper versus electronic administration continuation study.

    PubMed

    Chesser, Amy K; Keene Woods, Nikki; Wipperman, Jennifer; Wilson, Rachel; Dong, Frank

    2014-02-01

    Low health literacy is associated with poor health outcomes. Research is needed to understand the mechanisms and pathways of its effects. Computer-based assessment tools may improve efficiency and cost-effectiveness of health literacy research. The objective of this preliminary study was to assess if administration of the Short Test of Functional Health Literacy in Adults (STOFHLA) through a computer-based medium was comparable to the paper-based test in terms of accuracy and time to completion. A randomized, crossover design was used to compare computer versus paper format of the STOFHLA at a Midwestern family medicine residency program. Eighty participants were initially randomized to either computer (n = 42) or paper (n = 38) format of the STOFHLA. After a 30-day washout period, participants returned to complete the other version of the STOFHLA. Data analysis revealed no significant difference between paper- and computer-based surveys (p = .9401; N = 57). The majority of participants showed "adequate" health literacy via paper- and computer-based surveys (100% and 97% of participants, respectively). Electronic administration of STOFHLA results were equivalent to the paper administration results for evaluation of adult health literacy. Future investigations should focus on expanded populations in multiple health care settings and validation of other health literacy screening tools in a clinical setting.

  17. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
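
    For orientation, the TET10 element's shape functions have a standard closed form in barycentric (volume) coordinates; this small evaluation sketch uses the textbook expressions, not FEBio source:

        # Ten-node quadratic tetrahedron shape functions at a point.
        import numpy as np

        def tet10_shape(L1, L2, L3):
            L4 = 1.0 - L1 - L2 - L3                       # fourth barycentric coordinate
            L = (L1, L2, L3, L4)
            corner = [Li * (2.0 * Li - 1.0) for Li in L]  # 4 corner nodes
            edges = [(0, 1), (1, 2), (0, 2), (0, 3), (1, 3), (2, 3)]
            mid = [4.0 * L[i] * L[j] for i, j in edges]   # 6 mid-edge nodes
            return np.array(corner + mid)

        N = tet10_shape(0.25, 0.25, 0.25)   # element centroid
        print(N, N.sum())                   # partition of unity: the sum is 1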

  18. Wall functions for the kappa-epsilon turbulence model in generalized nonorthogonal curvilinear coordinates

    NASA Technical Reports Server (NTRS)

    Sondak, D. L.; Pletcher, R. H.; Vandalsem, W. R.

    1992-01-01

    A k-epsilon turbulence model suitable for compressible flow, including the new wall function formulation, has been incorporated into an existing compressible Reynolds-averaged Navier-Stokes code, F3D. The low Reynolds number k-epsilon model of Chien (1982) was added for comparison with the present method. A number of features were added to the F3D code, including improved far-field boundary conditions and viscous terms in the streamwise direction. A series of computations of increasing complexity was run to test the effectiveness of the new formulation. Flow over a flat plate was computed by using both orthogonal and nonorthogonal grids, and the friction coefficients and velocity profiles were compared with a semi-empirical equation. Flow over a body of revolution at zero angle of attack was then computed to test the method's ability to handle flow over a curved surface. Friction coefficients and velocity profiles were compared to test data. All models gave good results on a relatively fine grid, but only the wall function formulation was effective with coarser grids. Finally, in order to demonstrate the method's ability to handle complex flow fields, separated flow over a prolate spheroid at angle of attack was computed, and results were compared to test data. The results were also compared to a k-epsilon model by Kim and Patel (1991), in which a one-equation model patched in at the wall was employed. Both models gave reasonable solutions, but improvement is required for accurate prediction of friction coefficients in the separated regions.

  19. Electrode Position and Current Amplitude Modulate Impulsivity after Subthalamic Stimulation in Parkinson's Disease—A Computational Study

    PubMed Central

    Mandali, Alekhya; Chakravarthy, V. Srinivasa; Rajan, Roopa; Sarma, Sankara; Kishore, Asha

    2016-01-01

    Background: Subthalamic Nucleus Deep Brain Stimulation (STN-DBS) is highly effective in alleviating motor symptoms of Parkinson's disease (PD) which are not optimally controlled by dopamine replacement therapy. Clinical studies and reports suggest that STN-DBS may result in increased impulsivity and de novo impulse control disorders (ICD). Objective/Hypothesis: We aimed to compare performance on a decision making task, the Iowa Gambling Task (IGT), in healthy conditions (HC), untreated and medically-treated PD conditions with and without STN stimulation. We hypothesized that the position of electrode and stimulation current modulate impulsivity after STN-DBS. Methods: We built a computational spiking network model of basal ganglia (BG) and compared the model's STN output with STN activity in PD. Reinforcement learning methodology was applied to simulate IGT performance under various conditions of dopaminergic and STN stimulation where IGT total and bin scores were compared among various conditions. Results: The computational model reproduced neural activity observed in normal and PD conditions. Untreated and medically-treated PD conditions had lower total IGT scores (higher impulsivity) compared to HC (P < 0.0001). The electrode position that happens to selectively stimulate the part of the STN corresponding to an advantageous panel on IGT resulted in de-selection of that panel and worsening of performance (P < 0.0001). Supratherapeutic stimulation amplitudes also worsened IGT performance (P < 0.001). Conclusion(s): In our computational model, STN stimulation led to impulsive decision making in IGT in PD condition. Electrode position and stimulation current influenced impulsivity which may explain the variable effects of STN-DBS reported in patients. PMID:27965590

  20. Deformations of thick two-material cylinder under axially varying radial pressure

    NASA Technical Reports Server (NTRS)

    Patel, Y. A.

    1976-01-01

    Stresses and deformations in a thick, short, composite cylinder subjected to axially varying radial pressure are studied. The effect of slippage at the interface is examined. In the NASTRAN finite element model, the multipoint constraint feature is utilized. Results are compared with theoretical analysis and the SAP-IV computer code. Results from the NASTRAN computer code are in good agreement with the analytical solutions. The results suggest a considerable influence of interfacial slippage on the axial bending stresses in the cylinder.

  1. Active and Collaborative Learning in an Introductory Electrical and Computer Engineering Course

    ERIC Educational Resources Information Center

    Kotru, Sushma; Burkett, Susan L.; Jackson, David Jeff

    2010-01-01

    Active and collaborative learning instruments were introduced into an introductory electrical and computer engineering course. These instruments were designed to assess specific learning objectives and program outcomes. Results show that students developed an understanding comparable to that of more advanced students assessed later in the…

  2. Dissociation of the Ethyl Radical: An Exercise in Computational Chemistry

    ERIC Educational Resources Information Center

    Nassabeh, Nahal; Tran, Mark; Fleming, Patrick E.

    2014-01-01

    A set of exercises for use in a typical physical chemistry laboratory course are described, modeling the unimolecular dissociation of the ethyl radical to form ethylene and atomic hydrogen. Students analyze the computational results both qualitatively and quantitatively. Qualitative structural changes are compared to approximate predicted values…

  3. From evolutionary computation to the evolution of things.

    PubMed

    Eiben, Agoston E; Smith, Jim

    2015-05-28

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.
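
    As a generic illustration of the evolutionary loop the review surveys, here is a (1+1) evolutionary algorithm maximizing the number of 1-bits in a string; the problem and parameters are textbook choices, not taken from the paper:

        # (1+1) evolutionary algorithm on the OneMax problem.
        import random

        def one_plus_one_ea(n=40, generations=2000):
            p_mut = 1.0 / n                                   # standard bit-flip rate
            parent = [random.randint(0, 1) for _ in range(n)]
            for _ in range(generations):
                child = [1 - b if random.random() < p_mut else b for b in parent]
                if sum(child) >= sum(parent):                 # keep the no-worse offspring
                    parent = child
            return sum(parent)

        print(one_plus_one_ea())   # typically reaches the optimum, n = 40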

  4. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    The computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over a wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of computational speed.

  5. Using 3D computer simulations to enhance ophthalmic training.

    PubMed

    Glittenberg, C; Binder, S

    2006-01-01

    Our purpose was to develop more effective methods of demonstrating and teaching complex topics in ophthalmology with the use of computer-aided three-dimensional (3D) animation and interactive multimedia technologies. We created 3D animations and interactive computer programmes demonstrating the neuro-ophthalmological nature of the oculomotor system, including the anatomy, physiology and pathophysiology of the extra-ocular eye muscles and the oculomotor cranial nerves, as well as pupillary symptoms of neurological diseases. At the University of Vienna we compared their teaching effectiveness to conventional teaching methods in a comparative study involving 100 medical students, a multiple-choice exam, and a survey. The comparative study showed that our students achieved significantly better test results (80%) than the control group (63%) (diff. = 17 +/- 5%, p = 0.004). The survey showed a positive reaction to the software and a strong preference to have more subjects and techniques demonstrated in this fashion. Three-dimensional computer animation technology can significantly increase the quality and efficiency of the education and demonstration of complex topics in ophthalmology.

  6. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  7. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    PubMed

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

    Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing team process, but its impact on behavioural outcomes may be moderated by further factors, such as task duration.

  8. CMG-Biotools, a Free Workbench for Basic Comparative Microbial Genomics

    PubMed Central

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Background: Today, there are more than a hundred times as many sequenced prokaryotic genomes as there were in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics, which can reveal strain-specific characteristics, diversity within species, and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. Results: The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source code for all programs is provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes including 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST, and graphical analyses of DNA structures. Conclusion: This paper shows the strength and diverse uses of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources, using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training. PMID:23577086
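
    As a toy version of the "basic DNA and codon statistics" step, GC content and codon counts take only a few lines of pure Python; CMG-biotools wraps such analyses behind its own tools, so this is illustrative only:

        # GC content and codon usage of a (hypothetical) coding sequence.
        from collections import Counter

        def gc_content(seq):
            seq = seq.upper()
            return (seq.count("G") + seq.count("C")) / len(seq)

        def codon_usage(cds):
            cds = cds.upper()
            usable = len(cds) - len(cds) % 3            # ignore trailing partial codon
            return Counter(cds[i:i + 3] for i in range(0, usable, 3))

        cds = "ATGGCGTGCACCGGTTAA"                      # made-up mini-ORF
        print(f"GC = {gc_content(cds):.2f}")
        print(codon_usage(cds).most_common(3))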

  9. Computational Control Workstation: Users' perspectives

    NASA Technical Reports Server (NTRS)

    Roithmayr, Carlos M.; Straube, Timothy M.; Tave, Jeffrey S.

    1993-01-01

    A Workstation has been designed and constructed for rapidly simulating motions of rigid and elastic multibody systems. We examine the Workstation from the point of view of analysts who use the machine in an industrial setting. Two aspects of the device distinguish it from other simulation programs. First, one uses a series of windows and menus on a computer terminal, together with a keyboard and mouse, to provide a mathematical and geometrical description of the system under consideration. The second hallmark is a facility for animating simulation results. An assessment of the amount of effort required to numerically describe a system to the Workstation is made by comparing the process to that used with other multibody software. The apparatus for displaying results as a motion picture is critiqued as well. In an effort to establish confidence in the algorithms that derive, encode, and solve equations of motion, simulation results from the Workstation are compared to answers obtained with other multibody programs. Our study includes measurements of computational speed.

  10. Simulations and Measurements of Human Middle Ear Vibrations Using Multi-Body Systems and Laser-Doppler Vibrometry with the Floating Mass Transducer.

    PubMed

    Böhnke, Frank; Bretan, Theodor; Lehner, Stefan; Strenger, Tobias

    2013-10-22

    The transfer characteristic of the human middle ear with an applied middle ear implant (floating mass transducer) is examined computationally with a multi-body system approach and compared with experimental results. For this purpose, the geometry of the middle ear was reconstructed from micro-computed tomography slice data and prepared for a multi-body system simulation. The transfer function of the floating mass transducer, which is the ratio of the input voltage and the generated force, is derived based on a physical context. The numerical results obtained with the multi-body system approach are compared with experimental results from laser Doppler measurements of the stapes footplate velocities of five different specimens. Although slightly differing anatomical structures were used for the calculation and the measurement, a high correspondence with respect to the course of stapes footplate displacement along the frequency axis was found. Notably, a notch at frequencies just below 1 kHz occurred. Additionally, phase courses of stapes footplate displacements were determined computationally where possible and compared with experimental results. The examinations were undertaken to quantify stapes footplate displacements in the clinical practice of middle ear implants and, also, to develop fitting strategies on a physical basis for hearing-impaired patients aided with middle ear implants.

  11. Experimental, Theoretical, and Computational Investigation of Separated Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.

    2004-01-01

    A detailed experimental, theoretical, and computational study of separated nozzle flows has been conducted. Experimental testing was performed at the NASA Langley 16-Foot Transonic Tunnel Complex. As part of a comprehensive static performance investigation, force, moment, and pressure measurements were made and schlieren flow visualization was obtained for a sub-scale, non-axisymmetric, two-dimensional, convergent- divergent nozzle. In addition, two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and algebraic Reynolds stress modeling. For reference, experimental and computational results were compared with theoretical predictions based on one-dimensional gas dynamics and an approximate integral momentum boundary layer method. Experimental results from this study indicate that off-design overexpanded nozzle flow was dominated by shock induced boundary layer separation, which was divided into two distinct flow regimes; three- dimensional separation with partial reattachment, and fully detached two-dimensional separation. The test nozzle was observed to go through a marked transition in passing from one regime to the other. In all cases, separation provided a significant increase in static thrust efficiency compared to the ideal prediction. Results indicate that with controlled separation, the entire overexpanded range of nozzle performance would be within 10% of the peak thrust efficiency. By offering savings in weight and complexity over a conventional mechanical exhaust system, this may allow a fixed geometry nozzle to cover an entire flight envelope. The computational simulation was in excellent agreement with experimental data over most of the test range, and did a good job of modeling internal flow and thrust performance. An exception occurred at low nozzle pressure ratios, where the two-dimensional computational model was inconsistent with the three-dimensional separation observed in the experiment. In general, the computation captured the physics of the shock boundary layer interaction and shock induced boundary layer separation in the nozzle, though there were some differences in shock structure compared to experiment. Though minor, these differences could be important for studies involving flow control or thrust vectoring of separated nozzles. Combined with other observations, this indicates that more detailed, three-dimensional computational modeling needs to be conducted to more realistically simulate shock-separated nozzle flows.

  12. Quantum computing on encrypted data

    NASA Astrophysics Data System (ADS)

    Fisher, K. A. G.; Broadbent, A.; Shalm, L. K.; Yan, Z.; Lavoie, J.; Prevedel, R.; Jennewein, T.; Resch, K. J.

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes, it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.
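
    For intuition, protocols of this kind build on the quantum one-time pad: the client hides each qubit behind random Pauli operators X^a Z^b, and for Clifford gates the server's operation merely re-labels the client's classical key bits. The single-qubit state-vector sketch below is a toy illustration of that key-update rule for the Hadamard gate, not the authors' linear-optics implementation.

```python
import numpy as np

# Pauli matrices and the Hadamard gate.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng(1)
a, b = rng.integers(0, 2, size=2)        # the client's secret key bits

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)               # a random input qubit |psi>

pad = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
out = H @ (pad @ psi)                    # untrusted server applies H to the
                                         # encrypted qubit, learning nothing

# Key-update rule for H: H X^a Z^b = Z^a X^b H, so the client decrypts
# with (Z^a X^b)^(-1) = X^b Z^a and recovers H|psi> exactly.
dec = np.linalg.matrix_power(X, b) @ np.linalg.matrix_power(Z, a) @ out
print(f"overlap with H|psi>: {abs(np.vdot(H @ psi, dec)):.6f}")   # -> 1.000000
```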

  13. Quantum computing on encrypted data.

    PubMed

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes, it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.

  14. Wheat productivity estimates using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Colwell, J. E. (Principal Investigator); Rice, D. P.; Bresnahan, P. A.

    1977-01-01

    The author has identified the following significant results. Large-area LANDSAT yield estimates were generated and compared with estimates computed using a meteorological yield model (CCEA). Both sets of estimates were compared with Kansas Crop and Livestock Reporting Service (KCLRS) estimates of yield in an attempt to assess the relative and absolute accuracy of the LANDSAT and CCEA estimates; results were inconclusive. A large-area direct wheat prediction procedure was implemented, and initial results produced a wheat production estimate comparable with the KCLRS estimate.

  15. Comparative study of cranial anthropometric measurement by traditional calipers to computed tomography and three-dimensional photogrammetry.

    PubMed

    Mendonca, Derick A; Naidoo, Sybill D; Skolnick, Gary; Skladman, Rachel; Woo, Albert S

    2013-07-01

    Craniofacial anthropometry by direct caliper measurements is a common method of quantifying the morphology of the cranial vault. New digital imaging modalities including computed tomography and three-dimensional photogrammetry are similarly being used to obtain craniofacial surface measurements. This study sought to compare the accuracy of anthropometric measurements obtained by calipers versus 2 methods of digital imaging. Standard anterior-posterior, biparietal, and cranial index measurements were directly obtained on 19 participants with an age range of 1 to 20 months. Computed tomographic scans and three-dimensional photographs were both obtained on each child within 2 weeks of the clinical examination. Two analysts measured the anterior-posterior and biparietal distances on the digital images. Measures of reliability and bias between the modalities were calculated and compared. Caliper measurements were found to underestimate the anterior-posterior and biparietal distances as compared with those of the computed tomography and the three-dimensional photogrammetry (P < 0.001). Cranial index measurements between the computed tomography and the calipers differed by up to 6%. The difference between the 2 modalities was statistically significant (P = 0.021). The biparietal and cranial index results were similar between the digital modalities, but the anterior-posterior measurement was greater with the three-dimensional photogrammetry (P = 0.002). The coefficients of variation for repeated measures based on the computed tomography and the three-dimensional photogrammetry were 0.008 and 0.007, respectively. In conclusion, measurements based on digital modalities are generally reliable and interchangeable. Caliper measurements lead to underestimation of anterior-posterior and biparietal values compared with digital imaging.

  16. Computational Modeling of Electrochemical-Poroelastic Bending Behaviors of Conducting Polymer (PPy) Membranes

    NASA Astrophysics Data System (ADS)

    Toi, Yutaka; Jung, Woosang

    The electrochemical-poroelastic bending behavior of conducting polymer actuators is attractive for potential applications such as artificial muscles and MEMS. In the present study, a computational model is presented for the bending behavior of polypyrrole-based actuators. The one-dimensional governing equation for ionic transport in electrolytes given by Tadokoro et al. is combined with a finite element model of the poroelastic behavior of polypyrrole that accounts for the effect of finite deformation. The validity of the proposed model is illustrated by comparing the computed results with experimental results in the literature.

  17. Simulation of Fatigue Behavior of High Temperature Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Tong, Mike T.; Singhal, Suren N.; Chamis, Christos C.; Murthy, Pappu L. N.

    1996-01-01

    A relatively new, generalized approach is described for the computational simulation of the fatigue behavior of high temperature metal matrix composites (HT-MMCs). The theory is embedded in a special-purpose computer code. The effectiveness of the code in predicting the fatigue behavior of HT-MMCs is demonstrated by applying it to a silicon-fiber/titanium-matrix HT-MMC. Comparative results are shown for mechanical fatigue, thermal fatigue, and thermomechanical (in-phase and out-of-phase) fatigue, as well as for the effects of oxidizing environments on fatigue life. These results show that the new approach reproduces available experimental data remarkably well.

  18. Evaluation of Root Canal Preparation Using Rotary System and Hand Instruments Assessed by Micro-Computed Tomography

    PubMed Central

    Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond

    2015-01-01

    Background: Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Material/Methods: Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. Results: ProTaper rotary files left less untouched root canal surface compared with manual preparation in the coronal, middle, and apical sectors (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Conclusions: Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal. PMID:26092929

  19. Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment, part 2

    NASA Technical Reports Server (NTRS)

    Parker, Joey K.

    1987-01-01

    Computer vision based analysis for the MGM experiment is continued and expanded into new areas. Volumetric strains of granular material triaxial test specimens have been measured from digitized images. A computer-assisted procedure is used to identify the edges of the specimen, and the edges are used in a 3-D model to estimate specimen volume. The results of this technique compare favorably to conventional measurements. A simplified model of the magnification caused by refraction of light within the water of the test apparatus was also developed. This model yields good results when the distance between the camera and the test specimen is large compared to the specimen height. An algorithm for a more accurate 3-D magnification correction is also presented. The use of composite and RGB (red-green-blue) color cameras is discussed, and potentially significant benefits from using an RGB camera are presented.
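
    As a sketch of the volume-estimation idea: if, for illustration only, the specimen is treated as axisymmetric, each image row's left and right edge positions give a local radius, and the volume follows by summing thin disks. The real analysis used a full 3-D model of the detected edges, and the names below are hypothetical.

```python
import numpy as np

def volume_from_edges(left_px, right_px, mm_per_px):
    """Estimate specimen volume from silhouette edges, assuming (for this
    sketch) an axisymmetric specimen reconstructed as a stack of thin disks."""
    radii = 0.5 * (np.asarray(right_px) - np.asarray(left_px)) * mm_per_px
    dh = mm_per_px                          # one image row per slice
    return float(np.sum(np.pi * radii**2 * dh))

# Hypothetical 100-row silhouette of a specimen bulging at mid-height.
rows = np.arange(100)
bulge = 3.0 * np.sin(np.pi * rows / 99.0)
left = 40.0 - bulge
right = 60.0 + bulge
print(f"estimated volume: {volume_from_edges(left, right, 0.5):.1f} mm^3")
```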

  20. Near-real-time simulations of bioelectric activity in small mammalian hearts using graphics processing units

    PubMed Central

    Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot

    2014-01-01

    Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on a traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295
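
    A minimal sketch of why the ODE stage suits a GPU: the membrane update is identical and independent at every node, so it vectorizes across one large array, and the same code maps onto GPU hardware by swapping the array library (e.g., numpy for cupy). The FitzHugh-Nagumo model below is a simple stand-in for the detailed ionic models used in the study.

```python
import numpy as np

def membrane_step(v, w, dt, i_stim=0.0, a=0.1, eps=0.01, beta=0.5, gamma=1.0):
    """One explicit-Euler step of a FitzHugh-Nagumo membrane model applied
    to every node at once; each array element is an independent ODE system."""
    dv = v * (v - a) * (1.0 - v) - w + i_stim
    dw = eps * (beta * v - gamma * w)
    return v + dt * dv, w + dt * dw

n = 1_000_000                     # hypothetical node count
v = np.zeros(n)
w = np.zeros(n)
v[:1000] = 0.5                    # stimulate a small patch of tissue
for _ in range(200):
    v, w = membrane_step(v, w, dt=0.05)
print(f"max membrane variable after 200 steps: {v.max():.3f}")
```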

  1. CFD Modeling of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.

    2001-01-01

    NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.

  2. Scattering of sound waves by a compressible vortex

    NASA Technical Reports Server (NTRS)

    Colonius, Tim; Lele, Sanjiva K.; Moin, Parviz

    1991-01-01

    Scattering of plane sound waves by a compressible vortex is investigated by direct computation of the two-dimensional Navier-Stokes equations. Nonreflecting boundary conditions are utilized, and their accuracy is established by comparing results on different sized domains. Scattered waves are directly measured from the computations. The resulting amplitude and directivity pattern of the scattered waves are discussed and compared to various theoretical predictions. For compact vortices (zero circulation), the directly computed scattered waves are in good agreement with predictions based on an acoustic analogy. Strong scattering at about ±30 degrees from the direction of incident wave propagation is observed. Back scattering is an order of magnitude smaller than forward scattering. For vortices with finite circulation, refraction of the sound by the mean flow field outside the vortex core is found to be important in determining the amplitude and directivity of the scattered wave field.

  3. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on an artificial compressibility method and the other on a pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
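
    To make the contrast concrete, the heart of a pressure projection step is a single pressure Poisson solve that renders the provisional velocity field divergence-free, with no subiterations; an artificial-compressibility method instead iterates within each physical time step. Below is a minimal projection sketch on a doubly periodic grid using FFT derivatives (our toy setup, not the paper's solver).

```python
import numpy as np

def spectral_div(u, v, dx):
    """Divergence of (u, v) on a doubly periodic grid via FFT derivatives."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)))

def project(u, v, dx, dt):
    """One projection step: solve lap(p) = div(u*)/dt, then correct
    u = u* - dt*grad(p) so the result is divergence-free."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                 # avoid 0/0 for the mean mode
    p_hat = np.fft.fft2(spectral_div(u, v, dx) / dt) / (-k2)
    p_hat[0, 0] = 0.0                              # pressure fixed up to a constant
    u = u - dt * np.real(np.fft.ifft2(1j * kx * p_hat))
    v = v - dt * np.real(np.fft.ifft2(1j * ky * p_hat))
    return u, v

n, dt = 64, 1e-3
dx = 1.0 / n
x = np.arange(n) * dx
u = np.sin(2.0 * np.pi * x)[:, None] * np.ones((1, n))   # provisional field,
v = np.zeros((n, n))                                      # deliberately divergent
u, v = project(u, v, dx, dt)
print(f"max |div u| after projection: {np.abs(spectral_div(u, v, dx)).max():.2e}")
```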

  4. Turbulent heat transfer performance of single stage turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amano, R.S.; Song, B.

    1999-07-01

    To increase the efficiency and the power of modern power plant gas turbines, designers are continually trying to raise the maximum turbine inlet temperature. Here, a numerical study based on the Navier-Stokes equations of a three-dimensional turbulent flow in a single stage turbine stator/rotor passage has been conducted and reported in this paper. The full Reynolds-stress closure model (RSM) was used for the computations, and the results were compared with computations made using the Launder-Sharma low-Reynolds-number κ-ε model. The computational results obtained using these models were compared in order to investigate the turbulence effect in the near-wall region. The set of governing equations in a generalized curvilinear coordinate system was discretized using the finite volume method with non-staggered grids. The numerical modeling was performed to capture the interaction between the stator and rotor blades.

  5. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, M.S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.

  6. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  7. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  8. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    2003-08-19

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  9. Stresses in acoustically excited panels and shuttle insulation tiles

    NASA Technical Reports Server (NTRS)

    Otalvo, I. U.

    1976-01-01

    Natural vibration and acoustic response results are presented for a 36 x 18 inch panel with eighteen 6 x 6-inch tiles of 1.0-, 1.6-, and 2.3-inch thickness. Computed results for an untiled panel are compared with experiments performed earlier. Natural frequency and acoustic response comparisons are also given for independent analyses performed on tiled and untiled panels. The results indicate the general applicability of the computer programs developed for use as shuttle design and analysis tools.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyne, G.; Magnin, H.; Berliat, G.

    A supervisor has been developed to allow successive 3D computations of different quantities by different software packages on the same physical problem. The noise of a given power oil transformer can be deduced from the surface vibrations of the tank. These vibrations are obtained through a mechanical computation whose inputs are the electromagnetic forces provided by an electromagnetic computation. Magnetic, mechanical, and acoustic experimental data are compared with the results of the 3D computations. Stress is put on the main characteristics of the supervisor, such as the transfer of a given quantity from one mesh to the other.

  11. Parametric instabilities of rotor-support systems with application to industrial ventilators

    NASA Technical Reports Server (NTRS)

    Parszewski, Z.; Krodkiemski, T.; Marynowski, K.

    1980-01-01

    The interaction of rotor-support systems with parametric excitation is considered for both unequal principal shaft stiffnesses (generators) and offset-disc rotors (ventilators). Instability regions and types of instability are computed in the first case, and parametric resonances in the second case. Computed and experimental results are compared for laboratory machine models. A field case study of parametric vibrations in industrial ventilators is reported. Computed parametric resonances are confirmed by field measurements, and some industrial failures are explained. The dynamic influence and gyroscopic effect of the supporting structures are also shown and computed.

  12. Split-mouth comparison of the accuracy of computer-generated and conventional surgical guides.

    PubMed

    Farley, Nathaniel E; Kennedy, Kelly; McGlumphy, Edwin A; Clelland, Nancy L

    2013-01-01

    Recent clinical studies have shown that implant placement is highly predictable with computer-generated surgical guides; however, the reliability of these guides has not been compared clinically to that of conventional guides. This study aimed to compare the accuracy of reproducing planned implant positions with computer-generated and conventional surgical guides using a split-mouth design. Ten patients received two implants each in symmetric locations. All implants were planned virtually using a software program and information from cone beam computed tomographic scans taken with scan appliances in place. Patients were randomly selected for computer-aided design/computer-assisted manufacture (CAD/CAM)-guided implant placement on their right or left side. Conventional guides were used on the contralateral side. Patients underwent cone beam computed tomography postoperatively. Planned and actual implant positions were compared using three-dimensional analyses capable of measuring volume overlap as well as differences in angles and in coronal and apical positions. Results were compared using a mixed-model repeated-measures analysis of variance and were further analyzed using a Bartlett test for unequal variance (α = .05). Implants placed with CAD/CAM guides were closer to the planned positions in all eight categories examined. However, statistically significant differences were shown only for coronal horizontal distances. It was also shown that CAD/CAM guides had less variability than conventional guides, which was statistically significant for apical distance. Implants placed using CAD/CAM surgical guides provided greater accuracy in a lateral direction than conventional guides. In addition, CAD/CAM guides were more consistent in their deviation from the planned locations than conventional guides.

  13. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background: Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated with the ACO and GA (including standard GA and improved GA) methods. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or the ACO method on AD microarray data. Conclusion: Compared with the Pearson distance and the Euclidean distance, the squared Euclidean distance generated the best quality gene orders computed by the GA and ACO methods. PMID:23369541
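
    For concreteness, the distance formulas being compared can be written in a few lines; the expression vectors below are hypothetical, and the study's finding was that the squared Euclidean form gave the best gene orders with both GA and ACO.

```python
import numpy as np

def euclidean(x, y):
    return float(np.sqrt(np.sum((x - y) ** 2)))

def squared_euclidean(x, y):            # the formula the study found best
    return float(np.sum((x - y) ** 2))

def pearson_distance(x, y):             # 1 - correlation, in [0, 2]
    return float(1.0 - np.corrcoef(x, y)[0, 1])

# Hypothetical expression profiles for two genes across four arrays.
x = np.array([2.1, 0.3, 1.7, 0.9])
y = np.array([1.9, 0.5, 1.6, 1.2])
for dist in (euclidean, squared_euclidean, pearson_distance):
    print(f"{dist.__name__:18s} {dist(x, y):.4f}")
```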

  14. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    NASA Astrophysics Data System (ADS)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the 'traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  15. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images

    NASA Astrophysics Data System (ADS)

    Erdt, Marius; Sakas, Georgios

    2010-03-01

    This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. These local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: 1) user-guided positioning, and 2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases that show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation on 30 clinical CT data sets shows an average Dice coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.
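
    For reference, the reported 93% agreement is a Dice overlap between the automatic segmentation and the manually delineated ground truth; the measure is standard and can be computed as below (the voxel masks here are hypothetical).

```python
import numpy as np

def dice(seg, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    seg = seg.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# Hypothetical voxel masks: a cuboid 'kidney' and a slightly shifted result.
truth = np.zeros((50, 50, 50), dtype=bool)
truth[10:40, 10:40, 10:40] = True
seg = np.zeros_like(truth)
seg[12:42, 10:40, 10:40] = True
print(f"Dice coefficient: {dice(seg, truth):.3f}")   # ~0.93 for this overlap
```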

  16. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
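
    A minimal sketch of the domain-decomposition pattern described here: each rank owns a slab of the unknowns, computes on it locally, and global quantities (such as a residual norm for the iterative solver) are assembled by reduction. This is illustrative mpi4py code; THC-MP's actual data structures and solvers are far more elaborate.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Partition one million unknowns across the ranks (hypothetical sizes).
n_global = 1_000_000
base, extra = divmod(n_global, size)
n_local = base + (1 if rank < extra else 0)
local = np.full(n_local, 1.0)            # this rank's share of a residual vector

# Each rank computes its local contribution; a reduction forms the global norm.
local_sq = float(np.dot(local, local))
global_norm = np.sqrt(comm.allreduce(local_sq, op=MPI.SUM))
if rank == 0:
    print(f"global residual norm: {global_norm:.1f}")   # sqrt(1e6) = 1000.0
```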

  17. Correlation between Preoperative High Resolution Computed Tomography (CT) Findings with Surgical Findings in Chronic Otitis Media (COM) Squamosal Type.

    PubMed

    Karki, S; Pokharel, M; Suwal, S; Poudel, R

    Background: The exact role of high resolution computed tomography (HRCT) of the temporal bone in the preoperative assessment of chronic suppurative otitis media atticoantral disease still remains controversial. Objective: To evaluate the role of high resolution computed tomography of the temporal bone in chronic suppurative otitis media atticoantral disease and to compare preoperative computed tomographic findings with intra-operative findings. Method: Prospective, analytical study conducted among 65 patients with chronic suppurative otitis media atticoantral disease in the Department of Radiodiagnosis, Kathmandu University Dhulikhel Hospital, between January 2015 and July 2016. The operative findings were compared with the results of imaging. The parameters of comparison were erosion of the ossicles, scutum, facial canal, lateral semicircular canal, sigmoid and tegmen plates, along with extension of disease to the sinus tympani and facial recess. Sensitivity, specificity, negative predictive values, and positive predictive values were calculated. Result: High resolution computed tomography of the temporal bone offered sensitivity (Se) and specificity (Sp) of 100% for visualization of sigmoid and tegmen plate erosion. The performance of HRCT in detecting malleus (Se=100%, Sp=95.23%), incus (Se=100%, Sp=80.48%) and stapes (Se=96.55%, Sp=71.42%) erosion was excellent. It offered precise information about facial canal erosion (Se=100%, Sp=75%), scutum erosion (Se=100%, Sp=96.87%) and extension of disease to the facial recess and sinus tympani (Se=83.33%, Sp=100%). High resolution computed tomography showed a specificity of 100% for lateral semicircular canal erosion but with low sensitivity (Se=53.84%). Conclusion: The findings of high resolution computed tomography and the intra-operative findings were well comparable except for lateral semicircular canal erosion. High resolution computed tomography of the temporal bone acts as a road map for the surgeon to identify the extent of disease, plan the appropriate procedure, and prepare for potential complications that can be encountered during surgery.

  18. The Mental Health Impact of Computer and Internet Training on a Multi-ethnic Sample of Community-Dwelling Older Adults: Results of a Pilot Randomised Controlled Trial

    PubMed Central

    Laganá, Luciana; García, James J.

    2013-01-01

    Introduction: We preliminarily explored the effects of computer and internet training in older age and attempted to address the diversity gap in the ethnogeriatric literature, given that, in our study’s sample, only one-third of the participants self-identified as White. The aim of this investigation was to compare two groups - the control and the experimental conditions - regarding theme 1) computer attitudes and related self-efficacy, and theme 2) self-esteem and depressive symptomatology. Methods: Sixty non-institutionalized residents of Los Angeles County (mean age ± SD: 69.12 ± 10.37 years; age range: 51-92) were randomly assigned to either the experimental group (n=30) or the waitlist/control group (n=30). The experimental group was involved in 6 weeks of one-on-one computer and internet training for one 2-hour session per week. The same training was administered to the control participants after their post-test. Outcome measures included the four variables, organized into the two aforementioned themes. Results: There were no significant between-group differences in either post-test computer attitudes or self-esteem. However, findings revealed that the experimental group reported greater computer self-efficacy, compared to the waitlist/control group, at post-test/follow-up [F(1,56)=28.89, p=0.001, η2=0.01]. Additionally, at the end of the computer and internet training, there was a substantial and statistically significant decrease in depression scores among those in the experimental group when compared to the waitlist/control group [F(1,55)=9.06, p<0.004, η2=0.02]. Conclusions: There were significant improvements in favour of the experimental group in computer self-efficacy and, of noteworthy clinical relevance, in depression, as evidenced by a decreased percentage of significantly depressed experimental subjects from 36.7% at baseline to 16.7% at the end of our intervention. PMID:24151452

  19. Effect of Gender on Computer Use and Attitudes of College Seniors

    NASA Astrophysics Data System (ADS)

    McCoy, Leah P.; Heafner, Tina L.

    Male and female students have historically had different computer attitudes and levels of computer use. These equity issues are of interest to researchers and practitioners who seek to understand why a digital divide exists between men and women. In this study, these questions were examined in an intensive computing environment in which all students at one university were issued identical laptop computers and used them extensively for 4 years. Self-reported computer use was examined for effects of gender. Attitudes toward computers were also assessed and compared for male and female students. The results indicated that when the technological environment was institutionally equalized for male and female students, many traditional findings of gender differences were not evident.

  20. Electroosmotic flow in a rectangular channel with variable wall zeta-potential: comparison of numerical simulation with asymptotic theory.

    PubMed

    Datta, Subhra; Ghosal, Sandip; Patankar, Neelesh A

    2006-02-01

    Electroosmotic flow in a straight micro-channel of rectangular cross-section is computed numerically for several situations where the wall zeta-potential is not constant but has a specified spatial variation. The results of the computation are compared with an earlier published asymptotic theory based on the lubrication approximation: the assumption that any axial variations take place on a long length scale compared to a characteristic channel width. The computational results are found to be in excellent agreement with the theory even when the scale of axial variations is comparable to the channel width. In the opposite limit, when the wavelength of fluctuations is much shorter than the channel width, the lubrication theory fails to describe the solution either qualitatively or quantitatively. In this short-wave limit the solution is well described by Ajdari's theory for electroosmotic flow between infinite parallel plates (Ajdari, A., Phys. Rev. E 1996, 53, 4996-5005). The infinitely thin electric double layer limit is assumed in the theory as well as in the simulation.
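
    For reference, both the asymptotic theory and the simulation work in the thin-double-layer limit, where the flow is driven by the classical Helmholtz-Smoluchowski slip velocity evaluated with the local zeta-potential (a standard result; the notation here is ours):

```latex
u_s(x) \;=\; -\,\frac{\epsilon\,\zeta(x)\,E_x}{\mu},
```

    where \epsilon is the fluid permittivity, \mu the viscosity, and E_x the applied axial field. The lubrication solution is built from this local relation, which is why it degrades once \zeta varies on scales comparable to or shorter than the channel width.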

  1. Dayglow and night glow of the Venusian upper atmosphere. Modelling and observations

    NASA Astrophysics Data System (ADS)

    Gronoff, G.; Lilensten, J.; Simon, C.; Barthélemy, M.; Leblanc, F.

    2007-08-01

    Aims. We present the modelling of the production of excited states of O, CO and N2 in the Venusian upper atmosphere, which allows us to compute the nightglow emissions. On the dayside, we also compute several emissions, taking advantage of the small influence of resonant scattering on forbidden transitions. Methods. We compute the photoionisation and photodissociation mechanisms, and thus the photoelectron production. We compute electron impact excitation and ionisation through a multi-stream stationary kinetic transport code. Finally, we compute the ion recombination with a stationary chemical model. Results. We predict altitude density profiles for the O(1S) and O(1D) states and the emissions corresponding to their different transitions. They are found to be very comparable to the observations without the need for stratospheric emissions. On the nightside, we highlight the role of the N + O2+ reaction in the creation of the O(1S) state. This reaction was suggested by Rees in 1975 (Frederick, 1976); it has been discussed several times since and, in spite of different studies, is still controversial. However, when we take it into consideration for Venus, it is shown to be the cause of almost 90% of the state production. We calculate the production intensities of the O(3S) and O(5S) states, which are needed for radiative transfer models. For CO we compute the Cameron band and fourth positive band emissions. For N2 we compute the LBH, first and second positive bands. All these values are successfully compared with experiment where data are available. Conclusions. For the first time, a comprehensive model is proposed to compute dayglow and nightglow emissions of the Venusian upper atmosphere. It relies on previous works, with noticeable improvements both in the transport and the chemical aspects. In the near future, a radiative transfer model will be used to compute optically thick lines in the dayglow, and a fluid model will be added to compute ionospheric densities and temperatures. We will present the first observational results from the Pic du Midi telescope in June 2007, in order to compare with our modelling.

  2. Solution of steady and unsteady transonic-vortex flows using Euler and full-potential equations

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Chuang, Andrew H.; Hu, Hong

    1989-01-01

    Two methods are presented for inviscid transonic flows: the unsteady Euler equations in a rotating frame of reference for transonic vortex flows, and an integral solution of the full-potential equation, with and without embedded Euler domains, for transonic airfoil flows. The computational results cover steady and unsteady conical vortex flows, 3-D steady transonic vortex flow, and transonic airfoil flows. The results are in good agreement with other computational results and experimental data. The rotating-frame-of-reference solution is potentially efficient compared with the space-fixed reference formulation with dynamic gridding. The integral equation solution with an embedded Euler domain is computationally efficient and as accurate as the Euler equations.

  3. Control mechanism of double-rotator-structure ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, SONG; Liping, YAN

    2017-03-01

    The double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, namely giant-data-bit parallel computing and a reconfigurable processor; it can handle thousands of data bits in parallel and can run much faster than electronic computers and other optical computing systems demonstrated so far. In order to put DRSTOP into practical application, this paper establishes a series of methods, namely a task classification method, a data-bit allocation method, a control information generation method, a control information formatting and sending method, and a method for obtaining decoded results. These methods form the control mechanism of DRSTOP, which turns it into an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the tension between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designs a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible, and efficient.

  4. Mutual coupling effects in antenna arrays, volume 1

    NASA Technical Reports Server (NTRS)

    Collin, R. E.

    1986-01-01

    Mutual coupling between rectangular apertures in a finite antenna array, in an infinite ground plane, is analyzed using the vector potential approach. The method of moments is used to solve the equations that result from setting the tangential magnetic fields across each aperture equal. The approximation uses a set of vector potential model functions to solve for equivalent magnetic currents. A computer program was written to carry out this analysis and the resulting currents were used to determine the co- and cross-polarized far zone radiation patterns. Numerical results for various arrays using several modes in the approximation are presented. Results for one and two aperture arrays are compared against published data to check on the agreement of this model with previous work. Computer derived results are also compared against experimental results to test the accuracy of the model. These tests of the accuracy of the program showed that it yields valid data.

  5. Portraits of PBL: Course Objectives and Students' Study Strategies in Computer Engineering, Psychology and Physiotherapy.

    ERIC Educational Resources Information Center

    Dahlgren, Madeleine Abrandt

    2000-01-01

    Compares the role of course objectives in relation to students' study strategies in problem-based learning (PBL). Results comprise data from three PBL programs at Linkopings University (Sweden), in physiotherapy, psychology, and computer engineering. Faculty provided course objectives to function as supportive structures and guides for students'…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae

    Reliability testing was performed for the software of the Shutdown System (SDS) computers for Wolsong Nuclear Power Plant Units 2, 3, and 4. Test profiles were applied to the SDS computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.

  7. Computer Use of a Medical Dictionary to Select Search Words.

    ERIC Educational Resources Information Center

    O'Connor, John

    1986-01-01

    Explains an experiment in text-searching retrieval for cancer questions which developed and used computer procedures (via human simulation) to select search words from medical dictionaries. This study is based on an earlier one in which search words were humanly selected, and the recall results of the two studies are compared. (Author/LRW)

  8. CCSSM Challenge: Graphing Ratio and Proportion

    ERIC Educational Resources Information Center

    Kastberg, Signe E.; D'Ambrosio, Beatriz S.; Lynch-Davis, Kathleen; Mintos, Alexia; Krawczyk, Kathryn

    2013-01-01

    A renewed emphasis was placed on ratio and proportional reasoning in the middle grades in the Common Core State Standards for Mathematics (CCSSM). The expectation for students includes the ability to not only compute and then compare and interpret the results of computations in context but also interpret ratios and proportions as they are…

  9. Cognitive Asymmetry, Computer Science Students, and Professional Programmers.

    ERIC Educational Resources Information Center

    Gordon, Harold W.

    1990-01-01

    Discussion of right brain versus left brain skills focuses on a study that compared the performances of computer science students, professional programers, and bank employees on eight tests of brain function. Results are reported which suggest that the cognitive profile may be an important indicator for success in certain occupations. (16…

  10. Analysis of Computer Algebra System Tutorials Using Cognitive Load Theory

    ERIC Educational Resources Information Center

    May, Patricia

    2004-01-01

    Most research in the area of Computer Algebra Systems (CAS) has been designed to compare the effectiveness of instructional technology to traditional lecture-based formats. While results are promising, research also indicates evidence of the steep learning curve imposed by the technology. Yet no studies have been conducted to investigate this…

  11. Computer-Assisted Instruction and Continuing Motivation.

    ERIC Educational Resources Information Center

    Mosley, Mary Lou; And Others

    Effects of two feedback conditions--comment and no comment--on the motivation of sixth grade students to continue with computer assisted instruction (CAI) were investigated, and results for boys and for girls were compared. Subjects were 62 students--29 boys and 33 girls--from a suburban elementary school who were randomly assigned to the comment…

  12. Results of a feasibility study using the Newton-Raphson digital computer program to identify lifting body derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Sim, A. G.

    1973-01-01

    A brief study was made to assess the applicability of the Newton-Raphson digital computer program as a routine technique for extracting aerodynamic derivatives from flight tests of lifting body types of vehicles. Lateral-directional flight data from flight tests of the HL-10 lifting body research vehicle were utilized. The results, in general, show the computer program to be a reliable and expedient means of extracting derivatives for this class of vehicles as a standard procedure. This was true even when stability augmentation was used. As a result of the study, a credible set of HL-10 lateral-directional derivatives was obtained from flight data. These derivatives are compared with results from wind-tunnel tests.

  13. Extension of a nonlinear systems theory to general-frequency unsteady transonic aerodynamic responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels are then used to predict linear and nonlinear unsteady aerodynamic responses via convolution, and the predictions are compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly, but at significant computational cost savings.
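
    As a minimal illustration of the linear (first-order) part of this approach: once a unit impulse response has been computed with the CFD code, the response to any input follows from a cheap discrete convolution instead of a new CFD run; the second-order term adds a double convolution with a two-dimensional kernel. The kernel and input below are hypothetical stand-ins, not CAP-TSD data.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t) * np.sin(6.0 * t)        # hypothetical linear impulse response
u = 0.5 * np.sin(2.0 * t)               # arbitrary pitch input (any amplitude)

# Discrete convolution approximates y(t) = integral of h(tau) u(t - tau) dtau.
y = np.convolve(u, h)[: len(t)] * dt
print(f"peak predicted response: {y.max():.4f}")
```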

  14. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, hardware improvements have enhanced performance by less than a factor of approximately 3, timing studies are very problem dependent, and MCNP4A runs about as fast as MCNP4.

  15. Using three-dimensional computational modeling to compare the geometrical fitness of two kinds of proximal femoral intramedullary nail for Chinese femur.

    PubMed

    Zhang, Sheng; Zhang, Kairui; Wang, Yimin; Feng, Wei; Wang, Bowei; Yu, Bin

    2013-01-01

    The aim of this study was to use three-dimensional (3D) computational modeling to compare the geometric fitness of two kinds of proximal femoral intramedullary nail, PFNA-II and InterTan, in Chinese femurs. Computed tomography (CT) scans of 120 normal adult Chinese cadaveric femurs were collected for analysis. With 3D computational technology, the anatomical fitness between nail and bone was quantified by the impingement incidence and by the maximum thicknesses and lengths by which the nail protruded into the cortex in the virtual bone model at the proximal, middle, and distal portions of the implant in the femur. The results showed that the PFNA-II may fit the Chinese proximal femur better than the InterTan, while the distal portion of the InterTan may perform better than that of the PFNA-II; the anatomic fitness of both nails for Chinese patients may not be very satisfactory. As a result, both implants need further modifications to meet the needs of the Chinese population.

  16. Thin-layer and full Navier-Stokes calculations for turbulent supersonic flow over a cone at an angle of attack

    NASA Technical Reports Server (NTRS)

    Smith, Crawford F.; Podleski, Steve D.

    1993-01-01

    The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report, results obtained with CFL3D, a thin-layer Navier-Stokes code, are compared with results obtained from PARC3D, a full Navier-Stokes code. In order to gain an understanding of the use of CFL3D, a simple problem was chosen in which several key features of the code could be exercised. The problem chosen is a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. The solutions obtained with the CFL3D code compared well with the PARC3D solutions.

  17. Medical education as a science: the quality of evidence for computer-assisted instruction.

    PubMed

    Letterie, Gerard S

    2003-03-01

    A marked increase in the number of computer programs for computer-assisted instruction in the medical sciences has occurred over the past 10 years. The quality of both the programs and the literature that describes these programs has varied considerably. The purposes of this study were to evaluate the published literature that described computer-assisted instruction in medical education and to assess the quality of evidence for its implementation, with particular emphasis on obstetrics and gynecology. Reports published between 1988 and 2000 on computer-assisted instruction in medical education were identified through a search of MEDLINE and the Educational Resources Information Center (ERIC) and a review of the bibliographies of the articles that were identified. Studies were selected if they included a description of computer-assisted instruction in medical education, regardless of the type of computer program. Data were extracted with a content analysis of 210 reports. The reports were categorized according to study design (comparative, prospective, descriptive, review, or editorial), type of computer-assisted instruction, medical specialty, and measures of effectiveness. Computer-assisted instruction programs included online technologies, CD-ROMs, video laser disks, multimedia workstations, virtual reality, and simulation testing. Studies were identified in all medical specialties, with a preponderance in internal medicine, general surgery, radiology, obstetrics and gynecology, pediatrics, and pathology. Ninety-six percent of the articles described a favorable impact of computer-assisted instruction in medical education, regardless of the quality of the evidence. Of the 210 reports that were identified, 60% were noncomparative, descriptive reports of new techniques in computer-assisted instruction, and 15% and 14% were reviews and editorials, respectively, of existing technology. Eleven percent of studies were comparative and included some form of assessment of the effectiveness of the computer program. These assessments included pre- and posttesting and questionnaires to score program quality, perceptions of the medical students and/or residents regarding the program, and impact on learning. In one half of these comparative studies, computer-assisted instruction was compared with traditional modes of teaching, such as text and lectures. Six studies compared performance before and after the computer-assisted instruction. Improvements were shown in 5 of the studies. In the remainder of the studies, computer-assisted instruction appeared to result in similar test performance. Despite study design or outcome, most articles described enthusiastic endorsement of the programs by the participants, including medical students, residents, and practicing physicians. Only 1 study included a cost analysis. Thirteen of the articles were in obstetrics and gynecology. Computer-assisted instruction has assumed an increasing role in medical education. In spite of enthusiastic endorsement and continued improvements in software, few studies of good design clearly demonstrate improvement in medical education over traditional modalities. There are no comparative studies in obstetrics and gynecology that demonstrate a clear-cut advantage. Future studies of computer-assisted instruction that include comparisons and cost assessments to gauge their effectiveness over traditional methods may better define their precise role.

  18. An efficient nonlinear finite-difference approach in the computational modeling of the dynamics of a nonlinear diffusion-reaction equation in microbial ecology.

    PubMed

    Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E

    2013-12-01

    In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology which is computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results evince that the nonlinear approach results in a more efficient approximation to the solutions of the biofilm model considered, and demands less computer memory. Moreover, the positivity of initial profiles is preserved in practice by the proposed nonlinear scheme. Copyright © 2013 Elsevier Ltd. All rights reserved.
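
    To make the role of Newton's method concrete, the sketch below advances a one-dimensional diffusion-reaction (Fisher-KPP) equation with a nonlinear finite-difference step solved by Newton iteration. It illustrates the technique only; it is not the authors' exact biofilm scheme, whose degenerate diffusion is more delicate.

```python
import numpy as np

def newton_step(u, dt, dx, r=1.0, tol=1e-12, max_iter=20):
    """One implicit step of u_t = u_xx + r u (1 - u) with homogeneous
    Dirichlet ends, solving the nonlinear system F(w) = 0 by Newton."""
    n = len(u)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx**2
    w = u.copy()
    for _ in range(max_iter):
        F = w - u - dt * (lap @ w + r * w * (1.0 - w))
        J = np.eye(n) - dt * (lap + np.diag(r * (1.0 - 2.0 * w)))
        dw = np.linalg.solve(J, -F)
        w += dw
        if np.abs(dw).max() < tol:          # quadratic convergence near the root
            break
    return w

n, dx, dt = 101, 0.1, 0.05
u = np.zeros(n)
u[45:56] = 0.1                              # a small initial colony
for _ in range(40):
    u = newton_step(u, dt, dx)
print(f"max density after 40 steps: {u.max():.3f}, min: {u.min():.2e}")
```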

  19. Structural elucidation of antihemorrhage drug molecule Diethylammonium 2,5-dihydroxybenzene sulfonate - an insilico approach

    NASA Astrophysics Data System (ADS)

    Kumar, S. Anil; Bhaskar, BL

    2018-02-01

    An ab-initio computational study of the antihemorrhage drug molecule diethylammonium 2,5-dihydroxybenzene sulfonate, popularly known as ethamsylate, has been carried out using Gaussian 09. The molecular geometry was optimized with the density functional theory method at the B3LYP/6-311 level. Different geometrical parameters such as bond lengths and bond angles were computed and compared against the experimental results available in the literature. Fourier transform infrared scanning of the title molecule was performed, and vibrational frequencies were also computed using the Gaussian software. The presence of O-H---O hydrogen bonds between C6H5O5S- anions and N-H---O hydrogen bonds between anion and cation is also evident in the computational studies. In general, satisfactory concordance has been observed between computational and experimental results.

  20. New approach to canonical partition functions computation in Nf=2 lattice QCD at finite baryon density

    NASA Astrophysics Data System (ADS)

    Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.

    2017-05-01

    We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a procedure with a few steps. We first compute numerically the quark number density for imaginary chemical potential iμ_q^I. Then we restore the grand canonical partition function for imaginary chemical potential using a fitting procedure for the quark number density. Finally, we compute the canonical partition functions using high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as the confining phases. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
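
    The pipeline can be illustrated schematically in Python (all functional forms and coefficients below are invented for the sketch, with volume and temperature factors absorbed into the fit coefficients; the actual lattice data and fits are in the paper): the fitted number density is integrated to give log Z_GC at imaginary chemical potential, and Z_n is then recovered as a high-precision Fourier coefficient.

      import mpmath as mp

      # Fit coefficients for Im n_q(theta) = sum_k a_k sin(k theta) -- invented
      # numbers standing in for the fitted lattice data.
      mp.mp.dps = 50                            # high working precision
      a = [mp.mpf('0.8'), mp.mpf('-0.05')]

      def log_zgc(theta):
          # integral of the fitted density over theta gives log Z_GC(i theta)
          return mp.fsum(ak * (1 - mp.cos(k * theta)) / k
                         for k, ak in enumerate(a, start=1))

      def z_canonical(n):
          # Z_n = (1/pi) Int_0^pi Z_GC(i theta) cos(n theta) dtheta
          # (Z_GC is real and even in theta for this ansatz)
          return mp.quad(lambda t: mp.exp(log_zgc(t)) * mp.cos(n * t),
                         [0, mp.pi]) / mp.pi

      print([mp.nstr(z_canonical(n), 8) for n in range(4)])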

  1. COMPARE: a method for analyzing investment alternatives in industrial wood and bark energy systems

    Treesearch

    Peter J. Ince

    1983-01-01

    COMPARE is a FORTRAN computer program resulting from a study to develop methods for comparative economic analysis of alternatives in industrial wood and bark energy systems. COMPARE provides complete guidelines for economic analysis of wood and bark energy systems. As such, COMPARE can be useful to those who have only basic familiarity with investment analysis of wood...

  2. Selection of a model of Earth's albedo radiation, practical calculation of its effect on the trajectory of a satellite

    NASA Technical Reports Server (NTRS)

    Walch, J. J.

    1985-01-01

    Theoretical models of Earth's albedo radiation were proposed. By comparing the disturbing accelerations computed from a model with those measured in flight by the CACTUS accelerometer, the model was modified according to the results. Computation of the satellite orbit perturbations from such a model is very time consuming because, for each satellite position, the fluxes coming from every elementary surface of the terrestrial region visible from the satellite must be summed. The speed of computation is increased tenfold without significant loss of accuracy by storing (caching) some intermediate results. It is now possible to compare the orbit perturbations computed from the selected model with measurements of these perturbations from satellites such as LAGEOS.
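
    The caching idea can be sketched in Python as follows; the patch grid, albedo value, geometry factor, and quantization of satellite positions are all assumptions made for the illustration, not details of the selected model.

      import math
      from functools import lru_cache

      # Toy patch grid over the Earth and a constant albedo -- assumptions.
      PATCHES = [(math.radians(lat), math.radians(lon))
                 for lat in range(-80, 90, 20) for lon in range(0, 360, 20)]
      ALBEDO = 0.3

      @lru_cache(maxsize=None)
      def geometry_factor(patch_index, sat_key):
          # the expensive, purely geometric part of one patch's contribution;
          # keyed on a quantized satellite position so repeated orbit points
          # hit the cache instead of being recomputed
          lat, lon = PATCHES[patch_index]
          slat, slon, r = sat_key
          cos_gamma = (math.sin(lat) * math.sin(slat) +
                       math.cos(lat) * math.cos(slat) * math.cos(lon - slon))
          return max(cos_gamma, 0.0) / r**2     # crude visibility and falloff

      def albedo_flux(sat_lat, sat_lon, r, step=0.01):
          key = (round(sat_lat / step) * step,
                 round(sat_lon / step) * step, round(r, 3))
          return ALBEDO * sum(geometry_factor(i, key)
                              for i in range(len(PATCHES)))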

  3. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  4. Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2015-01-01

    We analyze the results of numerically exact computer modeling of the scattering and absorption properties of randomly oriented polydisperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by embedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can strongly affect both the integral radiometric and the differential scattering characteristics of the heterogeneous particle mixtures.

  5. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
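
    A rough Python sketch of the approach, using SciPy's damped LSQR as the inner solver and a Nelder-Mead simplex search over the regularization parameter; here a discrepancy-principle objective and random toy matrices stand in for the paper's criterion and diffuse-optical forward model.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(0)
      J = rng.standard_normal((200, 400))       # toy ill-posed sensitivity matrix
      x_true = np.zeros(400); x_true[180:220] = 1.0
      y = J @ x_true + 0.01 * rng.standard_normal(200)
      delta = 0.01 * np.sqrt(200)               # expected noise norm (assumed known)

      def objective(log_lam):
          lam = 10.0 ** log_lam[0]
          x = lsqr(J, y, damp=lam)[0]           # damped LSQR (Lanczos bidiagonalization)
          return (np.linalg.norm(y - J @ x) - delta) ** 2   # discrepancy misfit

      res = minimize(objective, x0=[0.0], method='Nelder-Mead')   # simplex search
      best_lam = 10.0 ** res.x[0]
      x_rec = lsqr(J, y, damp=best_lam)[0]      # final reconstruction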

  6. Notebook computer use on a desk, lap and lap support: effects on posture, performance and comfort.

    PubMed

    Asundi, Krishna; Odell, Dan; Luce, Adam; Dennerlein, Jack T

    2010-01-01

    This study quantified postures of users working on a notebook computer situated in their lap and tested the effect of using a device designed to increase the height of the notebook when placed on the lap. A motion analysis system measured head, neck and upper extremity postures of 15 adults as they worked on a notebook computer placed on a desk (DESK), the lap (LAP) and a commercially available lapdesk (LAPDESK). Compared with the DESK, the LAP increased downwards head tilt 6 degrees and wrist extension 8 degrees. Shoulder flexion and ulnar deviation decreased 13 degrees and 9 degrees, respectively. Compared with the LAP, the LAPDESK decreased downwards head tilt 4 degrees, neck flexion 2 degrees, and wrist extension 9 degrees. Users reported less discomfort and difficulty in the DESK configuration. Use of the lapdesk improved postures compared with the lap; however, all configurations resulted in high values of wrist extension, wrist deviation and downwards head tilt. STATEMENT OF RELEVANCE: This study quantifies postures of users working with a notebook computer in typical portable configurations. A better understanding of the postures assumed during notebook computer use can improve usage guidelines to reduce the risk of musculoskeletal injuries.

  7. Noise studies of communication systems using the SYSTID computer aided analysis program

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Dawson, C. T.

    1973-01-01

    SYSTID is a simple computer-aided design program for simulating data systems and communication links. The efficiency of the method was tested by simulating a linear analog communication system to determine its noise performance and comparing the SYSTID result with the result obtained by theoretical calculation. It is shown that the SYSTID program is readily applicable to the analysis of these types of systems.

  8. Effects of Conjugation in Length and Dimension on Spectroscopic Properties of Fluorene-Based Chromophores from Experiment and Theory (Preprint)

    DTIC Science & Technology

    2006-07-01

    catalyzed iodo-bromo exchange reaction and (b) condensation of the resulting iodo-aldehyde precursor with 2-aminothiophenol in dimethylsulfoxide (DMSO). ... when compared to results from measurements carried out in a nonpolar solvent, which were available for some molecules. The computed oscillator strengths may resolve discordant experimental values in some cases, for example

  9. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.

  10. Flexion-relaxation ratio in computer workers with and without chronic neck pain.

    PubMed

    Pinheiro, Carina Ferreira; dos Santos, Marina Foresti; Chaves, Thais Cristina

    2016-02-01

    This study evaluated the flexion-relaxation phenomenon (FRP) and flexion-relaxation ratios (FR-ratios) using surface electromyography (sEMG) of the cervical extensor muscles of computer workers with and without chronic neck pain, as well as of healthy subjects who were not computer users. This study comprised 60 subjects 20-45 years of age, of which 20 were computer workers with chronic neck pain (CPG), 20 were computer workers without neck pain (NPG), and 20 were control individuals who do not use computers for work and use them less than 4 h/day for other purposes (CG). FRP and FR-ratios were analyzed using sEMG of the cervical extensors. Analysis of FR-ratios showed smaller values in the semispinalis capitis muscles of the two groups of workers compared to the control group. The reference FR-ratio (flexion relaxation ratio [FRR], defined as the maximum activity in 1 s of the re-extension/full flexion sEMG activity) was significantly higher in the computer workers with neck pain compared to the CG (CPG: 3.10, 95% confidence interval [CI95%] 2.50-3.70; NPG: 2.33, CI95% 1.93-2.74; CG: 1.99, CI95% 1.81-2.17; p<0.001). The FR-ratios and FRR of sEMG in this study suggested that computer use could increase recruitment of the semispinalis capitis during neck extension (concentric and eccentric phases), which could explain our results. These results also suggest that the FR-ratios of the semispinalis may be a potential functional predictive neuromuscular marker of asymptomatic neck musculoskeletal disorders since even asymptomatic computer workers showed altered values. On the other hand, the FRR values of the semispinalis capitis demonstrated a good discriminative ability to detect neck pain, and such results suggested that each FR-ratio could have a different application. Copyright © 2016 Elsevier Ltd. All rights reserved.
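
    The FRR definition quoted above can be made concrete with a short Python sketch on synthetic signals; the sampling rate, RMS measure of activity, window stride, and signal amplitudes are assumptions of the illustration.

      import numpy as np

      FS = 1000                                  # sampling rate in Hz (assumed)

      def rms(x):
          return np.sqrt(np.mean(x ** 2))

      def frr(emg_reextension, emg_full_flexion, fs=FS):
          # max RMS over any 1-s window of re-extension, divided by the
          # activity during full flexion -- the FRR definition quoted above
          win = fs
          starts = range(0, len(emg_reextension) - win + 1, fs // 10)
          max_reext = max(rms(emg_reextension[i:i + win]) for i in starts)
          return max_reext / rms(emg_full_flexion)

      rng = np.random.default_rng(1)
      reext = rng.normal(0.0, 50e-6, 3 * FS)     # 3 s of synthetic sEMG (volts)
      flex = rng.normal(0.0, 20e-6, 2 * FS)      # 2 s of full-flexion sEMG
      print(f"FRR = {frr(reext, flex):.2f}")     # larger values: more relaxation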

  11. Development and validation of a two-dimensional fast-response flood estimation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
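
    As a flavor of such a scheme, here is a minimal one-dimensional Python sketch of a conservative update with an upwind-biased (Rusanov) numerical flux for the shallow water equations; the report's model is two-dimensional and its exact differencing is not reproduced here, and the dam-break setup and grid are assumptions.

      import numpy as np

      g = 9.81

      def flux(q):                               # physical flux of [h, h*u]
          h, hu = q
          return np.array([hu, hu * hu / h + 0.5 * g * h * h])

      def rusanov(qL, qR):                       # upwind-biased interface flux
          a = max(abs(qL[1] / qL[0]) + np.sqrt(g * qL[0]),
                  abs(qR[1] / qR[0]) + np.sqrt(g * qR[0]))
          return 0.5 * (flux(qL) + flux(qR)) - 0.5 * a * (qR - qL)

      nx, dx, dt = 200, 1.0, 0.01                # grid and step are assumptions
      h0 = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)   # dam-break depths
      q = np.stack([h0, np.zeros(nx)])           # conserved variables [h, h*u]
      for _ in range(500):
          F = np.array([rusanov(q[:, i], q[:, i + 1])
                        for i in range(nx - 1)]).T
          q[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])   # conservative update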

  12. Effect of correlated decay on fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Lemberger, B.; Yavuz, D. D.

    2017-12-01

    We analyze noise in the circuit model of quantum computers when the qubits are coupled to a common bosonic bath and discuss the possible failure of scalability of quantum computation. Specifically, we investigate correlated (super-radiant) decay between the qubit energy levels from a two- or three-dimensional array of qubits without imposing any restrictions on the size of the sample. We first show that regardless of how the spacing between the qubits compares with the emission wavelength, correlated decay produces errors outside the applicability of the threshold theorem. This is because the sum of the norms of the two-body interaction Hamiltonians (which can be viewed as the upper bound on the single-qubit error) that decoheres each qubit scales with the total number of qubits and is unbounded. We then discuss two related results: (1) We show that the actual error (instead of the upper bound) on each qubit scales with the number of qubits. As a result, in the limit of a large number of qubits in the computer, N → ∞, correlated decay causes each qubit in the computer to decohere in ever shorter time scales. (2) We find the complete eigenvalue spectrum of the exchange Hamiltonian that causes correlated decay in the same limit. We show that the spread of the eigenvalue distribution grows faster with N compared to the spectrum of the unperturbed system Hamiltonian. As a result, as N → ∞, quantum evolution becomes completely dominated by the noise due to correlated decay. These results argue that scalable quantum computing may not be possible in the circuit model in a two- or three-dimensional geometry when the qubits are coupled to a common bosonic bath.

  13. Computer code for scattering from impedance bodies of revolution. Part 3: Surface impedance with s and phi variation. Analytical and numerical results

    NASA Technical Reports Server (NTRS)

    Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.

    1993-01-01

    The third phase of the development of the computer codes for scattering by coated bodies that has been part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago is described. The work reported discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. Integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarization of the incident plane wave are considered. The surface impedance is allowed to vary along both the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparing them with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparing them with those obtained in the first phase of this project. Results for body of revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.

  14. Electronic Circuit Analysis Language (ECAL)

    NASA Astrophysics Data System (ADS)

    Chenghang, C.

    1983-03-01

    The computer aided design technique is an important development in computer applications and an important component of computer science. A special language for electronic circuit analysis is the foundation of computer aided design or computer aided circuit analysis (abbreviated as CACD and CACA) of simulated circuits. Electronic Circuit Analysis Language (ECAL) is a comparatively simple and easy-to-use special-purpose language for circuit analysis that uses the FORTRAN language to carry out interpretive execution. It is capable of conducting dc analysis, ac analysis, and transient analysis of a circuit. Furthermore, the results of the dc analysis can be used directly as the initial conditions for the ac and transient analyses.

  15. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Analytical prediction with multidimensional computer programs and experimental verification of the performance, at a variety of operating conditions, of two traveling wave tubes with depressed collectors

    NASA Technical Reports Server (NTRS)

    Dayton, J. A., Jr.; Kosmahl, H. G.; Ramins, P.; Stankiewicz, N.

    1979-01-01

    Experimental and analytical results are compared for two high performance, octave bandwidth TWT's that use depressed collectors (MDC's) to improve the efficiency. The computations were carried out with advanced, multidimensional computer programs that are described here in detail. These programs model the electron beam as a series of either disks or rings of charge and follow their multidimensional trajectories from the RF input of the ideal TWT, through the slow wave structure, through the magnetic refocusing system, to their points of impact in the depressed collector. Traveling wave tube performance, collector efficiency, and collector current distribution were computed and the results compared with measurements for a number of TWT-MDC systems. Power conservation and correct accounting of TWT and collector losses were observed. For the TWT's operating at saturation, very good agreement was obtained between the computed and measured collector efficiencies. For a TWT operating 3 and 6 dB below saturation, excellent agreement between computed and measured collector efficiencies was obtained in some cases but only fair agreement in others. However, deviations can largely be explained by small differences in the computed and actual spent beam energy distributions. The analytical tools used here appear to be sufficiently refined to design efficient collectors for this class of TWT. However, for maximum efficiency, some experimental optimization (e.g., collector voltages and aperture sizes) will most likely be required.

  17. Computational Study of the CC3 Impeller and Vaneless Diffuser Experiment

    NASA Technical Reports Server (NTRS)

    Kulkarni, Sameer; Beach, Timothy A.; Skoch, Gary J.

    2013-01-01

    Centrifugal compressors are compatible with the low exit corrected flows found in the high pressure compressor of turboshaft engines and may play an increasing role in turbofan engines as engine overall pressure ratios increase. Centrifugal compressor stages are difficult to model accurately with RANS CFD solvers. A computational study of the CC3 centrifugal impeller in its vaneless diffuser configuration was undertaken as part of an effort to understand potential causes of RANS CFD mis-prediction in these types of geometries. Three steady, periodic cases of the impeller and diffuser were modeled using the TURBO Parallel Version 4 code: 1) a k-epsilon turbulence model computation on a 6.8 million point grid using wall functions, 2) a k-epsilon turbulence model computation on a 14 million point grid integrating to the wall, and 3) a k-omega turbulence model computation on the 14 million point grid integrating to the wall. It was found that all three cases compared favorably to data from inlet to impeller trailing edge, but the k-epsilon and k-omega computations had disparate results beyond the trailing edge and into the vaneless diffuser. A large region of reversed flow was observed in the k-epsilon computations which extended from 70% to 100% span at the exit rating plane, whereas the k-omega computation had reversed flow from 95% to 100% span. Compared to experimental data at near-peak-efficiency, the reversed flow region in the k-epsilon case resulted in an under-prediction in adiabatic efficiency of 8.3 points, whereas the k-omega case was 1.2 points lower in efficiency.

  18. Computational Study of the CC3 Impeller and Vaneless Diffuser Experiment

    NASA Technical Reports Server (NTRS)

    Kulkarni, Sameer; Beach, Timothy A.; Skoch, Gary J.

    2013-01-01

    Centrifugal compressors are compatible with the low exit corrected flows found in the high pressure compressor of turboshaft engines and may play an increasing role in turbofan engines as engine overall pressure ratios increase. Centrifugal compressor stages are difficult to model accurately with RANS CFD solvers. A computational study of the CC3 centrifugal impeller in its vaneless diffuser configuration was undertaken as part of an effort to understand potential causes of RANS CFD mis-prediction in these types of geometries. Three steady, periodic cases of the impeller and diffuser were modeled using the TURBO Parallel Version 4 code: (1) a k-epsilon turbulence model computation on a 6.8 million point grid using wall functions, (2) a k-epsilon turbulence model computation on a 14 million point grid integrating to the wall, and (3) a k-omega turbulence model computation on the 14 million point grid integrating to the wall. It was found that all three cases compared favorably to data from inlet to impeller trailing edge, but the k-epsilon and k-omega computations had disparate results beyond the trailing edge and into the vaneless diffuser. A large region of reversed flow was observed in the k-epsilon computations which extended from 70 to 100 percent span at the exit rating plane, whereas the k-omega computation had reversed flow from 95 to 100 percent span. Compared to experimental data at near-peak-efficiency, the reversed flow region in the k-epsilon case resulted in an underprediction in adiabatic efficiency of 8.3 points, whereas the k-omega case was 1.2 points lower in efficiency.

  19. Development of a Cross-Flow Fan Rotor for Vertical Take-Off and Landing Aircraft

    DTIC Science & Technology

    2013-06-01

    The commercial computational fluid dynamics software ANSYS CFX, along with the commercial computer-aided design software SolidWorks, was used to model and perform a parametric study on the number of rotor...the results found using ANSYS CFX. The experimental and analytical models were successfully compared at speeds ranging from 4,000 to 7,000 RPM...will make vertical take-off possible.

  20. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    1999-10-26

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  1. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    2001-06-05

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  2. Comparative randomised active drug controlled clinical trial of a herbal eye drop in computer vision syndrome.

    PubMed

    Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch

    2005-07-01

    A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to determine its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day, having symptoms of watering, redness, asthenia, irritation, foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus, were studied. One hundred and twenty patients were randomly given either placebo, a tears substitute (tears plus) or itone in identical vials with a specific code number and were instructed to put in one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome, both subjective and objective improvements were noticed with itone drops. Itone was found to be significantly better than placebo (p<0.01), and almost identical results were observed with tears plus (the difference was not statistically significant). Itone is considered to be a useful drug in computer vision syndrome.

  3. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
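
    The recursive subdivision step can be sketched in a few lines of Python; the circular test body, corner-straddle intersection test, and refinement depth are illustrative, and cut-cell clipping is omitted.

      # Recursive Cartesian cell subdivision around a unit-circle test body.
      class Cell:
          def __init__(self, x, y, size, depth):
              self.x, self.y, self.size, self.depth = x, y, size, depth
              self.children = []

          def intersects_body(self):
              # corners straddle the circle boundary? (crude intersection test)
              corners = [(self.x + i * self.size, self.y + j * self.size)
                         for i in (0, 1) for j in (0, 1)]
              inside = [cx * cx + cy * cy < 1.0 for cx, cy in corners]
              return any(inside) and not all(inside)

          def refine(self, max_depth):
              if self.depth < max_depth and self.intersects_body():
                  half = self.size / 2.0
                  self.children = [Cell(self.x + i * half, self.y + j * half,
                                        half, self.depth + 1)
                                   for i in (0, 1) for j in (0, 1)]
                  for child in self.children:
                      child.refine(max_depth)

      root = Cell(-2.0, -2.0, 4.0, 0)   # one cell encompassing the flow domain
      root.refine(max_depth=6)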

  4. Effect of Heliox on Respiratory Outcomes during Rigid Bronchoscopy in Term Lambs.

    PubMed

    Sowder, Justin C; Dahl, Mar Janna; Zuspan, Kaitlin R; Albertine, Kurt H; Null, Donald M; Barneck, Mitchell D; Grimmer, J Fredrik

    2018-03-01

    Objective: To (1) compare physiologic changes during rigid bronchoscopy during spontaneous and mechanical ventilation and (2) evaluate the efficacy of a helium-oxygen (heliox) gas mixture as compared with room air during rigid bronchoscopy. Study Design: Crossover animal study evaluating physiologic parameters during rigid bronchoscopy. Outcomes were compared with predicted computational fluid analysis. Setting: Simulated ventilation via computational fluid dynamics analysis and term lambs undergoing rigid bronchoscopy. Methods: Respiratory and physiologic outcomes were analyzed in a lamb model simulating bronchoscopy during foreign body aspiration to compare heliox with room air. The main outcome measures were blood oxygen saturation, heart rate, blood pressure, partial pressure of oxygen, and partial pressure of carbon dioxide. Computational fluid dynamics analysis was performed with SOLIDWORKS within a rigid pediatric bronchoscope during simulated ventilation comparing heliox with room air. Results: For room air, lambs desaturated within 3 minutes during mechanical ventilation versus normal oxygen saturation during spontaneous ventilation (P = .01). No improvement in respiratory outcomes was seen between heliox and room air during mechanical ventilation. Computational fluid dynamics analysis demonstrates increased turbulence within size 3.5 bronchoscopes when comparing heliox and room air. Meaningful comparisons could not be made due to the intolerance of the lambs to heliox in vivo. Conclusion: During mechanical ventilation on room air, lambs desaturate more quickly during rigid bronchoscopy on settings that should be adequate. Heliox does not improve ventilation during rigid bronchoscopy.

  5. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    PubMed

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R; Nestle, Frank O

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  6. Integration of Lyoplate Based Flow Cytometry and Computational Analysis for Standardized Immunological Biomarker Discovery

    PubMed Central

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A. Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R.; Nestle, Frank O.

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases. PMID:23843942

  7. Blind topological measurement-based quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-01-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10^-3, which is comparable to that (7.5 × 10^-3) of non-blind topological quantum computation. As the error per gate of the order 10^-3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  8. Blind topological measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-09-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3×10^-3, which is comparable to that (7.5×10^-3) of non-blind topological quantum computation. As the error per gate of the order 10^-3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  9. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search of yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  10. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This new approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). Some contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy according to the DNA computing algorithm. PMID:23935409
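
    For reference, a minimal Python sketch of the QPSO update used as the outer optimizer; here it minimizes a simple sphere function rather than tuning DNA-computing parameters, and the swarm size and contraction-expansion coefficient beta are illustrative.

      import numpy as np

      def qpso(f, dim, n_particles=30, iters=200, beta=0.75, bounds=(-5.0, 5.0)):
          rng = np.random.default_rng(0)
          x = rng.uniform(bounds[0], bounds[1], size=(n_particles, dim))
          pbest = x.copy()                                # personal bests
          pbest_val = np.apply_along_axis(f, 1, x)
          for _ in range(iters):
              g = pbest[np.argmin(pbest_val)]             # global best
              mbest = pbest.mean(axis=0)                  # mean best position
              phi = rng.random((n_particles, dim))
              p = phi * pbest + (1.0 - phi) * g           # local attractors
              u = rng.random((n_particles, dim))
              sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
              x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
              vals = np.apply_along_axis(f, 1, x)
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
          return pbest[np.argmin(pbest_val)], pbest_val.min()

      best_x, best_val = qpso(lambda v: np.sum(v ** 2), dim=4)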

  11. CFD Computations for a Generic High-Lift Configuration Using TetrUSS

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Abdol-Hamid, Khaled S.; Parlette, Edward B.

    2011-01-01

    Assessment of the accuracy of computational results for a generic high-lift trapezoidal wing with a single slotted flap and slat is presented. The paper is closely aligned with the focus of the 1st AIAA CFD High Lift Prediction Workshop (HiLiftPW-1) which was to assess the accuracy of CFD methods for multi-element high-lift configurations. The unstructured grid Reynolds-Averaged Navier-Stokes solver TetrUSS/USM3D is used for the computational results. USM3D results are obtained assuming fully turbulent flow using the Spalart-Allmaras (SA) and Shear Stress Transport (SST) turbulence models. Computed solutions have been obtained at seven different angles of attack ranging from 6 to 37 degrees. Three grids providing progressively higher grid resolution are used to quantify the effect of grid resolution on the lift, drag, pitching moment, surface pressure and stall angle. SA results, as compared to SST results, exhibit better agreement with the measured data. However, both turbulence models under-predict upper surface pressures near the wing tip region.

  12. The research of collapsibility test and FEA of collapse deformation in loess collapsible under overburden pressure

    NASA Astrophysics Data System (ADS)

    yu, Zhang; hui, Li; guibo, Bao; wuyu, Zhang; ningshan, Jiang; xiaoyun, Yang

    2018-05-01

    Collapsibility tests in the field may show large errors relative to computed results [1-4]. The authors compare the single-line and double-line methods and then compare both against the field result. The purpose is to reduce the error between measured and computed values; a way to decrease the error is proposed by taking into account the influence of matric suction on the unsaturated soil in the finite element analysis. A field test was completed to verify the reasonableness of this method, to obtain some regularities of the development of collapse deformation, and to supply a calculation basis for engineering design and for forecasting in emergency situations.

  13. Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads are compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multiblade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The results obtained with this approximate analysis do not agree with the measured dynamic blade load amplifications at or close to resonance conditions. The results of the second version, which accounts for periodic coefficients while solving the equations by a time history integration, compare well with the measured data.

  14. An analysis of the viscous flow through a compact radial turbine by the average passage approach

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Beach, Timothy A.

    1990-01-01

    A steady, three-dimensional viscous average passage computer code is used to analyze the flow through a compact radial turbine rotor. The code models the flow as spatially periodic from blade passage to blade passage. Results from the code using varying computational models are compared with each other and with experimental data. These results include blade surface velocities and pressures, exit vorticity and entropy contour plots, shroud pressures, and spanwise exit total temperature, total pressure, and swirl distributions. The three computational models used are inviscid, viscous with no blade clearance, and viscous with blade clearance. It is found that modeling viscous effects improves correlation with experimental data, while modeling hub and tip clearances further improves some comparisons. Experimental results such as a local maximum of exit swirl, reduced exit total pressures at the walls, and exit total temperature magnitudes are explained by interpretation of the flow physics and computed secondary flows. Trends in the computed blade loading diagrams are similarly explained.

  15. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
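
    The core reconstruction step, placing a pixel in three-dimensional space by intersecting its camera ray with the known light-sheet plane, can be sketched in Python as follows; the camera pose, intrinsics, and sheet plane are illustrative values, not those of the experiments.

      import numpy as np

      cam_pos = np.array([0.0, -2.0, 0.5])        # camera center in meters
      R = np.array([[1.0, 0.0, 0.0],              # rotation: camera axes to
                    [0.0, 0.0, 1.0],              # world axes (camera looks
                    [0.0, 1.0, 0.0]])             # along world +y)
      f_px, cx, cy = 1000.0, 320.0, 240.0         # focal length, principal point

      sheet_point = np.array([0.0, 0.0, 0.0])     # a point on the light sheet
      sheet_normal = np.array([0.0, 1.0, 0.0])    # light-sheet plane normal

      def pixel_to_world(u, v):
          ray_cam = np.array([(u - cx) / f_px, (v - cy) / f_px, 1.0])
          d = R @ ray_cam                          # ray direction, world frame
          t = sheet_normal @ (sheet_point - cam_pos) / (sheet_normal @ d)
          return cam_pos + t * d                   # 3-D point on the sheet

      print(pixel_to_world(320.0, 240.0))          # center pixel -> [0, 0, 0.5]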

  16. A Computing Method for Sound Propagation Through a Nonuniform Jet Stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High speed computer methods offer a unique potential for dealing with complex real life physical systems whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with a special emphasis on its suitability for numerical solutions on fast computers. The technique is very attractive because the resulting equations are linear in nonwaving variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.

  17. Measurement and computer simulation of antennas on ships and aircraft for results of operational reliability

    NASA Astrophysics Data System (ADS)

    Kubina, Stanley J.

    1989-09-01

    The review of the status of computational electromagnetics by Miller and the exposition by Burke of the developments in one of the more important computer codes in the application of the electric field integral equation method, the Numerical Electromagnetic Code (NEC), coupled with Molinet's summary of progress in techniques based on the Geometrical Theory of Diffraction (GTD), provide a clear perspective on the maturity of the modern discipline of computational electromagnetics and its potential. Audone's exposition of the application to the computation of Radar Scattering Cross-section (RCS) is an indication of the breadth of practical applications, and his exploitation of modern near-field measurement techniques reminds one of progress in the measurement discipline, which is essential to the validation or calibration of computational modeling methodology when applied to complex structures such as aircraft and ships. The latter monograph also presents some comparison results with computational models. Some of the results presented for scale model and flight measurements show serious disagreements in the lobe structure which would require detailed examination. This also applies to the radiation patterns obtained by flight measurement compared with those obtained using wire-grid models and integral equation modeling methods. In the examples which follow, an attempt is made to match measurement results completely over the entire 2 to 30 MHz HF range for antennas on a large patrol aircraft. The problems of validating computer models of HF antennas on a helicopter and of using computer models to generate radiation pattern information that cannot be obtained by measurements are discussed. The use of NEC computer models to analyze top-side ship configurations, where measurement results are not available and only self-validation measures or, at best, comparisons with an alternate GTD computer modeling technique are possible, is also discussed.

  18. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing efficiencies of several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or a Common Lisp system.

  19. Comparative Study Of Four Models Of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, Florian R.

    1996-01-01

    Report presents comparative study of four popular eddy-viscosity models of turbulence. Computations reported for three different adverse pressure-gradient flowfields. Detailed comparison of numerical results and experimental data given. Following models tested: Baldwin-Lomax, Johnson-King, Baldwin-Barth, and Wilcox.

  20. NEUROBIOLOGY OF ECONOMIC CHOICE: A GOOD-BASED MODEL

    PubMed Central

    Padoa-Schioppa, Camillo

    2012-01-01

    Traditionally the object of economic theory and experimental psychology, economic choice recently became a lively research focus in systems neuroscience. Here I summarize the emerging results and I propose a unifying model of how economic choice might function at the neural level. Economic choice entails comparing options that vary on multiple dimensions. Hence, while choosing, individuals integrate different determinants into a subjective value; decisions are then made by comparing values. According to the good-based model, the values of different goods are computed independently of one another, which implies transitivity. Values are not learned as such, but rather computed at the time of choice. Most importantly, values are compared within the space of goods, independent of the sensori-motor contingencies of choice. Evidence from neurophysiology, imaging and lesion studies indicates that abstract representations of value exist in the orbitofrontal and ventromedial prefrontal cortices. The computation and comparison of values may thus take place within these regions. PMID:21456961

  1. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    NASA Astrophysics Data System (ADS)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach has been proposed to reduce the computations involved in ISAR imaging, which uses a harmonic wavelet (HW)-based time-frequency representation (TFR). Since the HW-based TFR falls into the category of nonparametric time-frequency (T-F) analysis tools, it is computationally efficient compared to parametric T-F analysis tools such as the adaptive joint time-frequency transform (AJTFT), the adaptive wavelet transform (AWT), and the evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with ISAR imaging by other nonparametric T-F analysis tools such as the short-time Fourier transform (STFT) and the Choi-Williams distribution (CWD). In ISAR imaging, the use of the HW-based TFR provides similar or better results with a significant (92%) computational advantage compared to that obtained by CWD. The ISAR images thus obtained are identified using a neural network-based classification scheme with a feature set invariant to translation, rotation, and scaling.
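
    A simplified, positive-frequency-only version of the FFT-based harmonic wavelet decomposition, in the spirit of Newland's algorithm, can be written in Python as below; the chirp test signal and the octave banding details are assumptions of this sketch, and a real ISAR processor would apply such a transform along the cross-range dimension.

      import numpy as np

      def harmonic_wavelet_levels(x):
          # split the FFT into octave bands [2^j, 2^(j+1)) and inverse-transform
          # each band: the results are the complex wavelet coefficients by level
          N = len(x)                   # assumed to be a power of two
          X = np.fft.fft(x)
          levels, j = [], 0
          while 2 ** (j + 1) <= N // 2:
              band = X[2 ** j: 2 ** (j + 1)]
              levels.append(np.fft.ifft(band))
              j += 1
          return levels                # cost: a handful of small FFTs

      t = np.arange(1024) / 1024.0
      x = np.cos(2 * np.pi * (50.0 * t + 100.0 * t ** 2))   # chirp test signal
      for j, c in enumerate(harmonic_wavelet_levels(x)):
          print(f"level {j}: {len(c)} coeffs, peak magnitude {np.abs(c).max():.2f}")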

  2. An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Bonhaus, Daryl L.

    1994-01-01

    An implicit, Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over a RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for 3-element and 4-element airfoil configurations. For the calculations, two one-equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the 3-element configuration. For the 4-element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.

  3. A Craniomaxillofacial Surgical Assistance Workstation for Enhanced Single-Stage Reconstruction Using Patient-Specific Implants.

    PubMed

    Murphy, Ryan J; Liacouras, Peter C; Grant, Gerald T; Wolfe, Kevin C; Armand, Mehran; Gordon, Chad R

    2016-11-01

    Craniomaxillofacial reconstruction with patient-specific, customized craniofacial implants (CCIs) is ideal for skeletal defects involving areas of aesthetic concern-the non-weight-bearing facial skeleton, temporal skull, and/or frontal-forehead region. Results to date are superior to a variety of "off-the-shelf" materials, but require a protocol computed tomography scan and preexisting defect for computer-assisted design/computer-assisted manufacturing of the CCI. The authors developed a craniomaxillofacial surgical assistance workstation to address these challenges and intraoperatively guide CCI modification for an unknown defect size/shape. First, the surgeon designed an oversized CCI based on his/her surgical plan. Intraoperatively, the surgeon resected the bone and digitized the resection using a navigation pointer. Next, a projector displayed the limits of the craniofacial bone defect onto the prefabricated, oversized CCI for the size modification process; the surgeon followed the projected trace to modify the implant. A cadaveric study compared the standard technique (n = 1) to the experimental technique (n = 5) using surgical time and implant fit. The technology reduced the time and effort needed to resize the oversized CCI by an order of magnitude as compared with the standard manual resizing process. Implant fit was consistently better for the computer-assisted case compared with the control by at least 30%, requiring only 5.17 minutes in the computer-assisted cases compared with 35 minutes for the control. This approach demonstrated improvement in surgical time and accuracy of CCI-based craniomaxillofacial reconstruction compared with previously reported methods. The craniomaxillofacial surgical assistance workstation will provide craniofacial surgeons a computer-assisted technology for effective and efficient single-stage reconstruction when exact craniofacial bone defect sizes are unknown.

  4. Experimental and computational models of neurite extension at a choice point in response to controlled diffusive gradients

    NASA Astrophysics Data System (ADS)

    Catig, G. C.; Figueroa, S.; Moore, M. J.

    2015-08-01

    Objective. Axons are guided toward desired targets through a series of choice points that they navigate by sensing cues in the cellular environment. A better understanding of how microenvironmental factors influence neurite growth during development can inform strategies to address nerve injury. Therefore, there is a need for biomimetic models to systematically investigate the influence of guidance cues at such choice points. Approach. We ran an adapted in silico biased-turning axon growth model under the influence of nerve growth factor (NGF) and compared the results to corresponding in vitro experiments. We examined whether growth simulations were predictive of neurite population behavior at a choice point. We used a biphasic micropatterned hydrogel system consisting of an outer cell-restrictive mold that enclosed a bifurcated cell-permissive region, and placed a well near one bifurcating end to allow proteins to diffuse and form a gradient. Experimental diffusion profiles in these constructs were used to validate a computational diffusion model that utilized experimentally measured diffusion coefficients in hydrogels. The computational diffusion model was then used to establish defined soluble gradients within the permissive region of the hydrogels and maintain the profiles in physiological ranges for an extended period of time. Computational diffusion profiles informed the neurite growth model, which was compared with neurite growth experiments in the bifurcating hydrogel constructs. Main results. Results indicated that when applied to the constrained choice-point geometry, the biased-turning model predicted experimental behavior closely. Results for both simulated and in vitro neurite growth studies showed a significant chemoattractive response toward the bifurcated end containing an NGF gradient compared to the control, though some neurites were found in the end with no NGF gradient. Significance. The integrated model of neurite growth we describe will allow comparison of experimental studies against growth cone guidance computational models applied to axon pathfinding at choice points.
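
    A hedged sketch of the kind of gradient computation described above: an explicit finite-difference solution of 1D diffusion from a source well held at constant concentration, the mechanism used to establish a protein gradient in a hydrogel channel. The diffusion coefficient, geometry, and duration below are assumed values for illustration, not the paper's measured ones:

    ```python
    import numpy as np

    # Explicit finite-difference solution of 1D diffusion from a source well
    D = 1e-10                    # diffusion coefficient, m^2/s (assumed)
    L = 5e-3                     # channel length, m (assumed)
    nx = 101
    dx = L / (nx - 1)
    dt = 0.4 * dx ** 2 / D       # within the explicit stability limit 0.5*dx^2/D

    c = np.zeros(nx)
    c[0] = 1.0                   # well end: constant (normalized) source

    for _ in range(int(12 * 3600 / dt)):        # 12 hours of diffusion
        c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0] = 1.0               # source maintained
        c[-1] = c[-2]            # no-flux far boundary

    print("concentration at 1/4, 1/2, 3/4 of channel:",
          c[nx // 4], c[nx // 2], c[3 * nx // 4])
    ```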

  5. Adaptive computations of multispecies mixing between scramjet nozzle flows and hypersonic freestream

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.; Pittman, James L.

    1989-01-01

    The objective of this paper is to compute the expansion of a supersonic flow through an internal-external nozzle and its viscous mixing with the hypersonic flow of air. The supersonic jet may be that of a multispecies gas other than air. Calculations are performed for one case where both flows are those of air, and another case where a mixture of freon-12 and argon is discharged supersonically to mix with the hypersonic airflow. Comparisons are made between these two cases with respect to gas compositions, and fixed versus flow-adaptive grids. All the computational results are compared successfully with the wind-tunnel test results.

  6. An efficient computational method for the approximate solution of nonlinear Lane-Emden type equations arising in astrophysics

    NASA Astrophysics Data System (ADS)

    Singh, Harendra

    2018-04-01

    The key purpose of this article is to introduce an efficient computational method for the approximate solution of homogeneous as well as non-homogeneous nonlinear Lane-Emden type equations. Using the proposed computational method, the given nonlinear equation is converted into a set of nonlinear algebraic equations whose solution gives the approximate solution of the Lane-Emden type equation. Various nonlinear cases of Lane-Emden type equations, such as the standard Lane-Emden equation, the isothermal gas sphere equation, and the white-dwarf equation, are discussed. Results are compared with some well-known numerical methods, and it is observed that our results are more accurate.
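
    For reference, the Lane-Emden equation is θ'' + (2/ξ)θ' + θⁿ = 0 with θ(0) = 1, θ'(0) = 0, and for n = 1 it has the exact solution θ = sin(ξ)/ξ. A simple RK4 integration (a baseline check, not the author's proposed method) illustrates how such approximate solutions are typically verified:

    ```python
    import numpy as np

    def lane_emden(n, xi_max=3.0, h=1e-3):
        """Integrate theta'' + (2/xi) theta' + theta**n = 0,
        theta(0) = 1, theta'(0) = 0, with classical RK4.
        Near xi = 0 the series theta ~ 1 - xi**2/6 avoids the
        coordinate singularity of the 2/xi term.
        """
        def rhs(xi, y):
            theta, dtheta = y
            tn = np.sign(theta) * abs(theta) ** n   # guard fractional powers
            return np.array([dtheta, -2.0 / xi * dtheta - tn])

        xi = h
        y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])  # series start
        while xi < xi_max:
            k1 = rhs(xi, y)
            k2 = rhs(xi + h / 2, y + h / 2 * k1)
            k3 = rhs(xi + h / 2, y + h / 2 * k2)
            k4 = rhs(xi + h, y + h * k3)
            y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            xi += h
        return xi, y[0]

    xi, theta = lane_emden(n=1)          # n = 1 has exact solution sin(xi)/xi
    print(theta, np.sin(xi) / xi)        # the two should agree closely
    ```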

  7. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool that can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. One way of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models and high-order, high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse-mesh finite element models to provide accuracy comparable to that of fine-mesh finite element models while maintaining a relatively low computational cost. The method reduces the computational expense required to solve the linear and nonlinear constitutive models commonly used in heart valve mechanics simulations while continuing to account for both large and infinitesimal deformations. This continuum model is developed based on a least-squares algorithm coupled with the finite difference method, under the assumption that the components of the strain tensor are available at all nodes of the finite element mesh. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
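
    The core idea, recovering a smoother field (and hence better derivative or strain estimates) from coarse nodal values by a least-squares fit, can be illustrated in one dimension. This is a sketch of the general approach only, not the paper's formulation; the field and node layout are made up:

    ```python
    import numpy as np

    nodes = np.linspace(0.0, 1.0, 7)          # coarse "mesh" nodes
    u = np.sin(np.pi * nodes)                 # nodal solution values

    V = np.vander(nodes, 3, increasing=True)  # columns: 1, x, x^2
    coef, *_ = np.linalg.lstsq(V, u, rcond=None)   # least-squares quadratic fit

    x = np.linspace(0.0, 1.0, 41)             # fine evaluation grid
    u_fit = coef[0] + coef[1] * x + coef[2] * x ** 2
    du_fit = coef[1] + 2.0 * coef[2] * x      # derivative of the fitted field

    print("max field error vs sin:", np.max(np.abs(u_fit - np.sin(np.pi * x))))
    print("fitted derivative at x=0.5:", du_fit[20], "(exact: 0.0)")
    ```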

  8. Computed tomography or rhinoscopy as the first-line procedure for suspected nasal tumor: a pilot study.

    PubMed

    Finck, Marlène; Ponce, Frédérique; Guilbaud, Laurent; Chervier, Cindy; Floch, Franck; Cadoré, Jean-Luc; Chuzel, Thomas; Hugonnard, Marine

    2015-02-01

    There are no evidence-based guidelines as to whether computed tomography (CT) or endoscopy should be selected as the first-line procedure when a nasal tumor is suspected in a dog or a cat and only one examination can be performed. Computed tomography and rhinoscopic features of 17 dogs and 5 cats with a histopathologically or cytologically confirmed nasal tumor were retrospectively reviewed. The level of suspicion for nasal neoplasia after CT and/or rhinoscopy was compared to the definitive diagnosis. Twelve animals underwent CT, 14 underwent rhinoscopy, and 4 both examinations. Of the 12 CT examinations performed, 11 (92%) resulted in the conclusion that a nasal tumor was the most likely diagnosis compared with 9/14 (64%) for rhinoscopies. Computed tomography appeared to be more reliable than rhinoscopy for detecting nasal tumors and should therefore be considered as the first-line procedure.

  9. Computed tomography or rhinoscopy as the first-line procedure for suspected nasal tumor: A pilot study

    PubMed Central

    Finck, Marlène; Ponce, Frédérique; Guilbaud, Laurent; Chervier, Cindy; Floch, Franck; Cadoré, Jean-Luc; Chuzel, Thomas; Hugonnard, Marine

    2015-01-01

    There are no evidence-based guidelines as to whether computed tomography (CT) or endoscopy should be selected as the first-line procedure when a nasal tumor is suspected in a dog or a cat and only one examination can be performed. Computed tomography and rhinoscopic features of 17 dogs and 5 cats with a histopathologically or cytologically confirmed nasal tumor were retrospectively reviewed. The level of suspicion for nasal neoplasia after CT and/or rhinoscopy was compared to the definitive diagnosis. Twelve animals underwent CT, 14 underwent rhinoscopy, and 4 both examinations. Of the 12 CT examinations performed, 11 (92%) resulted in the conclusion that a nasal tumor was the most likely diagnosis compared with 9/14 (64%) for rhinoscopies. Computed tomography appeared to be more reliable than rhinoscopy for detecting nasal tumors and should therefore be considered as the first-line procedure. PMID:25694669

  10. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 3: Program correlation with full scale hardware tests

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.

    1980-01-01

    The results of a series of full scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data are compared over a range of speeds up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.

  11. Comparing the Similarity of Responses Received from Studies in Amazon’s Mechanical Turk to Studies Conducted Online and with Direct Recruitment

    PubMed Central

    Bartneck, Christoph; Duenser, Andreas; Moltchanova, Elena; Zawieska, Karolina

    2015-01-01

    Computer and internet based questionnaires have become a standard tool in Human-Computer Interaction research and other related fields, such as psychology and sociology. Amazon’s Mechanical Turk (AMT) service is a new method of recruiting participants and conducting certain types of experiments. This study compares whether participants recruited through AMT give different responses than participants recruited through an online forum or recruited directly on a university campus. Moreover, we compare whether a study conducted within AMT results in different responses compared to a study for which participants are recruited through AMT but which is conducted using an external online questionnaire service. The results of this study show that there is a statistical difference between results obtained from participants recruited through AMT compared to the results from participants recruited on campus or through online forums. We do, however, argue that this difference is so small that it has no practical consequence. There was no significant difference between running the study within AMT compared to running it with an online questionnaire service. This may suggest that AMT is a viable and economical option for recruiting participants and for conducting studies, as setting up and running a study with AMT generally requires less effort and time compared to other frequently used methods. We discuss our findings as well as limitations of using AMT for empirical studies. PMID:25876027

  12. Comparing the similarity of responses received from studies in Amazon's Mechanical Turk to studies conducted online and with direct recruitment.

    PubMed

    Bartneck, Christoph; Duenser, Andreas; Moltchanova, Elena; Zawieska, Karolina

    2015-01-01

    Computer and internet based questionnaires have become a standard tool in Human-Computer Interaction research and other related fields, such as psychology and sociology. Amazon's Mechanical Turk (AMT) service is a new method of recruiting participants and conducting certain types of experiments. This study compares whether participants recruited through AMT give different responses than participants recruited through an online forum or recruited directly on a university campus. Moreover, we compare whether a study conducted within AMT results in different responses compared to a study for which participants are recruited through AMT but which is conducted using an external online questionnaire service. The results of this study show that there is a statistical difference between results obtained from participants recruited through AMT compared to the results from participants recruited on campus or through online forums. We do, however, argue that this difference is so small that it has no practical consequence. There was no significant difference between running the study within AMT compared to running it with an online questionnaire service. This may suggest that AMT is a viable and economical option for recruiting participants and for conducting studies, as setting up and running a study with AMT generally requires less effort and time compared to other frequently used methods. We discuss our findings as well as limitations of using AMT for empirical studies.

  13. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada, for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
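
    Annealed importance sampling estimates a normalizing constant by averaging weights accumulated along a sequence of tempered distributions. Below is a minimal sketch on a toy 1D problem where the answer is known analytically; this is the generic AIS recipe, not the paper's geoacoustic implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_target(x):
        # Unnormalized N(0, 0.5^2); its normalizer is sqrt(2*pi)*0.5,
        # the prior's is sqrt(2*pi), so the true evidence ratio is 0.5.
        return -0.5 * x**2 / 0.5**2

    def log_prior(x):
        return -0.5 * x**2            # unnormalized N(0, 1)

    def ais_log_evidence(n_chains=2000, n_temps=50):
        """Estimate log(Z_target / Z_prior) by annealed importance sampling.

        Intermediate densities pi_b ~ prior^(1-b) * target^b with b rising
        from 0 to 1; weights accumulate ratios of successive tempered
        densities, and a Metropolis move equilibrates each chain.
        """
        betas = np.linspace(0.0, 1.0, n_temps)
        x = rng.standard_normal(n_chains)        # exact samples from prior
        logw = np.zeros(n_chains)
        for b0, b1 in zip(betas[:-1], betas[1:]):
            logw += (b1 - b0) * (log_target(x) - log_prior(x))
            prop = x + 0.5 * rng.standard_normal(n_chains)   # Metropolis step
            logp = lambda z: (1 - b1) * log_prior(z) + b1 * log_target(z)
            accept = np.log(rng.random(n_chains)) < logp(prop) - logp(x)
            x = np.where(accept, prop, x)
        m = logw.max()
        return np.log(np.mean(np.exp(logw - m))) + m   # log-mean-exp of weights

    print("AIS estimate:", ais_log_evidence(), " exact:", np.log(0.5))
    ```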

  14. A computational study on oblique shock wave-turbulent boundary layer interaction

    NASA Astrophysics Data System (ADS)

    Joy, Md. Saddam Hossain; Rahman, Saeedur; Hasan, A. B. M. Toufique; Ali, M.; Mitsutake, Y.; Matsuo, S.; Setoguchi, T.

    2016-07-01

    A numerical computation of an oblique shock wave incident on a turbulent boundary layer was performed for a free stream flow of air at M∞ = 2.0 and Re1 = 10.5×10⁶ m⁻¹. The oblique shock wave was generated from an 8° wedge. A Reynolds-averaged Navier-Stokes (RANS) simulation with the k-ω SST turbulence model was first utilized for the two-dimensional (2D) steady case. The results were compared with the experiment at the same flow conditions. Further, to capture the unsteadiness, a 2D Large Eddy Simulation (LES) with the WMLES sub-grid scale model was performed, which showed the unsteady effects. The frequency of the shock oscillation was computed and found to be comparable with the experimental measurement.
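
    For a given wedge angle and Mach number, the oblique shock angle follows from the classical θ-β-M relation. A small sketch solving it for the stated conditions (M = 2, 8° wedge; γ = 1.4 is assumed):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    gamma = 1.4
    M1 = 2.0
    theta = np.radians(8.0)        # wedge (flow deflection) angle

    def f(beta):
        """theta-beta-M relation: zero when beta is the shock angle."""
        return (np.tan(theta)
                - 2.0 / np.tan(beta) * (M1**2 * np.sin(beta)**2 - 1.0)
                / (M1**2 * (gamma + np.cos(2.0 * beta)) + 2.0))

    mu = np.arcsin(1.0 / M1)                             # Mach angle, lower bound
    beta_weak = brentq(f, mu + 1e-6, np.radians(64.0))   # weak-shock branch
    print("weak shock angle: %.2f deg" % np.degrees(beta_weak))
    print("normal Mach number:", M1 * np.sin(beta_weak))
    ```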

  15. A Two Colorable Fourth Order Compact Difference Scheme and Parallel Iterative Solution of the 3D Convection Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Zhang, Jun; Ge, Lixin; Kouatchou, Jules

    2000-01-01

    A new fourth order compact difference scheme for the three dimensional convection diffusion equation with variable coefficients is presented. The novelty of this new difference scheme is that it only requires 15 grid points and that it can be decoupled with two colors. The entire computational grid can be updated in two parallel subsweeps with a Gauss-Seidel type iterative method. This is compared with the known 19-point fourth order compact difference scheme, which requires four colors to decouple the computational grid. Numerical results, with multigrid methods implemented on a shared memory parallel computer, are presented to compare the 15-point and the 19-point fourth order compact schemes.
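
    Two-colorability matters because every point of one color depends only on points of the other color, so each half-sweep can be updated simultaneously. The sketch below illustrates the coloring idea with red-black Gauss-Seidel on the familiar 5-point Laplacian, not the paper's 15-point fourth order scheme:

    ```python
    import numpy as np

    n = 64
    u = np.zeros((n, n))
    u[0, :] = 1.0                      # Dirichlet boundary: heated top edge

    ii, jj = np.meshgrid(range(n), range(n), indexing="ij")
    red = (ii + jj) % 2 == 0
    interior = (ii > 0) & (ii < n - 1) & (jj > 0) & (jj < n - 1)

    for sweep in range(2000):
        for color in (red, ~red):      # two parallel subsweeps per iteration
            mask = color & interior
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = avg[mask]        # all points of one color at once

    print("center value:", u[n // 2, n // 2])   # ~0.25 by symmetry
    ```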

  16. Comparing Postsecondary Marketing Student Performance on Computer-Based and Handwritten Essay Tests

    ERIC Educational Resources Information Center

    Truell, Allen D.; Alexander, Melody W.; Davis, Rodney E.

    2004-01-01

    The purpose of this study was to determine if there were differences in postsecondary marketing student performance on essay tests based on test format (i.e., computer-based or handwritten). Specifically, the variables of performance, test completion time, and gender were explored for differences based on essay test format. Results of the study…

  17. Adaptive Decision Aiding in Computer-Assisted Instruction: Adaptive Computerized Training System (ACTS).

    ERIC Educational Resources Information Center

    Hopf-Weichel, Rosemarie; And Others

    This report describes results of the first year of a three-year program to develop and evaluate a new Adaptive Computerized Training System (ACTS) for electronics maintenance training. ACTS incorporates an adaptive computer program that learns the student's diagnostic and decision value structure, compares it to that of an expert, and adapts the…

  18. Three-dimensional Computational Fluid Dynamics Investigation of a Spinning Helicopter Slung Load

    NASA Technical Reports Server (NTRS)

    Theron, J. N.; Duque, E. P. N.; Cicolani, L.; Halsey, R.

    2005-01-01

    After performing steady-state Computational Fluid Dynamics (CFD) calculations using OVERFLOW to validate the CFD method against static wind-tunnel data of a box-shaped cargo container, the same setup was used to investigate unsteady flow with a moving body. Results were compared to previously collected flight-test data in which the container is spinning.

  19. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    ERIC Educational Resources Information Center

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to covariance (ANCOVA) statistical analysis. Results indicate that the SE-plus-diagram…

  20. Critical Thinking Outcomes of Computer-Assisted Instruction versus Written Nursing Process.

    ERIC Educational Resources Information Center

    Saucier, Bonnie L.; Stevens, Kathleen R.; Williams, Gail B.

    2000-01-01

    Nursing students (n=43) who used clinical case studies via computer-assisted instruction (CAI) were compared with 37 who used the written nursing process (WNP). California Critical Thinking Skills Test results did not show significant increases in critical thinking. The WNP method was more time consuming; the CAI group was more satisfied. Use of…

  1. Efficiency of including first-generation information in second-generation ranking and selection: results of computer simulation.

    Treesearch

    T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson

    2006-01-01

    Using computer simulation, we evaluated the impact of using first-generation information to increase selection efficiency in a second-generation breeding program. Selection efficiency was compared in terms of increase in rank correlation between estimated and true breeding values (i.e., ranking accuracy), reduction in coefficient of variation of correlation...

  2. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    ERIC Educational Resources Information Center

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  3. Evaluating the Comparability of Paper- and Computer-Based Science Tests across Sex and SES Subgroups

    ERIC Educational Resources Information Center

    Randall, Jennifer; Sireci, Stephen; Li, Xueming; Kaira, Leah

    2012-01-01

    As access and reliance on technology continue to increase, so does the use of computerized testing for admissions, licensure/certification, and accountability exams. Nonetheless, full computer-based test (CBT) implementation can be difficult due to limited resources. As a result, some testing programs offer both CBT and paper-based test (PBT)…

  4. The Effects of Cooperative and Individualistic Learning Structures on Achievement in a College-Level Computer-Aided Drafting Course

    ERIC Educational Resources Information Center

    Swab, A. Geoffrey

    2012-01-01

    This study of cooperative learning in post-secondary engineering education investigated achievement of engineering students enrolled in two intact sections of a computer-aided drafting (CAD) course. Quasi-experimental and qualitative methods were employed in comparing student achievement resulting from out-of-class cooperative and individualistic…

  5. PISA 2012: How Do Results for the Paper and Computer Tests Compare?

    ERIC Educational Resources Information Center

    Jerrim, John

    2016-01-01

    The Programme for International Student Assessment (PISA) is an important cross-national study of 15-year-olds' academic achievement. Although it has traditionally been conducted using paper-and-pencil tests, the vast majority of countries will use computer-based assessment from 2015. In this paper, we consider how cross-country comparisons of children's…

  6. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
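
    A sketch of the surface-fit idea: replace repeated double interpolation in a table z(x, y) by a single least-squares fit in an orthogonal (here Chebyshev) polynomial basis, which is then cheap to evaluate anywhere. The tabulated surface below is synthetic, standing in for equation-of-state data:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    x = np.linspace(-1, 1, 20)
    y = np.linspace(-1, 1, 20)
    X, Y = np.meshgrid(x, y)
    Z = np.exp(0.5 * X) * np.cos(2.0 * Y)            # "tabulated" surface

    # Tensor-product Chebyshev basis up to degree 5 in each variable
    deg = 5
    A = np.stack([C.chebval(X.ravel(), np.eye(deg + 1)[i])
                  * C.chebval(Y.ravel(), np.eye(deg + 1)[j])
                  for i in range(deg + 1) for j in range(deg + 1)], axis=1)
    coef, *_ = np.linalg.lstsq(A, Z.ravel(), rcond=None)

    # Evaluate the fit at an off-grid point; compare with the exact value
    xq, yq = 0.3, -0.7
    basis = np.array([C.chebval(xq, np.eye(deg + 1)[i])
                      * C.chebval(yq, np.eye(deg + 1)[j])
                      for i in range(deg + 1) for j in range(deg + 1)])
    print(basis @ coef, np.exp(0.5 * xq) * np.cos(2.0 * yq))
    ```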

  7. Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Khavaran, Abbas

    2002-01-01

    To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.

  8. Evaluation of Airframe Noise Reduction Concepts via Simulations Using a Lattice Boltzmann Approach

    NASA Technical Reports Server (NTRS)

    Fares, Ehab; Casalino, Damiano; Khorrami, Mehdi R.

    2015-01-01

    Unsteady computations are presented for a high-fidelity, 18% scale, semi-span Gulfstream aircraft model in landing configuration, i.e. flap deflected 39 degrees and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW® to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. In addition to the baseline geometry, which was presented previously, various noise reduction concepts for the flap and main landing gear are simulated. In particular, care is taken to fully resolve the complex geometrical details associated with these concepts in order to capture the resulting intricate local flow field, thus enabling accurate prediction of their acoustic behavior. To determine aeroacoustic performance, the far-field noise predicted with the concepts applied is compared to high-fidelity simulations of the untreated baseline configurations. To assess the accuracy of the computed results, the aerodynamic and aeroacoustic impact of the noise reduction concepts is evaluated numerically and compared to experimental results for the same model. The trends and effectiveness of the simulated noise reduction concepts compare well with measured values and demonstrate that the computational approach is capable of capturing the primary effects of the acoustic treatment on a full aircraft model.

  9. 1-Amino-4-hydroxy-9,10-anthraquinone, an analogue of anthracycline anticancer drugs, interacts with DNA and induces apoptosis in human MDA-MB-231 breast adenocarcinoma cells: Evaluation of structure-activity relationship using computational, spectroscopic and biochemical studies.

    PubMed

    Mondal, Palash; Roy, Sanjay; Loganathan, Gayathri; Mandal, Bitapi; Dharumadurai, Dhanasekaran; Akbarsha, Mohammad A; Sengupta, Partha Sarathi; Chattopadhyay, Shouvik; Guin, Partha Sarathi

    2015-12-01

    The X-ray diffraction and spectroscopic properties of 1-amino-4-hydroxy-9,10-anthraquinone (1-AHAQ), a simple analogue of anthracycline chemotherapeutic drugs, were studied by adopting experimental and computational methods. The optimized geometrical parameters obtained from computational methods were compared with the results of X-ray diffraction analysis and the two were found to be in reasonably good agreement. X-ray diffraction study, Density Functional Theory (DFT) and natural bond orbital (NBO) analysis indicated two types of hydrogen bonds in the molecule. The IR spectra of 1-AHAQ were studied by Vibrational Energy Distribution Analysis (VEDA) using potential energy distribution (PED) analysis. The electronic spectra were studied by TDDFT computation and compared with the experimental results. Experimental and theoretical results corroborated each other to a fair extent. To understand the biological efficacy of 1-AHAQ, it was allowed to interact with calf thymus DNA and the human breast adenocarcinoma cell line MDA-MB-231. It was found that the molecule induces apoptosis in this adenocarcinoma cell line, with little, if any, cytotoxic effect on HBL-100 normal breast epithelial cells.

  10. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    PubMed

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning.

  11. A Computational Icing Effects Study for a Three-Dimensional Wing: Comparison with Experimental Data and Investigation of Spanwise Variation

    NASA Technical Reports Server (NTRS)

    Thompson, D.; Mogili, P.; Chalasani, S.; Addy, H.; Choo, Y.

    2004-01-01

    Steady-state solutions of the Reynolds-averaged Navier-Stokes (RANS) equations were computed using the Cobalt flow solver for a constant-section, rectangular wing based on an extruded two-dimensional glaze ice shape. The one-equation Spalart-Allmaras turbulence model was used. The results were compared with data obtained from a recent wind tunnel test. Computed results indicate that the steady RANS solutions do not accurately capture the recirculating region downstream of the ice accretion, even after a mesh refinement. The resulting predicted reattachment is farther downstream than indicated by the experimental data. Additionally, the solutions computed on a relatively coarse baseline mesh had detailed flow characteristics that were different from those computed on the refined mesh or observed in the experimental data. Steady RANS solutions were also computed to investigate the effects of spanwise variation in the ice shape. The spanwise variation was obtained via a bleeding function that merged the ice shape with the clean wing using a sinusoidal spanwise variation. For these configurations, the results predicted for the extruded shape provided conservative estimates for the performance degradation of the wing. Additionally, the spanwise variation in the ice shape and the resulting differences in the flow fields did not significantly change the location of the primary reattachment.

  12. Viscous computations of cold air/air flow around scramjet nozzle afterbody

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Engelund, Walter C.

    1991-01-01

    The flow field in and around the nozzle afterbody section of a hypersonic vehicle was computationally simulated. The compressible, Reynolds-averaged Navier-Stokes equations were solved by an implicit, finite-volume, characteristic-based method. The computational grids were adapted to the flow as the solutions were developing in order to improve the accuracy. The exhaust gases were assumed to be cold. The computational results were obtained for the two-dimensional longitudinal plane located at the half span of the internal portion of the nozzle for over-expanded and under-expanded conditions. A second set of results was obtained from three-dimensional simulations of a half-span nozzle. The surface pressures were successfully compared with the data obtained from the wind tunnel tests. The results help in understanding this complex flow field and, in turn, should help the design of the nozzle afterbody section.

  13. Brain architecture: a design for natural computation.

    PubMed

    Kaiser, Marcus

    2007-12-15

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.

  14. CUDA-based real time surgery simulation.

    PubMed

    Liu, Youquan; De, Suvranu

    2008-01-01

    In this paper we present a general software platform that enables real time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors, which allows data-parallel computations. Compared to previous GPGPU approaches, CUDA is significantly more flexible, with a C language interface. We report the implementation of both collision detection and consequent deformation computation algorithms. Our test results indicate that CUDA enables a twenty-fold speedup for collision detection and about a fifteen-fold speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.

  15. Computer analysis of railcar vibrations

    NASA Technical Reports Server (NTRS)

    Vlaminck, R. R.

    1975-01-01

    Computer models and techniques for calculating railcar vibrations are discussed along with criteria for vehicle ride optimization. The effect on vibration of car body structural dynamics, suspension system parameters, vehicle geometry, and wheel and rail excitation are presented. Ride quality vibration data collected on the state-of-the-art car and standard light rail vehicle is compared to computer predictions. The results show that computer analysis of the vehicle can be performed for relatively low cost in short periods of time. The analysis permits optimization of the design as it progresses and minimizes the possibility of excessive vibration on production vehicles.

  16. A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm

    PubMed Central

    Chen, Jui-Le; Yang, Chu-Sing

    2013-01-01

    The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named the fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction. The proposed algorithm leverages two high-performance operators: (1) a novel migration (information exchange) operator designed specially for cloud-based environments to reduce the computation time; and (2) an efficient operator aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864

  17. OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux

    NASA Astrophysics Data System (ADS)

    Gunawan, P. H.; Pahlevi, M. R.

    2018-03-01

    The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores; in this research it is used to reduce the computational time of the numerical scheme. The results show that using OpenACC reduces the computational time. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results demonstrate the benefit of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, 28047.500 s and 29269.40 s respectively, a speedup of roughly 49 to 50 times.

  18. Unsteady Aerodynamic Validation Experiences From the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel

    2014-01-01

    The AIAA Aeroelastic Prediction Workshop (AePW) was held in April 2012, bringing together communities of aeroelasticians, computational fluid dynamicists and experimentalists. The extended objective was to assess the state of the art in computational aeroelastic methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. As a step in this process, workshop participants analyzed unsteady aerodynamic and weakly-coupled aeroelastic cases. Forced oscillation and unforced system experiments and computations have been compared for three configurations. This paper emphasizes interpretation of the experimental data, computational results and their comparisons from the perspective of validation of unsteady system predictions. The issues examined in detail are variability introduced by input choices for the computations, post-processing, and static aeroelastic modeling. The final issue addressed is interpreting unsteady information that is present in experimental data that is assumed to be steady, and the resulting consequences on the comparison data sets.

  19. Chamber dimensions and functional assessment with coronary computed tomographic angiography as compared to echocardiography using American Society of Echocardiography guidelines

    PubMed Central

    Rose, Michael; Rubal, Bernard; Hulten, Edward; Slim, Jennifer N; Steel, Kevin; Furgerson, James L; Villines, Todd C

    2014-01-01

    Background: The correlation between normal cardiac chamber linear dimensions measured during retrospective coronary computed tomographic angiography as compared to transthoracic echocardiography using the American Society of Echocardiography guidelines is not well established. Methods: We performed a review from January 2005 to July 2011 to identify subjects with retrospective electrocardiogram-gated coronary computed tomographic angiography scans for chest pain and transthoracic echocardiography with normal cardiac structures performed within 90 days. Dimensions were manually calculated in both imaging modalities in accordance with the American Society of Echocardiography published guidelines. Left ventricular ejection fraction was calculated on echocardiography manually using the Simpson’s formula and by coronary computed tomographic angiography using the end-systolic and end-diastolic volumes. Results: We reviewed 532 studies, rejected 412 and had 120 cases for review with a median time between studies of 7 days (interquartile range (IQR25,75) = 0–22 days) with no correlation between the measurements made by coronary computed tomographic angiography and transthoracic echocardiography using Bland–Altman analysis. We generated coronary computed tomographic angiography cardiac dimension reference ranges for both genders for our population. Conclusion: Our findings represent a step towards generating cardiac chamber dimensions’ reference ranges for coronary computed tomographic angiography as compared to transthoracic echocardiography in patients with normal cardiac morphology and function using the American Society of Echocardiography guideline measurements that are commonly used by cardiologists. PMID:26770706
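
    Bland-Altman analysis summarizes agreement between two measurement methods by the mean difference (bias) and its 95% limits of agreement. A minimal sketch on synthetic paired measurements (the numbers are illustrative, not the study's data):

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bland-Altman agreement statistics for paired measurements a, b:
        mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
        pairwise differences)."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Synthetic paired chamber dimensions (mm) from two modalities
    rng = np.random.default_rng(1)
    echo = rng.normal(45.0, 4.0, 120)
    cta = echo + rng.normal(1.5, 2.0, 120)   # systematic offset plus noise

    bias, lo, hi = bland_altman(cta, echo)
    print(f"bias {bias:.2f} mm, limits of agreement [{lo:.2f}, {hi:.2f}] mm")
    ```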

  20. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out using motion capture systems. A geometrical model is needed in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method, based on tetrahedral decomposition of the chest wall, that is integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 seconds of quiet breathing data were collected from each of them. Results show better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = .94) compared to the agreement between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = .92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
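
    The volume of one such prism can be taken as the area of the marker triangle projected onto a reference (back) plane times the mean marker height above that plane; summing prisms over the marker grid approximates the chest volume at each frame. A sketch under these assumptions, not the paper's exact 82-prism construction:

    ```python
    import numpy as np

    def prism_volume(p1, p2, p3):
        """Volume of a vertical prism under a triangle of chest markers.

        The triangle (p1, p2, p3) is projected onto the z = 0 reference
        (back) plane; the volume is the projected area times the mean
        marker height above that plane.
        """
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        return area * (z1 + z2 + z3) / 3.0

    # Toy frame: three markers a few centimetres above the back plane (metres)
    p1, p2, p3 = (0.00, 0.00, 0.05), (0.04, 0.00, 0.06), (0.00, 0.03, 0.055)
    print(f"prism volume: {prism_volume(p1, p2, p3) * 1e6:.1f} mL")
    ```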

  1. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  2. Computed secondary-particle energy spectra following nonelastic neutron interactions with ¹²C for Eₙ between 15 and 60 MeV: Comparisons of results from two calculational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with ¹²C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael, who used an intranuclear cascade code including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.

  3. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
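
    Response surface methodology fits a full quadratic model to the computer-experiment outputs at the design points and then interrogates the fitted surface. A sketch with two factors and a synthetic response; the actual study used four factors and CFD-derived mixing efficiency:

    ```python
    import numpy as np

    # Central composite design in two factors: factorial, axial, center runs
    pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                    [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                    [0, 0], [0, 0], [0, 0]], dtype=float)

    def simulate(x1, x2):
        """Stand-in for one CFD 'computer experiment' (synthetic response)."""
        return 90.0 - 2.0 * (x1 - 0.3) ** 2 - 3.0 * (x2 + 0.2) ** 2 + 0.5 * x1 * x2

    y = np.array([simulate(*p) for p in pts])
    x1, x2 = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)    # quadratic model coefficients

    # Stationary point of the fitted surface: solve grad(model) = 0
    H = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
    xs = np.linalg.solve(H, -b[1:3])
    pred = (b[0] + b[1] * xs[0] + b[2] * xs[1] + b[3] * xs[0] * xs[1]
            + b[4] * xs[0] ** 2 + b[5] * xs[1] ** 2)
    print("stationary point:", xs, "predicted response:", pred)
    ```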

  4. Navier-Stokes computations for circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Pulliam, Thomas H.; Jespersen, Dennis C.; Barth, Timothy J.

    1987-01-01

    Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.

  5. A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework

    NASA Astrophysics Data System (ADS)

    Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco

    2017-11-01

    We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique, designed specifically to be effective for LES applications, is proposed and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.

  6. Navier-Stokes computations for circulation controlled airfoils

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Jespersen, D. C.; Barth, T. J.

    1986-01-01

    Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.

  7. Frictionless contact of aircraft tires

    NASA Technical Reports Server (NTRS)

    Kim, Kyun O.; Tanner, John A.; Noor, Ahmed K.

    1989-01-01

    A computational procedure for the solution of frictionless contact problems of aircraft tires was developed using a two-dimensional laminated anisotropic shell theory incorporating the effects of variations in material and geometric parameters, transverse shear deformation, and geometric nonlinearities; the procedure was applied to model the nose-gear tire of the Space Shuttle. Numerical results are presented for the case when the nose-gear tire is subjected to inflation pressure and pressed against a rigid pavement. The results are compared with experimental results obtained at NASA Langley, demonstrating the high accuracy of the model and the effectiveness of the computational procedure.

  8. Vibronic Coupling Investigation to Compute Phosphorescence Spectra of Pt(II) Complexes.

    PubMed

    Vazart, Fanny; Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2015-06-01

    The present paper reports a comprehensive quantum mechanical investigation on the luminescence properties of several mono- and dinuclear platinum(II) complexes. The electronic structures and geometric parameters are briefly analyzed together with the absorption bands of all complexes. In all cases agreement with experiment is remarkable. Next, emission (phosphorescence) spectra from the first triplet states have been investigated by comparing different computational approaches and taking into account also vibronic effects. Once again, agreement with experiment is good, especially using unrestricted electronic computations coupled to vibronic contributions. Together with the intrinsic interest of the results, the robustness and generality of the approach open the opportunity for computationally oriented chemists to provide accurate results for the screening of large targets which could be of interest in molecular materials design.

  9. Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)

    2000-01-01

    The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation, and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.

  10. Computational Aerodynamic Modeling of Small Quadcopter Vehicles

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.

    2017-01-01

    High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.

  11. Scholarly literature and the press: scientific impact and social perception of physics computing

    NASA Astrophysics Data System (ADS)

    Pia, M. G.; Basaglia, T.; Bell, Z. W.; Dressendorfer, P. V.

    2014-06-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.

  12. Determination of apparent coupling factors for adhesive bonded acrylic plates using SEAL approach

    NASA Astrophysics Data System (ADS)

    Pankaj, Achuthan. C.; Shivaprasad, M. V.; Murigendrappa, S. M.

    2018-04-01

    Apparent coupling loss factors (CLFs) and velocity responses have been computed for two lap-joined, adhesive-bonded plates using finite element analysis and an experimental statistical energy analysis (SEA)-like approach. A finite element model of the plates has been created using the ANSYS software. The statistical energy parameters have been computed using the velocity responses obtained from a harmonic forced-excitation analysis. Experiments have been carried out for two different cases of adhesive-bonded joints, and the results have been compared with the apparent coupling factors and velocity responses obtained from the finite element analysis. The results obtained from these studies signify the importance of modeling adhesive-bonded joints in the computation of apparent coupling factors and their further use in computing energies and velocity responses with an SEA-like approach.

  13. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language for the Macintosh operating system, taking advantage of its high-level user interface. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on a χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
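
    The core fitting step can be sketched with a biexponential disposition model fitted by Levenberg-Marquardt nonlinear least squares. Here scipy's curve_fit serves as a stand-in for PharmK's maximum-likelihood fit, and the concentration-time data are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, A, alpha, B, beta):
        """Biexponential disposition model C(t) = A e^(-alpha t) + B e^(-beta t)."""
        return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

    # Synthetic concentration-time data with 5% multiplicative noise
    t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24.0])
    rng = np.random.default_rng(2)
    c_obs = biexp(t, 8.0, 1.2, 2.0, 0.15) * (1 + 0.05 * rng.standard_normal(t.size))

    # Levenberg-Marquardt nonlinear least squares (scipy's default for
    # unconstrained fits); the initial guesses play the role of the
    # exponential-stripping estimates described in the abstract.
    p0 = [5.0, 1.0, 1.0, 0.1]
    popt, pcov = curve_fit(biexp, t, c_obs, p0=p0, method="lm")
    print("A, alpha, B, beta =", np.round(popt, 3))
    ```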

  14. Toothguide Trainer tests with color vision deficiency simulation monitor.

    PubMed

    Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A

    2010-01-01

    The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with the Toothguide Trainer (TT) computer program enabled better color matching results. A total of 31 color-normal dental students participated in the study. Every participant had to pass the Ishihara Test; participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure individual tooth color matching results in normal and color-deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (ΣΔE*ab), and the respective means and standard deviations were calculated. The Student's t-test was used in the statistical evaluation. Participants made larger ΔE*ab errors in the severe color vision deficient display modes than in the normal monitor mode. TT tests showed a significant (p<0.05) difference in the tooth color matching results of the severe green color vision deficiency simulation mode compared to the normal vision mode. Students' shade matching results were significantly better after training (p=0.009). The computer-simulated severe color vision deficiency mode resulted in significantly worse color matching quality compared to the normal color vision mode. The Toothguide Trainer computer program improved color matching results.
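
    The ΔE*ab values referred to above are CIE76 color differences, the Euclidean distance between two colors in CIELAB space. A minimal sketch (the Lab values are illustrative):

    ```python
    import math

    def delta_e_ab(lab1, lab2):
        """CIE76 color difference between two CIELAB colors:
        Delta E*ab = sqrt((dL*)^2 + (da*)^2 + (db*)^2)."""
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

    # Example: a selected shade-guide tab versus the target tooth color
    target = (72.0, 1.5, 18.0)       # L*, a*, b* (illustrative values)
    chosen = (69.5, 2.0, 21.0)
    print(f"Delta E*ab = {delta_e_ab(target, chosen):.2f}")
    ```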

  15. A fully parallel in time and space algorithm for simulating the electrical activity of a neural tissue.

    PubMed

    Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge

    2016-01-15

    The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly time-consuming and requires substantial computing power to achieve good results. In this study, we present a method for solving a model describing electrical propagation in neuronal tissue using the parareal algorithm, coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method to different dimensions of the model geometry (1-D, 2-D, and 3-D). The GPU results are compared with simulations from a multi-core processor cluster using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a computation time comparable to that of the presented GPU method. A gain of a factor of 100 in computational time between the sequential results and those obtained using the GPU was achieved in the case of the 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, this is the first time such a method has been used, even in neuroscience. Time parallelization coupled with GPU-based spatial parallelization drastically reduces computational time at a fine resolution of the model describing the propagation of the electrical signal in neuronal tissue.
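
    The parareal idea can be illustrated independently of the tissue model: a cheap coarse propagator G sweeps the time slices serially, an accurate fine propagator F runs on all slices in parallel (the part offloaded to the GPU in the paper), and a correction iteration combines the two. The toy problem and step counts below are assumptions for illustration.

      import numpy as np

      # Parareal sketch for du/dt = f(u): coarse propagator G (1 Euler step per
      # slice) runs serially; fine propagator F (100 Euler steps) is the part
      # that would run in parallel across slices, e.g. on the GPU.
      def f(u):
          return -u                   # linear decay stands in for the tissue model

      def propagate(u, t0, t1, nsteps):
          dt = (t1 - t0) / nsteps
          for _ in range(nsteps):
              u = u + dt * f(u)
          return u

      T, N = 1.0, 10
      t = np.linspace(0.0, T, N + 1)
      G = lambda u, a, b: propagate(u, a, b, 1)
      F = lambda u, a, b: propagate(u, a, b, 100)

      U = np.zeros(N + 1)
      U[0] = 1.0
      for n in range(N):              # serial coarse sweep: initial guess
          U[n + 1] = G(U[n], t[n], t[n + 1])

      for k in range(5):              # parareal correction iterations
          Fu = [F(U[n], t[n], t[n + 1]) for n in range(N)]   # parallelizable
          Gu = [G(U[n], t[n], t[n + 1]) for n in range(N)]
          for n in range(N):          # cheap serial update
              U[n + 1] = G(U[n], t[n], t[n + 1]) + Fu[n] - Gu[n]

      print("parareal u(T):", U[-1], " exact:", np.exp(-T))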

  16. Three-Dimensional Computational Model for Flow in an Over-Expanded Nozzle With Porous Surfaces

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, K. S.; Elmiligui, Alaa; Hunter, Craig A.; Massey, Steven J.

    2006-01-01

    A three-dimensional computational model is used to simulate flow in a non-axisymmetric, convergent-divergent nozzle incorporating porous cavities for shock/boundary-layer interaction control. The nozzle has an expansion ratio (exit area/throat area) of 1.797 and a design nozzle pressure ratio of 8.78. Flow fields for the baseline nozzle (no porosity) and for the nozzle with porous surfaces of 10% openness are computed for nozzle pressure ratios (NPR) varying from 1.29 to 9.54. The three-dimensional computational results indicate that baseline nozzle performance is dominated by unstable, shock-induced boundary-layer separation at over-expanded conditions. For NPR less than or equal to 1.8, the separation is three-dimensional, somewhat unsteady, and confined to a bubble (with partial reattachment over the nozzle flap). For NPR greater than or equal to 2.0, separation is steady and fully detached, and becomes more two-dimensional as NPR increases. Numerical simulation of porous configurations indicates that a porous patch is capable of controlling off-design separation in the nozzle, either by alleviating separation or by encouraging stable separation of the exhaust flow. In the present paper, computational simulation results, wall centerline pressures, Mach contours, and thrust-efficiency ratios are presented, discussed, and compared with experimental data. The comparisons are in good agreement with experimental data, and the three-dimensional simulation improves the comparisons for over-expanded flow conditions relative to two-dimensional assumptions.

  17. Creating a Computer Adaptive Test Version of the Late-Life Function & Disability Instrument

    PubMed Central

    Jette, Alan M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Moed, Richard

    2009-01-01

    Background This study applied Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies to develop a prototype function and disability assessment instrument for use in aging research. Herein, we report on the development of the CAT version of the Late-Life Function & Disability instrument (Late-Life FDI) and evaluate its psychometric properties. Methods We employed confirmatory factor analysis, IRT methods, validation, and computer simulation analyses of data collected from 671 older adults residing in residential care facilities. We compared accuracy, precision, and sensitivity to change of scores from CAT versions of two Late-Life FDI scales with scores from the fixed-form instrument. Score estimates from the prototype CAT versus the original instrument were compared in a sample of 40 older adults. Results Distinct function and disability domains were identified within the Late-Life FDI item bank and used to construct two prototype CAT scales. Using retrospective data, scores from computer simulations of the prototype CAT scales were highly correlated with scores from the original instrument. The results of computer simulation, accuracy, precision, and sensitivity to change of the CATs closely approximated those of the fixed-form scales, especially for the 10- or 15-item CAT versions. In the prospective study each CAT was administered in less than 3 minutes and CAT scores were highly correlated with scores generated from the original instrument. Conclusions CAT scores of the Late-Life FDI were highly comparable to those obtained from the full-length instrument with a small loss in accuracy, precision, and sensitivity to change. PMID:19038841
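
    The core loop of a CAT under an IRT model is short: estimate the trait level, then administer the unused item that is most informative at that estimate. The sketch below uses a two-parameter logistic model with a hypothetical five-item bank; the actual Late-Life FDI item parameters and scoring rules are not reproduced here.

      import numpy as np

      # CAT item selection under a 2PL IRT model: administer the unused item
      # with maximum Fisher information at the current ability estimate theta.
      a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])    # discriminations (hypothetical)
      b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])   # difficulties (hypothetical)

      def information(theta):
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          return a**2 * p * (1.0 - p)

      theta, used = 0.0, []
      for step in range(3):
          info = information(theta)
          info[used] = -np.inf                   # mask administered items
          item = int(np.argmax(info))
          used.append(item)
          # ...administer item, observe response, re-estimate theta (MLE/EAP)...
          theta += 0.3                           # placeholder update for the sketch
          print(f"step {step}: give item {item}, theta -> {theta:.2f}")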

  18. Energy consumption program: A computer model simulating energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the ongoing building-modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin compared with readings from energy meters. The program is carefully structured to reduce both the user's time and the running cost by requesting minimal information from the user and by reducing many internal time-consuming computational loops. Many unique features were added, such as the handling of two-level electronics control rooms, which are not found in any other program.

  19. Hybrid rocket engine, theoretical model and experiment

    NASA Astrophysics Data System (ADS)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work addresses the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid-fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with experimental data already available from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next, the paper focuses on the tribrid rocket motor concept, in which supplementary liquid-fuel injection can improve thrust controllability. A complementary computation model is also presented to estimate the regression-rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper presents the input data of the model thoroughly, which ensures the reproducibility of the numerical results by independent researchers.
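
    The solid-fuel regression rate at the centre of such models is usually the empirical law rdot = a*Gox**n, with the oxidizer mass flux Gox falling as the port opens up. The Python sketch below integrates this law for a cylindrical port; the coefficients, grain geometry, and oxidizer flow are assumed for illustration and are not the paper's values.

      import numpy as np

      # Classical hybrid regression law rdot = a * Gox**n integrated for a
      # cylindrical port; all constants are assumed for illustration (SI units).
      a, n = 2.0e-5, 0.62        # empirical regression coefficients (assumed)
      rho_f = 930.0              # solid-fuel density, kg/m^3 (HTPB-like, assumed)
      L, r = 0.5, 0.02           # grain length and initial port radius, m
      mdot_ox = 0.15             # oxidizer mass flow, kg/s
      dt = 0.01

      for _ in range(int(5.0 / dt)):               # 5 s burn
          Gox = mdot_ox / (np.pi * r**2)           # oxidizer mass flux, kg/(m^2 s)
          rdot = a * Gox**n                        # regression rate, m/s
          r += rdot * dt                           # the port opens as fuel burns

      mdot_f = rho_f * 2.0 * np.pi * r * L * rdot  # instantaneous fuel mass flow
      print(f"port radius {r * 1e3:.1f} mm, fuel flow {mdot_f * 1e3:.1f} g/s")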

  20. CMG-biotools, a free workbench for basic comparative microbial genomics.

    PubMed

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Today, there are more than a hundred times as many sequenced prokaryotic genomes as there were in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics, which can reveal strain-specific characteristics, diversity within species, and many other aspects. However, comparative genomics is a field not easily entered by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source code for all programs is provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes spanning 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST, and graphical analyses of DNA structures. This paper shows the strength and diverse uses of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training.

  1. Analysis of Square Cup Deep-Drawing Test of Pure Titanium

    NASA Astrophysics Data System (ADS)

    Ogawa, Takaki; Ma, Ninshu; Ueyama, Minoru; Harada, Yasunori

    2016-08-01

    The prediction of the formability of titanium is more difficult than that of steels because of its strong anisotropy. If computer simulation can estimate the formability of titanium, the optimal forming conditions can be selected. The purpose of this study was to acquire knowledge for formability prediction through computer simulation of the square cup deep-drawing of pure titanium. In this paper, the results of FEM analysis of pure titanium were compared with experimental results to examine the validity of the analysis. We analyzed the formability of deep-drawing a square cup of titanium by FEM using solid elements. Comparing the analysis results with the experimental results, such as the formed shape, the punch load, and the thickness, the validity was confirmed. Further, by analyzing the change of the thickness around the forming corner, it was confirmed that the thickness reached its maximum during the forming process at a stroke of 35 mm, rather than at the maximum stroke.

  2. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.

    2014-01-01

    This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. A doublet-lattice approach is taken to compute generalized forces, and a rational function approximation is computed from them. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information, so there is little required interaction with the model developer, although all parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation, carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool; therefore, the flutter speed and frequency for a clamped plate are computed using V-g and V-f analysis. The computational results are compared to a previously published computational analysis and wind tunnel results for the same structure. Finally, a case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to V-g and V-f analysis, including the analysis of the model in response to a 1-cos gust.

  3. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.

    2015-01-01

    This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation. These processes are carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.

  4. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.

    2015-01-01

    This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
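
    The rational function approximation step common to the three reports above can be sketched for a scalar generalized force: given tabulated Q(ik) at a set of reduced frequencies and prescribed lag roots, Roger's form is fitted by linear least squares. The sampled data and lag roots below are synthetic assumptions; production tools fit full matrices.

      import numpy as np

      # Least-squares fit of Roger's rational function approximation
      #   Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Cj * ik / (ik + b_j)
      # to a scalar generalized force sampled at reduced frequencies k.
      k = np.linspace(0.05, 1.5, 20)             # sample reduced frequencies
      Q = 1.0 / (1.0 + 1j * k) + 0.5 * 1j * k    # synthetic tabular data (assumed)

      lags = [0.2, 0.6]                          # prescribed lag roots b_j (assumed)
      ik = 1j * k
      cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in lags]
      M = np.column_stack(cols)

      # Stack real and imaginary parts so the fitted coefficients stay real.
      coef, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                                 np.concatenate([Q.real, Q.imag]), rcond=None)
      print("A0, A1, A2, lag coefficients:", coef)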

  5. Fan Flutter Computations Using the Harmonic Balance Method

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

    2009-01-01

    An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results with experimental measurements and with previous results from a time-accurate propulsion aeroelasticity code.
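
    Harmonic balance in the CFD setting expands the unsteady flow in a few temporal harmonics and solves for their amplitudes directly. The principle is easiest to see on a scalar toy problem: for a Duffing oscillator, substituting a single harmonic x = A cos(wt) and balancing the fundamental reduces the ODE to an algebraic equation for A, as in this assumed-parameter Python sketch.

      import numpy as np
      from scipy.optimize import brentq

      # Single-harmonic balance for the Duffing oscillator
      #   x'' + w0^2 x + g x^3 = F cos(w t):
      # with x = A cos(w t), balancing the fundamental gives
      #   (w0^2 - w^2) A + (3/4) g A^3 = F.
      w0, g, F, w = 1.0, 0.5, 0.3, 1.2    # all coefficients assumed

      def residual(A):
          return (w0**2 - w**2) * A + 0.75 * g * A**3 - F

      A = brentq(residual, 0.0, 5.0)      # one real branch; Duffing can have several
      print(f"harmonic-balance amplitude A = {A:.3f}")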

  6. Heat transfer, thermal stress analysis and the dynamic behaviour of high power RF structures. [MARC and SUPERFISH codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, J.; Labrie, J.P.

    1983-08-01

    A general-purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady-state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity-evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize the loss in shunt impedance and the frequency shifts during high-power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.

  7. The Effectiveness of Gaze-Contingent Control in Computer Games.

    PubMed

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on the time needed to recognize a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual search task. When tested on 60 subjects, the results showed that the effectiveness of the gaze-contingent system was over 1.5 times higher. In addition, we found that eye behavior remains quite stable with or without mouse interaction.

  8. Comparison of computer-assisted surgery with conventional technique for the treatment of axial distal phalanx fractures in horses: an in vitro study.

    PubMed

    Andritzky, Juliane; Rossol, Melanie; Lischer, Christoph; Auer, Joerg A

    2005-01-01

    To compare the precision obtained with computer-assisted screw insertion for treatment of mid-sagittal articular fractures of the distal phalanx (P3) with that achieved with a conventional technique. In vitro experimental study. Thirty-two cadaveric equine limbs. Four groups of 8 limbs were studied. Either 1 or 2 screws were inserted perpendicular to an imaginary axial fracture of P3 using computer-assisted surgery (CAS) or the conventional technique. Screw insertion time, predetermined screw length, inserted screw length, fit of the screw, and errors in placement were recorded. The CAS technique took 15-20 minutes longer but resulted in greater precision of screw length and placement compared with the conventional technique. The improved precision in screw insertion with CAS makes the insertion of 2 screws possible for repair of mid-sagittal P3 fractures. CAS, although expensive, improves precision of screw insertion into P3 and consequently should yield improved clinical outcomes.

  9. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS, and ARC. Methods Experiments were conducted to evaluate both the image quality and the computational efficiency of the proposed method compared with a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speedup in computation time on the order of 3-16X for 32-channel data sets. Conclusion The proposed method enables high-quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602

  10. Comparative simulation of switching regimes of magnetic explosion generators by copper and aluminum magnetodynamic current breakers taking into account elastoplastic properties of materials

    NASA Astrophysics Data System (ADS)

    Bazanov, A. A.; Ivanovskii, A. V.; Panov, A. I.; Samodolov, A. V.; Sokolov, S. S.; Shaidullin, V. Sh.

    2017-06-01

    We report on the results of the computer simulation of the operation of magnetodynamic current breakers used as the second stage of current pulse formation in magnetic explosion generators. The simulation was carried out under conditions in which the magnetic field energy density on the surface of the switching conductor, as a function of the current through it, was close to but did not exceed the critical value typical of the onset of electrical explosion. In the computational model, we used the parameters of an experimentally tested sample of a coil magnetic explosion generator that can store up to 2.7 MJ in the inductive storage circuit and is equipped with a primary explosion stage of current pulse formation. It is shown that the choice of the switching conductor material, as well as its elastoplastic properties, considerably affects the breaker speed. Comparative results of computer simulations for copper and aluminum are presented.

  11. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve computation rates as high as those of the vectorized direct solvers, but they are best for well-conditioned problems, which require fewer iterations to converge to the solution.
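
    As a generic reference point for the iterative approach, a bare conjugate gradient solver is only a few lines; its kernels (matrix-vector products, dot products, and AXPY updates) are exactly the operations that vectorize well on machines like the Cray-2. The Python sketch below is illustrative and is not the Testbed solver.

      import numpy as np

      # Generic conjugate gradient for symmetric positive definite systems.
      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))          # approx [0.0909, 0.6364]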

  12. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations for individual bands, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the choice of method. A simpler method is developed to reduce the computation time and the computer memory needed to store the coefficients of the equations. The new method reduces the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the results agree within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.

  13. Navier-Stokes calculations for 3D gaseous fuel injection with data comparisons

    NASA Technical Reports Server (NTRS)

    Fuller, E. J.; Walters, R. W.

    1991-01-01

    Results from a computational study and experiments designed to further expand knowledge of gaseous injection into supersonic cross-flows are presented. Experiments performed at Mach 6 included several cases of gaseous helium injection at low transverse angles and injection at low transverse angles coupled with a low yaw angle. Both experimental and computational data confirm that injector yaw has an adverse effect on the helium core decay rate. An array of injectors is found to give higher penetration into the freestream without loss of core injectant decay compared with a single injector. Lateral diffusion plays a major role in lateral plume spreading, eddy viscosity, the injectant plume, and injectant-freestream mixing. Grid refinement makes it possible to capture the gradients in the streamwise direction accurately and to vastly improve the data comparisons. Computational results for a refined grid are found to compare favorably with experimental data on overall injectant and core penetration, provided laminar lateral diffusion is taken into account using the modified Baldwin-Lomax turbulence model.

  14. Simulations of noble gases adsorbed on graphene

    NASA Astrophysics Data System (ADS)

    Maiga, Sidi; Gatica, Silvina

    2014-03-01

    We present results of grand canonical Monte Carlo simulations of the adsorption of Kr, Ar, and Xe on a suspended graphene sheet. We compute the adsorbate-adsorbate interaction with a Lennard-Jones potential. We adopt a hybrid model for the graphene-adsorbate force: the potential interaction with the nearest carbon atoms (within a distance r_nn) is computed with an atomistic pair potential U_a; for atoms at r > r_nn, we compute the interaction energy as a continuous integration over a uniform carbon sheet with the density of graphene. For the atomistic potential U_a, we assume the anisotropic LJ potential adapted from the graphite-He interaction proposed by Cole et al. This interaction includes the anisotropy of the C atoms on graphene, which originates in the anisotropic π-bonds. The adsorption isotherms, energy, and structure of the layer are obtained and compared with experimental results. We also compare with adsorption on graphite and carbon nanotubes. This research was supported by NSF/PRDM (Howard University) and NSF (DMR 1006010).
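
    A minimal grand canonical Monte Carlo loop needs only the insertion and deletion moves with their standard acceptance probabilities. The Python sketch below uses a bare Lennard-Jones fluid in reduced units (thermal wavelength set to 1) and omits the graphene-adsorbate term entirely; all parameters are assumptions for illustration.

      import numpy as np

      # GCMC sketch: LJ fluid in reduced units; adsorbate-adsorbate term only.
      rng = np.random.default_rng(0)
      eps, sigma = 1.0, 1.0              # LJ parameters (assumed, reduced units)
      beta, mu, V = 1.0, -3.0, 1000.0    # 1/kT, chemical potential, volume (assumed)
      L = V ** (1.0 / 3.0)

      def pair_energy(pos, others):
          if len(others) == 0:
              return 0.0
          r = np.linalg.norm(others - pos, axis=1)
          sr6 = (sigma / r) ** 6
          return float(np.sum(4.0 * eps * (sr6 ** 2 - sr6)))

      atoms = []                         # particle positions
      for _ in range(20000):
          if rng.random() < 0.5:         # trial insertion
              new = rng.random(3) * L
              dU = pair_energy(new, np.array(atoms).reshape(-1, 3))
              if rng.random() < min(1.0, V / (len(atoms) + 1)
                                    * np.exp(beta * (mu - dU))):
                  atoms.append(new)
          elif atoms:                    # trial deletion
              i = int(rng.integers(len(atoms)))
              dU = -pair_energy(atoms[i],
                                np.array(atoms[:i] + atoms[i + 1:]).reshape(-1, 3))
              if rng.random() < min(1.0, len(atoms) / V
                                    * np.exp(-beta * (mu + dU))):
                  atoms.pop(i)

      print("sampled occupancy:", len(atoms))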

  15. Virtual Reality and Computer-Enhanced Training Devices Equally Improve Laparoscopic Surgical Skill in Novices

    PubMed Central

    Kanumuri, Prathima; Ganai, Sabha; Wohaibi, Eyad M.; Bush, Ronald W.; Grow, Daniel R.

    2008-01-01

    Background: The study aim was to compare the effectiveness of virtual reality and computer-enhanced videoscopic training devices for training novice surgeons in complex laparoscopic skills. Methods: Third-year medical students received instruction on laparoscopic intracorporeal suturing and knot tying and then underwent a pretraining assessment of the task using a live porcine model. Students were then randomized to objectives-based training on either the virtual reality (n=8) or computer-enhanced (n=8) training devices for 4 weeks, after which the assessment was repeated. Results: Posttraining performance improved compared with pretraining performance in both task completion rate (94% versus 18%; P<0.001) and time (181±58 [SD] versus 292±24). Performance of the 2 groups was comparable before and after training. Of the subjects, 88% thought that haptic cues were important in simulators. Both groups agreed that their respective training systems were effective teaching tools, but computer-enhanced device trainees were more likely to rate their training as representative of reality (P<0.01). Conclusions: Training on virtual reality and computer-enhanced devices had equivalent effects on skills improvement in novices. Despite the perception that haptic feedback is important in laparoscopic simulation training, its absence in the virtual reality device did not impede acquisition of skill. PMID:18765042

  16. Applications of a General Finite-Difference Method for Calculating Bending Deformations of Solid Plates

    NASA Technical Reports Server (NTRS)

    Walton, William C., Jr.

    1960-01-01

    This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires the use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved in the course of the applications and which facilitate use of the method in conjunction with high-speed computing equipment.

  17. Simulation of size-exclusion chromatography distribution coefficients of comb-shaped molecules in spherical pores: comparison of simulation and experiment.

    PubMed

    Radke, Wolfgang

    2004-03-05

    Simulations of the distribution coefficients of linear polymers and regular combs with various spacings between the arms have been performed. The distribution coefficients were plotted as a function of the number of segments in order to compare the size exclusion chromatography (SEC) elution behavior of combs relative to linear molecules. By comparing the simulated SEC calibration curves, it is possible to predict the elution behavior of comb-shaped polymers relative to linear ones. In order to compare the results obtained by computer simulations with experimental data, a variety of comb-shaped polymers varying in side chain length, spacing between the side chains, and molecular weight of the backbone were analyzed by SEC with light-scattering detection. It was found that the computer simulations could predict the molecular weights of linear molecules having the same retention volume with an accuracy of about 10%; that is, the error in the molecular weight obtained by applying a calibration curve constructed from linear standards together with the results of the computer simulations is of the same magnitude as the experimental error of absolute molecular weight determination.

  18. Computational study of performance characteristics for truncated conical aerospike nozzles

    NASA Astrophysics Data System (ADS)

    Nair, Prasanth P.; Suryan, Abhilash; Kim, Heuy Dong

    2017-12-01

    Aerospike nozzles are advanced rocket nozzles that can maintain their aerodynamic efficiency over a wide range of altitudes; they belong to the class of altitude-compensating nozzles. A vehicle with an aerospike nozzle uses less fuel at low altitudes, where most missions have the greatest need for thrust, owing to the nozzle's altitude adaptability. Aerospike nozzles are better suited to single-stage-to-orbit (SSTO) missions than conventional nozzles. In the current study, the flow through 20% and 40% truncated aerospike nozzles is analyzed in detail using computational fluid dynamics. A steady-state analysis with implicit formulation is carried out. The Reynolds-averaged Navier-Stokes equations are solved with the Spalart-Allmaras turbulence model. The results are compared with experimental results from previous work. The transition from open wake to closed wake happens at a lower nozzle pressure ratio for the 20% aerospike nozzle than for the 40% nozzle.

  19. Selected computations of transonic cavity flows

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.

    1993-01-01

    An efficient diagonal scheme implemented in an overset mesh framework has permitted the analysis of geometrically complex cavity flows via the Reynolds-averaged Navier-Stokes equations. Use of rapid hyperbolic and algebraic grid methods has allowed simple specification of critical turbulent regions with an algebraic turbulence model. Comparisons between numerical and experimental results are made in two dimensions for the following problems: a backward-facing step, a resonating cavity, and two quieted cavity configurations. In three dimensions, the flow about three early concepts of the Stratospheric Observatory For Infrared Astronomy (SOFIA) is compared to wind-tunnel data. Shedding frequencies of resolved shear-layer structures are compared against experiment for the quieted cavities. The results demonstrate the progress of computational assessment of configuration safety and performance.

  20. A low-complexity geometric bilateration method for localization in Wireless Sensor Networks and its comparison with Least-Squares methods.

    PubMed

    Cota-Ruiz, Juan; Rosiles, Jose-Gerardo; Sifuentes, Ernesto; Rivas-Perea, Pablo

    2012-01-01

    This research presents a distributed, formula-based bilateration algorithm that can be used to provide an initial set of node locations. In this scheme, each node uses distance estimates to anchors to solve a set of circle-circle intersection (CCI) problems through a purely geometric formulation. The resulting CCIs are processed to pick those that cluster together, and their average is taken to produce an initial node location. The algorithm is compared in terms of accuracy and computational complexity with a least-squares localization algorithm based on the Levenberg-Marquardt methodology. Results on accuracy versus computational performance show that the bilateration algorithm is competitive with well-known optimized localization algorithms.
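
    The geometric kernel of the bilateration scheme is the circle-circle intersection. Below is a Python sketch under assumed anchor positions and noise-free ranges: each anchor pair yields up to two candidate points, and the candidates that cluster across pairs average to the initial location estimate.

      import math

      # Circle-circle intersection: anchors (x1,y1),(x2,y2) with ranges r1,r2.
      def circle_intersections(x1, y1, r1, x2, y2, r2):
          d = math.hypot(x2 - x1, y2 - y1)
          if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
              return []                          # no usable intersection
          a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance to the chord midpoint
          h = math.sqrt(max(r1**2 - a**2, 0.0))
          mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
          return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
                  (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

      # Node truly at (2, 1); anchors at (0,0), (5,0), (0,4) with exact ranges.
      pts1 = circle_intersections(0, 0, math.hypot(2, 1), 5, 0, math.hypot(3, 1))
      pts2 = circle_intersections(0, 0, math.hypot(2, 1), 0, 4, math.hypot(2, 3))
      # The candidates that cluster across anchor pairs give the estimate.
      p, q = min(((p, q) for p in pts1 for q in pts2),
                 key=lambda pq: math.hypot(pq[0][0] - pq[1][0],
                                           pq[0][1] - pq[1][1]))
      print("initial estimate:", ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))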

  1. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and further more to…

  2. Self-Administered Cued Naming Therapy: A Single-Participant Investigation of a Computer-Based Therapy Program Replicated in Four Cases

    ERIC Educational Resources Information Center

    Ramsberger, Gail; Marie, Basem

    2007-01-01

    Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…

  3. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on various Quantitative Structure-Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this is computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. Using basic enzyme-inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ), and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare the results with experimental values and with other available QSAR-based empirical results. Values calculated using our method are in very good agreement with experiment compared with those calculated using other methods.

  4. Computational methods for diffusion-influenced biochemical reactions.

    PubMed

    Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G

    2007-08-01

    We compare stochastic computational methods that account for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation (RDME) are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, the predicted fluctuations in the product number vary between the methods, while the average is the same. Approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of the reactants. Using numerical simulations of the reversible binding of a pair of molecules, we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found at the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
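
    The well-mixed baseline against which such spatial methods are judged is the Gillespie stochastic simulation algorithm. A minimal Python version for a single bimolecular reaction A + B -> C follows; the rate constant and copy numbers are arbitrary illustrative choices.

      import numpy as np

      # Gillespie SSA for the single reaction A + B -> C (well-mixed baseline).
      rng = np.random.default_rng(1)
      k = 0.005                  # stochastic rate constant (assumed)
      A, B, C = 50, 40, 0
      t, t_end = 0.0, 100.0
      while t < t_end and A > 0 and B > 0:
          a0 = k * A * B                      # total propensity
          t += rng.exponential(1.0 / a0)      # exponential waiting time
          A, B, C = A - 1, B - 1, C + 1       # fire the only reaction channel
      print(f"t = {t:.2f}: A = {A}, B = {B}, C = {C}")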

  5. Computational comparison of quantum-mechanical models for multistep direct reactions

    NASA Astrophysics Data System (ADS)

    Koning, A. J.; Akkermans, J. M.

    1993-02-01

    We have carried out a computational comparison of all existing quantum-mechanical models for multistep direct (MSD) reactions. The various MSD models, including the so-called Feshbach-Kerman-Koonin, Tamura-Udagawa-Lenske and Nishioka-Yoshida-Weidenmüller models, have been implemented in a single computer system. All model calculations thus use the same set of parameters and the same numerical techniques; only one adjustable parameter is employed. The computational results have been compared with experimental energy spectra and angular distributions for several nuclear reactions, namely, 90Zr(p,p') at 80 MeV, 209Bi(p,p') at 62 MeV, and 93Nb(n,n') at 25.7 MeV. In addition, the results have been compared with the Kalbach systematics and with semiclassical exciton model calculations. All quantum MSD models provide a good fit to the experimental data. In addition, they reproduce the systematics very well and are clearly better than semiclassical model calculations. We furthermore show that the calculated predictions do not differ very strongly between the various quantum MSD models, leading to the conclusion that the simplest MSD model (the Feshbach-Kerman-Koonin model) is adequate for the analysis of experimental data.

  6. A comparative study of two codes with an improved two-equation turbulence model for predicting jet plumes

    NASA Technical Reports Server (NTRS)

    Balakrishnan, L.; Abdol-Hamid, Khaled S.

    1992-01-01

    Compressible jet plumes were studied using a two-equation turbulence model. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that extending the space-marching procedure to supersonic/subsonic mixing problems can be stable, efficient, and accurate. Moreover, a newly developed correction for compressible dissipation has been verified in fully expanded and underexpanded jet plumes. For a sonic jet plume, no improvement over the standard two-equation model was seen. However, for a supersonic jet plume, the correction for compressible dissipation successfully predicted the reduced spreading rate of the jet compared with the sonic case. The computed results were generally in good agreement with the experimental data.

  7. Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    A performance evaluation of a variety of computers frequently found in scientific or engineering research environments was conducted using synthetic and application-program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited performance comparable to traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.

  8. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be effectively parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with non-optimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup occurs because the fast Fourier transform (FFT) routine dominates the computational cost and exhibits less-than-ideal speedup. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of single-processor Cray supercomputer time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  9. Assessing Gait Impairments Based on Auto-Encoded Patterns of Mahalanobis Distances from Consecutive Steps.

    PubMed

    Muñoz-Organero, Mario; Davies, Richard; Mawson, Sue

    2017-01-01

    Insole pressure sensors capture the force distribution patterns during the stance phase while walking. By comparing patterns obtained from healthy individuals with those from patients suffering from different medical conditions, based on a given similarity measure, automatic impairment indexes can be computed in order to help in applications such as rehabilitation. This paper uses the data sensed from insole pressure sensors for a group of healthy controls to train an auto-encoder using patterns of stochastic distances in series of consecutive steps while walking at normal speeds. Two experiment groups are compared with the healthy control group: a group of patients suffering from knee pain and a group of post-stroke survivors. The Mahalanobis distance is computed for every single step by each participant, relative to the entire dataset sensed from healthy controls. The computed distances for consecutive steps are fed into the previously trained auto-encoder, and the average error is used to assess how close the walking segment is to the auto-generated model from healthy controls. The results show that automatic distortion indexes can be used to assess each participant against normal patterns computed from healthy controls. The stochastic distances observed for the group of stroke survivors are larger than those for the people with knee pain.
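
    The per-step distance computation is straightforward: estimate the mean and covariance of step features from the healthy controls, then score each new step by its Mahalanobis distance. The Python sketch below uses synthetic feature vectors; in the paper the resulting distance sequences are what the trained auto-encoder attempts to reconstruct.

      import numpy as np

      # Score each step by its Mahalanobis distance from the healthy reference
      # distribution of step features (synthetic data; 8 features per step).
      rng = np.random.default_rng(2)
      healthy = rng.normal(0.0, 1.0, size=(500, 8))     # healthy-control steps
      mu = healthy.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

      def mahalanobis(x):
          d = x - mu
          return float(np.sqrt(d @ cov_inv @ d))

      subject_steps = rng.normal(0.5, 1.2, size=(20, 8))  # one subject's steps
      distances = np.array([mahalanobis(s) for s in subject_steps])
      # In the paper, sequences like `distances` feed an auto-encoder trained on
      # healthy controls; its reconstruction error is the impairment index.
      print("mean per-step distance:", distances.mean())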

  10. Computer usage and national energy consumption: Results from a field-metering study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery

    The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power of power supplies to computing needs, and improving the efficiency of individual components.

  11. PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.

    PubMed

    MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S

    2005-06-01

    Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.

  12. Investigation of advancing front method for generating unstructured grid

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1992-01-01

    The advancing front technique is used to generate unstructured grids about simple aerodynamic geometries. Unstructured grids are generated using the VGRID2D and VGRID3D software. The specific problems considered are a NACA 0012 airfoil, a biplane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. The structured grid is generated using the GRIDGEN software, and inviscid solutions are computed using the CFL3D flow solver. The results obtained on the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a fine grid distribution was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution; but when the total numbers of points required to obtain the same results were compared, the structured grid required more grid points.

  13. Energy Finite Element Analysis for Computing the High Frequency Vibration of the Aluminum Testbed Cylinder and Correlating the Results to Test Data

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas

    2005-01-01

    The Energy Finite Element Analysis (EFEA) is a finite-element-based computational method for high-frequency vibration and acoustic analysis. The EFEA solves, with finite elements, governing differential equations for energy variables. These equations are developed from wave equations. Recently, an EFEA method for computing the high-frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained from very dense conventional finite element models and from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also compared favorably with test data for the vibration and the radiated noise generated by a large-scale submersible vehicle. The primary variable in EFEA is the energy density, time-averaged over a period and space-averaged over a wavelength. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as changes of plate thickness, plate/stiffener junctions, etc. When considering the high-frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval between two periodic stiffeners; therefore, the stiffener stiffness cannot be smeared by computing an equivalent rigidity for the plate or cylinder. The periodic stiffeners must instead be regarded as coupling components between periodic units. In this paper, periodic structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.

  14. Computational approach to integrate 3D X-ray microtomography and NMR data

    NASA Astrophysics Data System (ADS)

    Lucas-Oliveira, Everton; Araujo-Ferreira, Arthur G.; Trevizan, Willian A.; Fortulan, Carlos A.; Bonagamba, Tito J.

    2018-07-01

    Nowadays, most of the efforts in NMR applied to porous media are dedicated to studying molecular fluid dynamics within and among the pores. These analyses have a higher complexity due to the morphology and chemical composition of rocks, in addition to dynamic effects such as restricted diffusion, diffusional coupling, and exchange processes. Since translational nuclear spin diffusion in a confined geometry (e.g., pores and fractures) requires specific boundary conditions, theoretical solutions are restricted to some special problems and, in many cases, computational methods are required. The random walk method is a classic way to simulate self-diffusion within a digital porous medium. The Bergman model considers the magnetic relaxation process of the fluid molecules by including a probability rate of magnetization survival under surface interactions. Here we propose a statistical approach that correlates surface magnetic relaxivity with the computational method applied to NMR relaxation, in order to elucidate the relationship between simulated relaxation time and pore size of the digital porous medium. The proposed computational method simulates one- and two-dimensional NMR techniques, reproducing, for example, longitudinal and transverse relaxation times (T1 and T2, respectively) and diffusion coefficients (D), as well as their correlations. For a good approximation between the numerical and experimental results, it is necessary to preserve the complexity of translational diffusion through the microstructures in the digital rocks. Therefore, we use digital porous media obtained by 3D X-ray microtomography. To validate the method, relaxation times of ideal spherical pores were obtained and compared with previous determinations by the Brownstein-Tarr model, as well as with the computational approach proposed by Bergman. Furthermore, simulated and experimental results for synthetic porous media are compared. These results make evident the potential of computational physics in the analysis of NMR data for complex porous materials.
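
    The random-walk ingredient can be sketched compactly: walkers diffuse inside an ideal spherical pore and, on surface contact, are removed with a small killing probability in the spirit of the Bergman model, so the surviving fraction traces the magnetization decay. All parameters below are assumptions; the paper works on microtomography-derived digital media rather than ideal spheres.

      import numpy as np

      # Random walkers in a spherical pore of radius R; on surface contact a
      # walker is removed with probability p_kill (surface relaxation), so the
      # surviving fraction mimics the magnetization decay M(t)/M(0).
      rng = np.random.default_rng(3)
      R, step, p_kill = 1.0, 0.02, 0.05   # radius, rms step, kill prob. (assumed)
      n_walk, n_t = 5000, 400

      pos = np.zeros((n_walk, 3))         # start all walkers at the pore centre
      alive = np.ones(n_walk, dtype=bool)
      decay = []
      for _ in range(n_t):
          trial = pos + rng.normal(0.0, step, size=(n_walk, 3))
          hit = np.linalg.norm(trial, axis=1) >= R
          killed = hit & alive & (rng.random(n_walk) < p_kill)
          alive &= ~killed
          trial[hit] = pos[hit]           # surviving surface hits stay put
          pos = np.where(alive[:, None], trial, pos)
          decay.append(alive.mean())

      # An exponential fit to `decay` versus time would give the simulated T2,
      # which the Brownstein-Tarr model relates to the pore size.
      print("surviving fraction at the last step:", decay[-1])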

  15. Nurses' attitudes towards computers: cross sectional questionnaire study.

    PubMed

    Brumini, Gordan; Kovic, Ivor; Zombori, Dejvid; Lulic, Ileana; Petrovecki, Mladen

    2005-02-01

    To estimate the attitudes of hospital nurses towards computers and the influence of gender, age, education, and computer usage on these attitudes. The study was conducted in two Croatian hospitals where an integrated hospital information system is being implemented. A total of 1,081 nurses were surveyed with an anonymous questionnaire consisting of 8 questions about demographic data, education, and computer usage, and 30 statements on attitudes towards computers. The statements were scored on a Likert-type scale. Differences in attitudes towards computers were compared using one-way ANOVA and the Tukey-b post-hoc test. The total score was 120+/-15 (mean+/-standard deviation) out of a maximum of 150. Nurses younger than 30 years had a higher total score than those older than 30 years (124+/-13 vs 119+/-16 for the 30-39 age group and 117+/-15 for the >39 age group, P<0.001). Nurses with a bachelor's degree (119+/-16 vs 122+/-14, P=0.002) and nurses who had attended computer science courses had a higher total score compared to the others (124+/-13 vs 118+/-16, P<0.001). Nurses using computers more than 5 hours per week had a higher total score than those who used computers less (127+/-13 vs 124+/-12 for 1-5 h and 119+/-14 for <1 h per week, P<0.001, post-hoc test). Nurses in general have positive attitudes towards computers. These results are important for planning and implementing an integrated hospital information system.
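
    For readers unfamiliar with the test used above, the sketch below runs a one-way ANOVA on synthetic Likert totals drawn to match the reported group means and spreads; the data are simulated for illustration and are not the study's.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    # Synthetic age-group scores with roughly the reported means/SDs.
    rng = np.random.default_rng(1)
    under_30 = rng.normal(124, 13, 300)
    age_30_39 = rng.normal(119, 16, 400)
    over_39 = rng.normal(117, 15, 381)

    # One-way ANOVA across the three groups (the study followed this with
    # a Tukey-b post-hoc test to locate the pairwise differences).
    f_stat, p_value = f_oneway(under_30, age_30_39, over_39)
    print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
    ```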

  16. Toward GPGPU accelerated human electromechanical cardiac simulations

    PubMed Central

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

    In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164 × compared with SC and 5.5 × compared with MC for the solution of the ODE model. Speedups of up to 72 × compared with SC and 2.6 × compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44 × compared with SC and 2.0 × compared with MC. PMID:24115492

  17. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models, using a constant underlying global model and a Gaussian correlation function, yield comparable results.
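
    A hedged sketch of the two approximation strategies on a toy one-dimensional function standing in for an expensive deterministic analysis: the quadratic response surface is ordinary polynomial least squares, and scikit-learn's Gaussian process with a constant times Gaussian (RBF) kernel plays the role of the kriging model.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_analysis(x):
        return np.sin(3 * x) + 0.5 * x**2   # stand-in for a CFD/FE response

    x_train = np.linspace(-2, 2, 9).reshape(-1, 1)
    y_train = expensive_analysis(x_train).ravel()

    # second-order response surface: quadratic polynomial least squares
    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(x_train, y_train)

    # kriging analogue: GP with constant scaling and Gaussian correlation
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(x_train, y_train)

    x_test = np.linspace(-2, 2, 200).reshape(-1, 1)
    y_true = expensive_analysis(x_test).ravel()
    for name, model in [("response surface", rsm), ("kriging (GP)", gp)]:
        err = np.max(np.abs(model.predict(x_test) - y_true))
        print(f"{name}: max abs error = {err:.3f}")
    ```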

  18. Measurements and computational analysis of heat transfer and flow in a simulated turbine blade internal cooling passage

    NASA Technical Reports Server (NTRS)

    Russell, Louis M.; Thurman, Douglas R.; Simonyi, Patricia S.; Hippensteele, Steven A.; Poinsatte, Philip E.

    1993-01-01

    Visual and quantitative information was obtained on heat transfer and flow in a branched-duct test section that had several significant features of an internal cooling passage of a turbine blade. The objective of this study was to generate a set of experimental data that could be used to validate computer codes for internal cooling systems. Surface heat transfer coefficients and entrance flow conditions were measured at entrance Reynolds numbers of 45,000, 335,000, and 726,000. The heat transfer data were obtained using an Inconel heater sheet attached to the surface and coated with liquid crystals. Visual and quantitative flow field results using particle image velocimetry were also obtained for a plane at mid channel height for a Reynolds number of 45,000. The flow was seeded with polystyrene particles and illuminated by a laser light sheet. Computational results were determined for the same configurations and at matching Reynolds numbers; these surface heat transfer coefficients and flow velocities were computed with a commercially available code. The experimental and computational results were compared. Although some general trends did agree, there were inconsistencies in the temperature patterns as well as in the numerical results. These inconsistencies strongly suggest the need for further computational studies on complicated geometries such as the one studied.

  19. Computation of the three-dimensional medial surface dynamics of the vocal folds.

    PubMed

    Döllinger, Michael; Berry, David A

    2006-01-01

    To increase our understanding of pathological and healthy voice production, quantitative measurement of the medial surface dynamics of the vocal folds is significant, albeit rarely performed because of the inaccessibility of the vocal folds. Using an excised hemilarynx methodology, a new calibration technique, herein referred to as the linear approximate (LA) method, was introduced to compute the three-dimensional coordinates of fleshpoints along the entire medial surface of the vocal fold. The results were compared with results from the direct linear transform. An associated error estimation was presented, demonstrating the improved accuracy of the new method. A test on real data was reported including computation of quantitative measurements of vocal fold dynamics.

  20. Laser Beam Propagation Through Inhomogeneous Media with Shock-Like Profiles: Modeling and Computing

    NASA Technical Reports Server (NTRS)

    Adamovsky, Grigory; Ida, Nathan

    1997-01-01

    Wave propagation in inhomogeneous media has been studied for such diverse applications as propagation of radiowaves in the atmosphere, light propagation through thin films and in inhomogeneous waveguides, flow visualization, and others. In recent years an increased interest has developed in wave propagation through shocks in supersonic flows. Results of experiments conducted in the past few years have shown such interesting phenomena as laser beam splitting and spreading. The paper describes a model constructed to propagate a laser beam through shock-like inhomogeneous media. Numerical techniques are presented to compute the beam through such media. The results of the computation are presented, discussed, and compared with experimental data.

  1. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

  2. Original analytic solution of a half-bridge modelled as a statically indeterminate system

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela

    2016-12-01

    The paper presents an original computer-based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, the coefficients of the canonical equation being calculated either by discretizing the bending moment diagram into trapezoids or by using the relations specific to polygons. A second algorithm, based on the method of initial parameters, is also presented. Analyzing the new solution, we came to the conclusion that most of the computer code developed for other models may be reused. The results are useful for evaluating the behavior of the structure and for comparison with the results of finite element models.
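
    As an illustration of the force method named above (not the paper's half-bridge model), the sketch below treats a propped cantilever of span L under a uniform load q, taking the prop reaction X1 as the redundant. The canonical-equation coefficients are Mohr integrals of bending-moment diagrams, evaluated here numerically instead of via the trapezoid/polygon formulas the paper uses.

    ```python
    import numpy as np
    from scipy.integrate import quad

    L, q, EI = 6.0, 10.0, 1.0e4     # hypothetical span, load, rigidity

    m = lambda x: x                 # moment from a unit prop force (x from the prop)
    M_P = lambda x: -q * x**2 / 2   # moment from the load on the primary cantilever

    delta_11, _ = quad(lambda x: m(x) ** 2 / EI, 0, L)    # flexibility coefficient
    delta_1P, _ = quad(lambda x: M_P(x) * m(x) / EI, 0, L)

    # canonical equation of the force method: delta_11 * X1 + delta_1P = 0
    X1 = -delta_1P / delta_11
    print(f"redundant prop reaction X1 = {X1:.2f} (closed form 3qL/8 = {3*q*L/8:.2f})")
    ```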

  3. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods spanning these assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysm models. Rigid-wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls, which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximated by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required to capture the fluid flow phenomena.

  4. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
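
    The following sketch, a guess at the flavor of the approach rather than the authors' released code, shows incremental bitplane computation for 2-D convolution: the most significant bitplanes are processed first, and each increment refines the running output, so the loop can be stopped early with a result that is correct up to the computed precision.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(64, 64)).astype(np.int64)  # 8-bit image
    kernel = np.ones((3, 3)) / 9.0                              # toy smoothing kernel

    exact = convolve(img.astype(float), kernel)
    partial = np.zeros_like(exact)
    for b in range(7, -1, -1):                 # most significant bitplane first
        plane = (img >> b) & 1                 # one bitplane of the input
        # by linearity of convolution, the weighted plane results sum to the
        # exact output; stopping early yields a monotonically refined result
        partial += (2 ** b) * convolve(plane.astype(float), kernel)
        err = np.max(np.abs(partial - exact))
        print(f"after bitplane {b}: max error = {err:.1f}")
    ```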

  5. RotCFD Analysis of the AH-56 Cheyenne Hub Drag

    NASA Technical Reports Server (NTRS)

    Solis, Eduardo; Bass, Tal A.; Keith, Matthew D.; Oppenheim, Rebecca T.; Runyon, Bryan T.; Veras-Alba, Belen

    2016-01-01

    In 2016, the U.S. Army Aviation Development Directorate (ADD) conducted tests in the U.S. Army 7- by 10-Foot Wind Tunnel at NASA Ames Research Center of a nonrotating 2/5th-scale AH-56 rotor hub. The objective of the tests was to determine how removing the mechanical control gyro affected the drag. Data for the lift, drag, and pitching moment were recorded for the 4-bladed rotor hub in various hardware configurations, azimuth angles, and angles of attack. Numerical simulations of a selection of the configurations and orientations were then performed, and the results were compared with the test data. To generate the simulation results, the hardware configurations were modeled using Creo and Rhinoceros 5, three-dimensional surface modeling computer-aided design (CAD) programs. The CAD model was imported into Rotorcraft Computational Fluid Dynamics (RotCFD), a computational fluid dynamics (CFD) tool used for analyzing rotor flow fields. RotCFD simulation results were compared with the experimental results for three hardware configurations at two azimuth angles, two angles of attack, and with and without wind tunnel walls. The results help validate RotCFD as a tool for analyzing low-drag rotor hub designs for advanced high-speed rotorcraft concepts. Future work will involve simulating additional hub geometries to reduce drag or to tailor performance to other desired levels.

  6. Blind topological measurement-based quantum computation

    PubMed Central

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-01-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf–Harrington–Goyal scheme. The error threshold of our scheme is 4.3×10−3, which is comparable to that (7.5×10−3) of non-blind topological quantum computation. As the error per gate of the order 10−3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach. PMID:22948818

  7. Flexible Launch Vehicle Stability Analysis Using Steady and Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin can be caused by the aerodynamic undamping of one of the lower-frequency flexible or rigid body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic line loads derived from steady rigid aerodynamics. However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers, where experiment or unsteady computational aeroelastic analysis shows a reduced or even negative aerodynamic damping. A method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics is developed that uses unsteady computational fluid dynamics to compute the response of selected lower-frequency modes. The response is contained in a time history of the vehicle line loads. A proper orthogonal decomposition of the unsteady aerodynamic line-load response is used to reduce the data volume, and system identification is used to derive the aerodynamic stiffness, damping, and mass matrices. The results are compared with the damping and frequency computed from unsteady computational aeroelasticity and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady computational aeroelastic results.
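
    A minimal sketch of the order-reduction step described above: a proper orthogonal decomposition of a line-load time history via the singular value decomposition. The snapshot data here are synthetic stand-ins for the unsteady aerodynamic line loads, not the vehicle's.

    ```python
    import numpy as np

    # Snapshot matrix: rows are spanwise stations, columns are time samples.
    rng = np.random.default_rng(3)
    n_stations, n_steps = 200, 1000
    t = np.linspace(0, 10, n_steps)
    snapshots = (np.outer(np.sin(np.linspace(0, np.pi, n_stations)), np.sin(5 * t))
                 + 0.3 * np.outer(np.sin(2 * np.linspace(0, np.pi, n_stations)), np.cos(9 * t))
                 + 0.01 * rng.normal(size=(n_stations, n_steps)))

    # POD via SVD of the mean-subtracted snapshots
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of energy
    print(f"{r} POD modes capture {energy[r-1]:.4f} of the fluctuation energy")

    # reduced modal time histories, the input to system identification
    a = U[:, :r].T @ (snapshots - mean)
    ```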

  8. The Role of Computer-Assisted Technology in Post-Traumatic Orbital Reconstruction: A PRISMA-driven Systematic Review.

    PubMed

    Wan, Kelvin H; Chong, Kelvin K L; Young, Alvin L

    2015-12-08

    Post-traumatic orbital reconstruction remains a surgical challenge and requires careful preoperative planning, sound anatomical knowledge and good intraoperative judgment. Computer-assisted technology has the potential to reduce error and subjectivity in the management of these complex injuries. A systematic review of the literature was conducted to explore the emerging role of computer-assisted technologies in post-traumatic orbital reconstruction, in terms of functional and safety outcomes. We searched for articles comparing computer-assisted procedures with conventional surgery and studied outcomes on diplopia, enophthalmos, or procedure-related complications. Six observational studies with 273 orbits at a mean follow-up of 13 months were included. Three out of four studies reported significantly fewer patients with residual diplopia in the computer-assisted group, while only one of five studies reported better improvement in enophthalmos in the assisted group. Types and incidence of complications were comparable. Study heterogeneities limiting statistical comparison by meta-analysis are discussed. This review highlights the scarcity of data on computer-assisted technology in orbital reconstruction. The results suggest that computer-assisted technology may offer a potential advantage in treating diplopia, while its role remains to be confirmed in enophthalmos. Additional well-designed and powered randomized controlled trials are much needed.

  9. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS

    NASA Astrophysics Data System (ADS)

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.

    2017-06-01

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.

  10. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L; Bolch, Wesley E

    2017-06-21

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
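
    A basic geometric kernel that any tetrahedral-mesh transport code needs is deciding whether a point lies inside a given tetrahedron. The sketch below does this with barycentric coordinates; it is a self-contained illustration, not the PHITS implementation or its decomposition-map acceleration.

    ```python
    import numpy as np

    def point_in_tet(p, v0, v1, v2, v3, tol=1e-12):
        """Return True if point p lies inside tetrahedron (v0, v1, v2, v3)."""
        T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
        bary = np.linalg.solve(T, p - v0)       # barycentric coords b1..b3
        b = np.append(bary, 1.0 - bary.sum())   # implied b0
        return bool(np.all(b >= -tol))          # inside iff all coords >= 0

    v = [np.array([0., 0., 0.]), np.array([1., 0., 0.]),
         np.array([0., 1., 0.]), np.array([0., 0., 1.])]
    print(point_in_tet(np.array([0.2, 0.2, 0.2]), *v))  # True
    print(point_in_tet(np.array([0.6, 0.6, 0.6]), *v))  # False (outside)
    ```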

  11. A Non-Cut Cell Immersed Boundary Method for Use in Icing Simulations

    NASA Technical Reports Server (NTRS)

    Sarofeen, Christian M.; Noack, Ralph W.; Kreeger, Richard E.

    2013-01-01

    This paper describes a computational fluid dynamic method used for modelling changes in aircraft geometry due to icing. While an aircraft undergoes icing, the accumulated ice results in a geometric alteration of the aerodynamic surfaces. In computational simulations for icing, the corresponding geometric change must be taken into consideration. The method used herein for representing the geometric change due to icing is a non-cut cell Immersed Boundary Method (IBM). Computational cells of a body-fitted grid for the clean aerodynamic geometry that lie inside a predicted ice formation are identified. An IBM is then used to change these cells from active computational cells to cells with the properties of viscous solid bodies. This method has been implemented in FUN3D, the NASA-developed, node-centered, finite-volume computational fluid dynamics code. The presented capability is tested for two-dimensional airfoils including a clean airfoil, an iced airfoil, and an airfoil in harmonic pitching motion about its quarter chord. For these simulations, velocity contours, pressure distributions, and coefficients of lift, drag, and pitching moment about the airfoil's quarter chord are computed and compared against experimental results, XFOIL (a higher-order panel method code with viscous effects), and the results from FUN3D's original solution process. The results of the IBM simulations show that the accuracy of the IBM compares satisfactorily with the experimental results, the XFOIL results, and the results from FUN3D's original solution process.
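
    A hedged sketch of the cell-flagging step described above, reduced to two dimensions: grid cells whose centers fall inside a predicted ice outline are marked as solid. The polygon, grid, and use of matplotlib's point-in-polygon test are illustrative assumptions; a flow solver would then impose viscous solid-body behavior on the flagged cells.

    ```python
    import numpy as np
    from matplotlib.path import Path

    # Toy 2-D ice outline near an airfoil leading edge (invented shape).
    ice_outline = np.array([[0.0, 0.0], [0.05, 0.02], [0.03, 0.05],
                            [-0.02, 0.04], [-0.03, 0.01]])
    ice = Path(ice_outline)

    # Cell centers of an existing grid; flag those inside the ice shape.
    xc, yc = np.meshgrid(np.linspace(-0.1, 0.1, 80), np.linspace(-0.1, 0.1, 80))
    centers = np.column_stack((xc.ravel(), yc.ravel()))
    solid = ice.contains_points(centers).reshape(xc.shape)
    print(f"{solid.sum()} of {solid.size} cells flagged as viscous solid")
    ```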

  12. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module was performed using MSC Patran/Pthermal. The obtained temperature results showed that thermal protection is necessary because of significant heating from the plume.

  13. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    PubMed

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  14. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processes each postal code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to a country-wide simulation. This parallel algorithm thus makes it possible to use ABMs for large-scale simulation with limited computational resources.
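
    The following sketch, under the assumption that subregions can be simulated independently, illustrates the parallel structure of the sliding-region idea: the agent population is partitioned by postal-code subregion and each partition runs as a separate process. The epidemic update rule and region labels are placeholders, not the study's hepatitis C model.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def simulate_region(args):
        """Toy per-region epidemic update; stands in for the real ABM step."""
        region_id, n_agents, n_steps = args
        infected = max(1, n_agents // 100)                   # toy seed prevalence
        for _ in range(n_steps):
            infected = min(n_agents, int(infected * 1.02))   # toy growth rule
        return region_id, infected

    # Hypothetical postal-code partitions of the agent population.
    regions = [(f"S7K-{i}", 10_000 + 500 * i, 520) for i in range(8)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:                  # regions in parallel
            for region_id, infected in pool.map(simulate_region, regions):
                print(region_id, infected)
    ```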

  15. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution; this failure to converge was the result of using variable time-stepping with large-scale separations present in the flow. Calculations obtained using constant time-stepping (time-accurate) indicate less variation in flow properties than the steady-state solutions. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  16. Real-time slicing algorithm for Stereolithography (STL) CAD model applied in additive manufacturing industry

    NASA Astrophysics Data System (ADS)

    Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.

    2018-04-01

    With the advent of Industry 4.0, the need to further evaluate the processes applied in additive manufacturing, particularly the computational process of slicing, is non-trivial. This paper evaluates a real-time slicing algorithm for an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. The algorithm has been found to provide better computational time regardless of the number of facets in the STL model. Its performance is evaluated by comparing the computational times for different geometries.
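
    A minimal sketch of the edge-plane intersection step at the heart of such a slicer: each triangle edge that crosses the cutting plane z = z_cut contributes one intersection point, and a crossed triangle yields one segment of the slice contour. Degenerate cases (a vertex lying exactly on the plane) are ignored for brevity.

    ```python
    import numpy as np

    def slice_triangle(tri, z_cut):
        """tri: (3, 3) array of vertices; returns [segment] or [] at z_cut."""
        pts = []
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            da, db = a[2] - z_cut, b[2] - z_cut
            if da * db < 0:                    # edge crosses the plane
                t = da / (da - db)             # parametric intersection point
                pts.append(a + t * (b - a))
        return [tuple(map(tuple, pts))] if len(pts) == 2 else []

    tri = np.array([[0., 0., 0.], [1., 0., 2.], [0., 1., 1.]])
    print(slice_triangle(tri, z_cut=0.5))      # one contour segment
    ```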

  17. Design of Rail Instrumentation for Wind Tunnel Sonic Boom Measurements and Computational-Experimental Comparisons

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Elmiligui, A.; Aftosmis, M.; Morgenstern, J.; Durston, D.; Thomas, S.

    2012-01-01

    An innovative pressure rail concept for wind tunnel sonic boom testing of modern aircraft configurations with very low overpressures was designed with an adjoint-based solution-adapted Cartesian grid method. The computational method requires accurate free-air calculations of a test article as well as solutions modeling the influence of rail and tunnel walls. Specialized grids for accurate Euler and Navier-Stokes sonic boom computations were used on several test articles including complete aircraft models with flow-through nacelles. The computed pressure signatures are compared with recent results from the NASA 9- x 7-foot Supersonic Wind Tunnel using the advanced rail design.

  18. Finite-difference solution for turbulent swirling compressible flow in axisymmetric ducts with struts

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.

    1974-01-01

    A finite-difference procedure for computing the turbulent, swirling, compressible flow in axisymmetric ducts is described. Arbitrary distributions of heat and mass transfer at the boundaries can be treated, and the effects of struts, inlet guide vanes, and flow straightening vanes can be calculated. The calculation procedure is programmed in FORTRAN 4 and has operated successfully on the UNIVAC 1108, IBM 360, and CDC 6600 computers. The analysis which forms the basis of the procedure, a detailed description of the computer program, and the input/output formats are presented. The results of sample calculations performed with the computer program are compared with experimental data.

  19. Optoelectronic Reservoir Computing

    PubMed Central

    Paquot, Y.; Duport, F.; Smerieri, A.; Dambre, J.; Schrauwen, B.; Haelterman, M.; Massar, S.

    2012-01-01

    Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations. PMID:22371825
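
    A minimal software sketch in the spirit of the scheme described above: a single recurrent reservoir driven by one input, with a trained linear readout. The optoelectronic implementation replaces this simulated reservoir with a single nonlinear node plus a delay line; the task below (recalling the input three steps back) is a toy stand-in for channel equalization or speech recognition.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, T, washout = 200, 3000, 200

    W_in = rng.uniform(-0.5, 0.5, size=(N, 1))
    W = rng.normal(size=(N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

    u = rng.uniform(-1, 1, size=(T, 1))               # input sequence
    y_target = np.roll(u[:, 0], 3)                    # toy task: recall u(t-3)

    x = np.zeros(N)
    states = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in @ u[t])              # reservoir update
        states[t] = x

    # ridge-regression readout trained after a washout period
    X, Y = states[washout:], y_target[washout:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
    nmse = np.mean((X @ W_out - Y) ** 2) / np.var(Y)
    print(f"train NMSE on 3-step memory task: {nmse:.3e}")
    ```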

  20. A hybridized method for computing high-Reynolds-number hypersonic flow about blunt bodies

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1979-01-01

    A hybridized method for computing the flow about blunt bodies is presented. In this method the flow field is split into its viscid and inviscid parts. The forebody flow field about a parabolic body is computed. For the viscous solution, the Navier-Stokes equations are solved on orthogonal parabolic coordinates using explicit finite differencing. The inviscid flow is determined by using a Moretti type scheme in which the Euler equations are solved, using explicit finite differences, on a nonorthogonal coordinate system which uses the bow shock as an outer boundary. The two solutions are coupled along a common data line and are marched together in time until a converged solution is obtained. Computed results, when compared with experimental and analytical results, indicate the method works well over a wide range of Reynolds numbers and Mach numbers.

  1. Computational Investigation of Tangential Slot Blowing on a Generic Chined Forebody

    NASA Technical Reports Server (NTRS)

    Agosta-Greenman, Roxana M.; Gee, Ken; Cummings, Russell M.; Schiff, Lewis B.

    1995-01-01

    The effect of tangential slot blowing on the flowfield about a generic chined forebody at high angles of attack is investigated numerically using solutions of the thin-layer, Reynolds-averaged, Navier-Stokes equations. The effects of jet mass flow ratios, angle of attack, and blowing slot location in the axial and circumferential directions are studied. The computed results compare well with available wind-tunnel experimental data. Computational results show that, for a given mass flow rate, the yawing moments generated by slot blowing increase as the body angle of attack increases. Greater changes in the yawing moments are produced by a slot located closest to the lip of the nose. Also, computational solutions show that inboard blowing across the top surface is more effective at generating yawing moments than blowing outboard from the bottom surface.

  2. GPU-Accelerated Voxelwise Hepatic Perfusion Quantification

    PubMed Central

    Wang, H; Cao, Y

    2012-01-01

    Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations for different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with those computed by the CPU, using simulated DCE data and experimental DCE MR images from patients. The computation speed is improved by 30 times using an NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626,400 voxels in a patient's liver, the GPU-accelerated voxelwise computation takes 0.9 min, compared to 110 min with the CPU, while the two methods' perfusion parameters differ by less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
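
    A CPU-bound sketch of the per-voxel fitting that the paper moves onto the GPU: each voxel's enhancement curve is fit by nonlinear least squares. For brevity this uses a toy single-input exponential uptake model rather than the paper's dual-input single-compartment liver model, and serial scipy fitting in place of the CUDA kernels.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 5, 60)                     # acquisition times (minutes)

    def model(t, k, tau):
        return k * (1 - np.exp(-t / tau))         # toy uptake curve, not the dual-input model

    # Synthetic enhancement curves for 1000 voxels with noise.
    rng = np.random.default_rng(5)
    true_k, true_tau = 0.8, 1.2
    voxels = model(t, true_k, true_tau) + 0.02 * rng.normal(size=(1000, t.size))

    params = np.empty((voxels.shape[0], 2))
    for i, curve in enumerate(voxels):            # the GPU runs these fits in parallel
        params[i], _ = curve_fit(model, t, curve, p0=(0.5, 1.0))
    print("mean fitted (k, tau):", params.mean(axis=0))
    ```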

  3. Impact of a computer-assisted Screening, Brief Intervention and Referral to Treatment on reducing alcohol consumption among patients with hazardous drinking disorder in hospital emergency departments. The randomized BREVALCO trial.

    PubMed

    Duroy, David; Boutron, Isabelle; Baron, Gabriel; Ravaud, Philippe; Estellat, Candice; Lejoyeux, Michel

    2016-08-01

    To assess the impact of a computer-assisted Screening, Brief Intervention, and Referral to Treatment (SBIRT) on daily consumption of alcohol by patients with hazardous drinking disorder detected after systematic screening during their admission to an emergency department (ED). Two-arm, parallel group, multicentre, randomized controlled trial with a centralised computer-generated randomization procedure. Four EDs in university hospitals located in the Paris area in France. Patients admitted to the ED for any reason, with hazardous drinking disorder detected after systematic screening (i.e., Alcohol Use Disorder Identification Test score ≥5 for women and ≥8 for men, or self-reported weekly alcohol consumption ≥7 drinks for women and ≥14 for men). The experimental intervention was computer-assisted SBIRT and the comparator was a placebo-controlled intervention (a computer-assisted education program on nutrition). Interventions were administered in the ED and followed by phone reinforcements at 1 and 3 months. The primary outcome was the mean number of alcohol drinks per day in the previous week, at 12 months. From May 2005 to February 2011, 286 patients were randomized to the computer-assisted SBIRT and 286 to the comparator intervention. The two groups did not differ in the primary outcome, with an adjusted mean difference of 0.12 (95% confidence interval, -0.88 to 1.11). There was no additional benefit of the computer-assisted alcohol SBIRT as compared with the computer-assisted education program on nutrition among patients with hazardous drinking disorder detected by systematic screening during their admission to an ED.

  4. Computer-generated vs. physician-documented history of present illness (HPI): results of a blinded comparison.

    PubMed

    Almario, Christopher V; Chey, William; Kaung, Aung; Whitman, Cynthia; Fuller, Garth; Reid, Mark; Nguyen, Ken; Bolus, Roger; Dennis, Buddy; Encarnacion, Rey; Martinez, Bibiana; Talley, Jennifer; Modi, Rushaba; Agarwal, Nikhil; Lee, Aaron; Kubomoto, Scott; Sharma, Gobind; Bolus, Sally; Chang, Lin; Spiegel, Brennan M R

    2015-01-01

    Healthcare delivery now mandates shorter visits with higher documentation requirements, undermining the patient-provider interaction. To improve clinic visit efficiency, we developed a patient-provider portal that systematically collects patient symptoms using a computer algorithm called Automated Evaluation of Gastrointestinal Symptoms (AEGIS). AEGIS also automatically "translates" the patient report into a full narrative history of present illness (HPI). We aimed to compare the quality of computer-generated vs. physician-documented HPIs. We performed a cross-sectional study with a paired sample design among individuals visiting outpatient adult gastrointestinal (GI) clinics for evaluation of active GI symptoms. Participants first underwent usual care and then subsequently completed AEGIS. Each individual thereby had both a physician-documented and a computer-generated HPI. Forty-eight blinded physicians assessed HPI quality across six domains using 5-point scales: (i) overall impression, (ii) thoroughness, (iii) usefulness, (iv) organization, (v) succinctness, and (vi) comprehensibility. We compared HPI scores within patient using a repeated measures model. Seventy-five patients had both computer-generated and physician-documented HPIs. The mean overall impression score for computer-generated HPIs was higher than physician HPIs (3.68 vs. 2.80; P<0.001), even after adjusting for physician and visit type, location, mode of transcription, and demographics. Computer-generated HPIs were also judged more complete (3.70 vs. 2.73; P<0.001), more useful (3.82 vs. 3.04; P<0.001), better organized (3.66 vs. 2.80; P<0.001), more succinct (3.55 vs. 3.17; P<0.001), and more comprehensible (3.66 vs. 2.97; P<0.001). Computer-generated HPIs were of higher overall quality, better organized, and more succinct, comprehensible, complete, and useful compared with HPIs written by physicians during usual care in GI clinics.

  5. Evaluation of a Mobile Phone for Aircraft GPS Interference

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.

    2004-01-01

    Measurements of spurious emissions from a mobile phone are conducted in a reverberation chamber for the Global Positioning System (GPS) radio frequency band. This phone model was previously determined to have caused interference to several aircraft GPS receivers. Interference path loss (IPL) factors are applied to the emission data, and the outcome is compared against GPS receiver susceptibility. The resulting negative safety margins indicate there are risks to aircraft GPS systems. The maximum emission level from the phone is also shown to be comparable to some laptop computers' emissions, implying that laptop computers can pose similar risks to aircraft GPS receivers.

  6. Comparative study of solar optics for paraboloidal concentrators

    NASA Technical Reports Server (NTRS)

    Wen, L.; Poon, P.; Carley, W.; Huang, L.

    1979-01-01

    Different analytical methods for computing the flux distribution on the focal plane of a paraboloidal solar concentrator are reviewed. An analytical solution in algebraic form is also derived for an idealized model. The effects resulting from using different assumptions in the definition of optical parameters used in these methodologies are compared and discussed in detail. These parameters include solar irradiance distribution (limb darkening and circumsolar), reflector surface specular spreading, surface slope error, and concentrator pointing inaccuracy. The type of computational method selected for use depends on the maturity of the design and the data available at the time the analysis is made.

  7. Comparative study of minutiae selection algorithms for ISO fingerprint templates

    NASA Astrophysics Data System (ADS)

    Vibert, B.; Charrier, C.; Le Bars, J.-M.; Rosenberger, C.

    2015-03-01

    We address the selection of fingerprint minutiae given a fingerprint ISO template. Minutiae selection plays a very important role when a secure element (i.e. a smart card) is used. Because of its limited computation and memory capabilities, the number of minutiae in a reference stored in the secure element is limited. We propose in this paper a comparative study of six minutiae selection methods, including two methods from the literature and one reference baseline (no selection). Experimental results on three fingerprint databases from the Fingerprint Verification Competition show their relative efficiency in terms of performance and computation time.
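
    As a concrete baseline, the sketch below truncates an ISO-style minutiae list to the N highest-quality minutiae so the stored reference fits a secure element's memory budget. This is a generic illustration, not necessarily one of the paper's six evaluated methods.

    ```python
    import numpy as np

    def select_minutiae(minutiae, n_keep):
        """minutiae: rows of (x, y, angle, quality); keep the best n_keep."""
        order = np.argsort(minutiae[:, 3])[::-1]   # sort by quality, descending
        return minutiae[order[:n_keep]]

    # Hypothetical 40-minutiae template with random quality scores.
    rng = np.random.default_rng(6)
    template = np.column_stack((rng.integers(0, 500, 40),     # x
                                rng.integers(0, 500, 40),     # y
                                rng.uniform(0, 2 * np.pi, 40),
                                rng.random(40)))              # quality in [0, 1]
    reference = select_minutiae(template, n_keep=12)
    print(reference.shape)   # (12, 4): reduced reference for the smart card
    ```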

  8. Computational Study of a McDonnell Douglas Single-Stage-to-Orbit Vehicle Concept for Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1996-01-01

    This paper presents the results of a computational flow analysis of the McDonnell Douglas single-stage-to-orbit vehicle concept designated as the 24U. This study was made to determine the aerodynamic characteristics of the vehicle with and without body flaps over an angle-of-attack range of 20-40 deg. Computations were made at a flight Mach number of 20 at 200,000 ft altitude with equilibrium air, and at a Mach number of 6 with CF4 gas. The software package FELISA (Finite Element Langley Imperial College Swansea Ames) was used for all the computations. The FELISA software consists of unstructured surface and volume grid generators, and inviscid flow solvers with (1) a perfect gas option for subsonic, transonic, and low supersonic speeds, and (2) perfect gas, equilibrium air, and CF4 options for hypersonic speeds. The hypersonic flow solvers with the equilibrium air and CF4 options were used in the present studies. Results are compared with other computational results and hypersonic CF4 tunnel test data.

  9. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    NASA Astrophysics Data System (ADS)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial basis functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi:10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree at the millimeter level.
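
    To make the integration kernel concrete, the sketch below evaluates the classical Stokes function S(psi) at a few spherical distances. The formula is the standard textbook expression; the paper's contribution, the modification that makes LSC and the spline kernel comparable in regional use, is not reproduced here.

    ```python
    import numpy as np

    def stokes_kernel(psi):
        """Classical Stokes function S(psi), psi in radians (0 < psi < pi)."""
        s = np.sin(psi / 2.0)
        return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
                - 3.0 * np.cos(psi) * np.log(s + s**2))

    psi = np.radians(np.array([1.0, 5.0, 10.0, 30.0]))
    print(np.round(stokes_kernel(psi), 3))
    ```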

  10. Computer Vision Syndrome and Associated Factors Among Medical and Engineering Students in Chennai

    PubMed Central

    Logaraj, M; Madhupriya, V; Hegde, SK

    2014-01-01

    Background: Almost all institutions, colleges, universities and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on the eye and vision-related problems. Aim: The aim of this study was to assess the prevalence of computer vision syndrome (CVS) among medical and engineering students and the factors associated with it. Subjects and Methods: A cross-sectional study was conducted among medical and engineering college students of a university situated in the suburban area of Chennai. Students who used a computer in the month preceding the date of the study were included. The participants were surveyed using a pre-tested structured questionnaire. Results: Among engineering students, the prevalence of CVS was found to be 81.9% (176/215), while among medical students it was found to be 78.6% (158/201). A significantly higher proportion of engineering students, 40.9% (88/215), used computers for 4-6 h/day as compared to medical students, 10% (20/201) (P < 0.001). The reported symptoms of CVS were higher among engineering students compared with medical students. Students who used a computer for 4-6 h were at significantly higher risk of developing redness (OR = 1.2, 95% CI = 1.0-3.1, P = 0.04), burning sensation (OR = 2.1, 95% CI = 1.3-3.1, P < 0.01) and dry eyes (OR = 1.8, 95% CI = 1.1-2.9, P = 0.02) compared to those who used a computer for less than 4 h. Significant correlation was found between increased hours of computer use and the symptoms of redness, burning sensation, blurred vision and dry eyes. Conclusion: The present study revealed that more than three-fourths of the students complained of at least one symptom of CVS while working on the computer. PMID:24761234

  11. Performance analysis of three dimensional integral equation computations on a massively parallel computer. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Logan, Terry G.

    1994-01-01

    The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and the conventional Cray-YMP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 64, and 128 nodes, along with those on a Cray-YMP with a single processor. The comparison of the performance indicates that the parallel CM-FORTRAN code nearly matches or outperforms the equivalent serial FORTRAN code for some cases.

  12. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes in the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high performance computing continuing to drop in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
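
    The scaling analysis described above can be reproduced with a short fit of an inverse power model T(n) = a * n^(-b) to runtime versus cluster size; the timings below are invented for illustration, not the study's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_power(n, a, b):
        return a * n ** (-b)

    nodes = np.array([1, 2, 4, 8, 16, 20])
    runtime = np.array([53.0, 27.5, 14.2, 7.6, 4.0, 3.1])  # minutes (illustrative)

    (a, b), _ = curve_fit(inverse_power, nodes, runtime, p0=(50.0, 1.0))
    print(f"T(n) ~ {a:.1f} * n^(-{b:.2f}) minutes")
    ```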

  13. The three-dimensional compressible flow in a radial inflow turbine scroll

    NASA Technical Reports Server (NTRS)

    Hamed, A.; Tabakoff, W.; Malak, M.

    1984-01-01

    This work presents the results of an analytical study and an experimental investigation of the three-dimensional flow in a turbine scroll. The finite element method is used in the iterative numerical solution of the locally linearized governing equations for the three-dimensional velocity potential field. The results of the numerical computations are compared with the experimental measurements in the scroll cross sections, which were obtained using laser Doppler velocimetry and hot wire techniques. The results of the computations show a variation in the flow conditions around the rotor periphery which was found to depend on the scroll geometry.

  14. Performance characteristics of three-phase induction motors

    NASA Technical Reports Server (NTRS)

    Wood, M. E.

    1977-01-01

    An investigation into the characteristics of three-phase, 400 Hz induction motors of the general type used on aircraft and spacecraft is summarized. Results of laboratory tests are presented and compared with results from a computer program. Representative motors were both tested and simulated under nominal conditions as well as off-nominal conditions of temperature, frequency, voltage magnitude, and voltage balance. Good correlation was achieved between simulated and laboratory results. The primary purpose of the program was to verify the simulation accuracy of the computer program, which in turn will be used as an analytical tool to support the shuttle orbiter.

  15. Estimation of the laser cutting operating cost by support vector regression methodology

    NASA Astrophysics Data System (ADS)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

    Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position, as well as the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented. The performance of the proposed estimator was confirmed with simulation results. The SVR results were then compared with artificial neural network and genetic programming results. According to the results, greater estimation accuracy can be achieved with SVR than with the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization, rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
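
    A hedged sketch of such an SVR estimator, with synthetic data standing in for the laser-cutting measurements: the inputs are (laser power, cutting speed, air pressure, focal position, thickness) and the target is operating cost, with an invented cost law used only to generate training data.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Synthetic process settings and costs (all ranges and the cost law are
    # illustrative assumptions, not the article's data).
    rng = np.random.default_rng(7)
    X = rng.uniform([1000, 10, 5, -3, 1], [4000, 100, 15, 3, 10], size=(200, 5))
    cost = (0.002 * X[:, 0] + 50.0 / X[:, 1] + 0.1 * X[:, 4]
            + 0.5 * rng.normal(size=200))

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, cost)

    sample = np.array([[2500, 40, 10, 0, 5]])   # one hypothetical setting
    print("predicted operating cost:", model.predict(sample))
    ```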

  16. Analysis of components of variance in multiple-reader studies of computer-aided diagnosis with different tasks

    NASA Astrophysics Data System (ADS)

    Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei

    2001-06-01

    In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.

  17. Comparison of image features calculated in different dimensions for computer-aided diagnosis of lung nodules

    NASA Astrophysics Data System (ADS)

    Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.

    2009-02-01

    Features calculated from different dimensions of images capture quantitative information of the lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five different combinations of features. Our results showed that 3D image features generate the best result compared with other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
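
    Stripped of the CADx specifics, the evaluation protocol (leave-one-out classification per feature group, ten repetitions, then a paired Wilcoxon signed-rank test) can be sketched as follows; the classifier, feature counts, and synthetic data are assumptions, not details from the study.

    ```python
    import numpy as np
    from scipy.stats import wilcoxon
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(2)
    y = rng.integers(0, 2, 125)                       # benign/malignant labels (synthetic)
    feats_2d = rng.normal(size=(125, 20)) + 0.3 * y[:, None]
    feats_3d = rng.normal(size=(125, 30)) + 0.5 * y[:, None]

    def loo_accuracy(X, labels, seed):
        clf = RandomForestClassifier(n_estimators=25, random_state=seed)
        return cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()

    acc_2d = [loo_accuracy(feats_2d, y, s) for s in range(10)]  # ten repetitions
    acc_3d = [loo_accuracy(feats_3d, y, s) for s in range(10)]
    print(wilcoxon(acc_2d, acc_3d))                   # paired comparison of the groups
    ```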

  18. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    NASA Astrophysics Data System (ADS)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability of advanced scanning and 3-D imaging technologies in ophthalmology practice in resource-rich regions, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
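
    The two parameters themselves are simple ratios. Given binary cup and disc segmentations, however obtained, their computation reduces to a few lines; the concentric-circle masks below are a toy assumption.

    ```python
    import numpy as np

    def cdr_car(cup_mask, disc_mask):
        """Cup-to-disc diameter ratio (CDR) and area ratio (CAR) from binary masks."""
        def vertical_diameter(mask):
            rows = np.flatnonzero(mask.any(axis=1))
            return rows.max() - rows.min() + 1
        cdr = vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
        car = cup_mask.sum() / disc_mask.sum()
        return cdr, car

    # Toy example: concentric disc (radius 40) and cup (radius 18) on a 128x128 grid.
    yy, xx = np.mgrid[:128, :128]
    r2 = (yy - 64) ** 2 + (xx - 64) ** 2
    disc, cup = r2 <= 40 ** 2, r2 <= 18 ** 2
    print(cdr_car(cup, disc))  # roughly (0.46, 0.20)
    ```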

  19. Calculating far-field radiated sound pressure levels from NASTRAN output

    NASA Technical Reports Server (NTRS)

    Lipman, R. R.

    1986-01-01

    FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.

  20. Simulation of isoelectric focusing processes. [stationary electrolysis of charged species

    NASA Technical Reports Server (NTRS)

    Palusinski, O. A.

    1980-01-01

    This paper presents the computer implementation of a model for the stationary electrolysis of two or more charged species. This has specific application to the technique of isoelectric focusing, in which the stationary electrolysis of ampholytes is used to generate a pH gradient useful for the separation of proteins, peptides and other biomolecules. The fundamental equations describing the process are given. These equations are transformed to a form suitable for digital computer implementation. Some results of computer simulation are described and compared to data obtained in the laboratory.

  1. Algorithms for Disconnected Diagrams in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Konstantinos

    2016-11-01

    Computing disconnected diagrams in Lattice QCD (operator insertion in a quark loop) entails the computationally demanding problem of taking the trace of the all-to-all quark propagator. We first outline the basic algorithm used to compute a quark loop as well as improvements to this method. Then, we motivate and introduce an algorithm based on the synergy between hierarchical probing and singular value deflation. We present results for the chiral condensate using a 2+1-flavor clover ensemble and compare estimates of the nucleon charges with the basic algorithm.
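
    The baseline referred to here is stochastic trace estimation with noise vectors; below is a hedged sketch of that basic (Hutchinson) estimator on a toy matrix, with a dense solve standing in for the Dirac-operator inversions. Hierarchical probing and singular value deflation reduce the variance of exactly this estimate.

    ```python
    import numpy as np

    def hutchinson_trace(apply_Ainv, n, num_samples, rng):
        """Estimate tr(A^-1) from Z2 noise: E[z^T A^-1 z] = tr(A^-1)."""
        est = 0.0
        for _ in range(num_samples):
            z = rng.choice([-1.0, 1.0], size=n)  # Z2 noise vector
            est += z @ apply_Ainv(z)
        return est / num_samples

    rng = np.random.default_rng(3)
    A = np.diag(rng.uniform(1.0, 2.0, 200)) + 0.01 * rng.normal(size=(200, 200))
    A = 0.5 * (A + A.T)                       # symmetric, well-conditioned toy matrix
    solve = lambda z: np.linalg.solve(A, z)   # stands in for a Dirac-operator solve
    print(hutchinson_trace(solve, 200, 100, rng), np.trace(np.linalg.inv(A)))
    ```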

  2. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication-based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  3. Real-time human collaboration monitoring and intervention

    DOEpatents

    Merkle, Peter B.; Johnson, Curtis M.; Jones, Wendell B.; Yonas, Gerold; Doser, Adele B.; Warner, David J.

    2010-07-13

    A method of and apparatus for monitoring and intervening in, in real time, a collaboration between a plurality of subjects comprising measuring indicia of physiological and cognitive states of each of the plurality of subjects, communicating the indicia to a monitoring computer system, with the monitoring computer system, comparing the indicia with one or more models of previous collaborative performance of one or more of the plurality of subjects, and with the monitoring computer system, employing the results of the comparison to communicate commands or suggestions to one or more of the plurality of subjects.

  4. Experimental Study of Impinging Jets Flow-Fields

    DTIC Science & Technology

    2016-07-27

    Grant # N000141410830. Final report for the period Jun 15, 2014 - Jun 14, 2016; PI: Dennis K… The report covers computations of an impinging jet model in the absence of any jet heating; the results of the computations were compared with the experimental data produced in the study as a check of the validity of both the computations and the experimental approach.

  5. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    PubMed

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
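
    The paper's derivation is analytic; as a generic illustration of what Gumbel parameters for alignment scores are, the sketch below fits an extreme-value distribution to simulated maximal scores with SciPy. The score-generation step is a stand-in, not the authors' mixture-distribution method.

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(4)
    # Stand-in for maximal alignment scores of shuffled sequence pairs:
    # maxima of many i.i.d. samples are approximately Gumbel distributed.
    scores = rng.normal(size=(2000, 400)).max(axis=1)

    mu, beta = gumbel_r.fit(scores)  # location mu and scale beta
    print(f"mu = {mu:.3f}, beta = {beta:.3f}")
    # Tail probability (significance) of an observed score S under the fitted null:
    S = 4.2
    print("p =", gumbel_r.sf(S, loc=mu, scale=beta))
    ```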

  6. The Efficacy of Web-Based and Print-Delivered Computer-Tailored Interventions to Reduce Fat Intake: Results of a Randomized, Controlled Trial

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Campbell, Marci; Brug, Johannes

    2008-01-01

    Objective: To test and compare the efficacy of interactive- and print-delivered computer-tailored nutrition education targeting saturated fat intake reduction. Design: A 3-group randomized, controlled trial (2003-2005) with posttests at 1 and 6 months post-intervention. Setting: Worksites and 2 neighborhoods in the urban area of Rotterdam.…

  7. Does It Matter if I Take My Mathematics Test on Computer? A Second Empirical Study of Mode Effects in NAEP

    ERIC Educational Resources Information Center

    Bennett, Randy Elliot; Braswell, James; Oranje, Andreas; Sandene, Brent; Kaplan, Bruce; Yan, Fred

    2008-01-01

    This article describes selected results from the Math Online (MOL) study, one of three field investigations sponsored by the National Center for Education Statistics (NCES) to explore the use of new technology in NAEP. Of particular interest in the MOL study was the comparability of scores from paper- and computer-based tests. A nationally…

  8. Proteinortho: Detection of (Co-)orthologs in large-scale analysis

    PubMed Central

    2011-01-01

    Background Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
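
    In its simplest form, the reciprocal best alignment heuristic at Proteinortho's core reduces to reciprocal best hits between two genomes. A hedged sketch with hypothetical score tables (real input would be BLAST-style bit scores):

    ```python
    # Reciprocal best hits (RBH): a and b are candidate orthologs if b is a's best
    # hit in genome B and a is b's best hit in genome A. Scores are hypothetical.
    def best_hits(scores):
        """scores: dict mapping query -> {subject: bit score}."""
        return {q: max(hits, key=hits.get) for q, hits in scores.items() if hits}

    def reciprocal_best_hits(a_vs_b, b_vs_a):
        best_ab, best_ba = best_hits(a_vs_b), best_hits(b_vs_a)
        return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

    a_vs_b = {"a1": {"b1": 210.0, "b2": 55.0}, "a2": {"b2": 180.0}}
    b_vs_a = {"b1": {"a1": 205.0}, "b2": {"a2": 175.0, "a1": 60.0}}
    print(reciprocal_best_hits(a_vs_b, b_vs_a))  # [('a1', 'b1'), ('a2', 'b2')]
    ```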

  9. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
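
    The underlying idea can be shown in a few lines: with additive position noise of variance sigma_b^2, the apparent velocity variance measured with time step dt is the true variance plus 2*sigma_b^2/dt^2, so fitting the measured variance against 1/dt^2 and keeping the intercept cancels the noise term. The 1D synthetic track below is a toy assumption, not the experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    fs, n = 10_000.0, 200_000
    u = np.convolve(rng.normal(0.0, 1.0, n), np.ones(50) / 50, mode="same")  # correlated velocity
    x = np.cumsum(u) / fs + rng.normal(0.0, 1e-5, n)  # track with additive position noise

    steps = np.arange(1, 8)                  # several time steps, in samples
    dts = steps / fs
    var_meas = [np.var((x[k:] - x[:-k]) / dt) for k, dt in zip(steps, dts)]

    # var_meas ~ var_true + 2*sigma_b^2/dt^2  ->  intercept of a fit against 1/dt^2
    slope, intercept = np.polyfit(1.0 / dts**2, var_meas, 1)
    print("noise-cancelled velocity variance ~", intercept, "; true:", np.var(u))
    ```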

  10. Comparing alchemical and physical pathway methods for computing the absolute binding free energy of charged ligands.

    PubMed

    Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald

    2018-06-13

    Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol⁻¹, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol⁻¹) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.

  11. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
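
    The variance-reduction device mentioned here is easy to demonstrate in isolation. The toy sketch below (not the UKPDS-based model of the study) shows how pairing each uniform draw u with its antithetic partner 1 - u lowers the standard error of a monotone simulation output for the same number of model evaluations.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def outcome(u):
        """Toy monotone model mapping a uniform draw to a simulated outcome (e.g. QALYs)."""
        return 10.0 * (1.0 - u ** 2)

    n = 10_000
    u = rng.uniform(size=n)
    plain = outcome(u)                                                # standard Monte Carlo
    anti = 0.5 * (outcome(u[: n // 2]) + outcome(1.0 - u[: n // 2]))  # antithetic pairs

    print("plain      mean, std-err:", plain.mean(), plain.std(ddof=1) / np.sqrt(n))
    print("antithetic mean, std-err:", anti.mean(), anti.std(ddof=1) / np.sqrt(n // 2))
    ```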

  12. Application Performance Analysis and Efficient Execution on Systems with multi-core CPUs, GPUs and MICs: A Case Study with Microscopy Image Analysis

    PubMed Central

    Teodoro, George; Kurc, Tahsin; Andrade, Guilherme; Kong, Jun; Ferreira, Renato; Saltz, Joel

    2015-01-01

    We carry out a comparative performance study of multi-core CPUs, GPUs and Intel Xeon Phi (Many Integrated Core-MIC) with a microscopy image analysis application. We experimentally evaluate the performance of computing devices on core operations of the application. We correlate the observed performance with the characteristics of computing devices and data access patterns, computation complexities, and parallelization forms of the operations. The results show a significant variability in the performance of operations with respect to the device used. The performance of operations with regular data access is comparable on a MIC and a GPU, and sometimes better on the MIC. GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variabilities in operation speedups. Our scheduling strategies significantly improve application performance compared to classic strategies in hybrid configurations. PMID:28239253

  13. Fusion Imaging: A Novel Staging Modality in Testis Cancer

    PubMed Central

    Sterbis, Joseph R.; Rice, Kevin R.; Javitt, Marcia C.; Schenkman, Noah S.; Brassell, Stephen A.

    2010-01-01

    Objective: Computed tomography and chest radiographs provide the standard imaging for staging, treatment, and surveillance of testicular germ cell neoplasms. Positron emission tomography has recently been utilized for staging, but is somewhat limited in its ability to provide anatomic localization. Fusion imaging combines the metabolic information provided by positron emission tomography with the anatomic precision of computed tomography. To the best of our knowledge, this represents the first study of the effectiveness of fusion imaging in the evaluation of patients with testis cancer. Methods: A prospective study of 49 patients presenting to Walter Reed Army Medical Center with testicular cancer from 2003 to 2009 was performed. Fusion imaging was compared with conventional imaging, tumor markers, pathologic results, and clinical follow-up. Results: There were 14 true positives, 33 true negatives, 1 false positive, and 1 false negative. Sensitivity, specificity, positive predictive value, and negative predictive value were 93.3, 97.0, 93.3, and 97.0%, respectively. In 11 patient scenarios, fusion imaging differed from conventional imaging. Utility was found in superior lesion detection compared to helical computed tomography due to anatomical/functional image co-registration, detection of micrometastasis in lymph nodes (pathologic nodes < 1 cm), surveillance for recurrence post-chemotherapy, differentiating fibrosis from active disease in nodes < 2.5 cm, and acting as a quality assurance measure to computed tomography alone. Conclusions: In addition to demonstrating a sensitivity and specificity comparable or superior to conventional imaging, fusion imaging shows promise in providing additive data that may assist in clinical decision-making. PMID:21103077
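
    The reported operating characteristics follow directly from the stated counts (14 TP, 33 TN, 1 FP, 1 FN); a quick arithmetic check:

    ```python
    # Recomputing the reported operating characteristics from the stated counts.
    tp, tn, fp, fn = 14, 33, 1, 1

    sensitivity = tp / (tp + fn)  # 14/15 = 93.3%
    specificity = tn / (tn + fp)  # 33/34 = 97.0%
    ppv = tp / (tp + fp)          # 14/15 = 93.3%
    npv = tn / (tn + fn)          # 33/34 = 97.0%
    print(f"{sensitivity:.1%} {specificity:.1%} {ppv:.1%} {npv:.1%}")
    ```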

  14. Prevalence of Obesity and Associated Risk Factors Among Adolescents in Ankara, Turkey

    PubMed Central

    Ercan, Sırma; Dallar, Yıldız Bilge; Önen, Serdar; Engiz, Özlem

    2012-01-01

    Objective: The purpose of this study was to investigate the prevalence of and the risk factors associated with obesity among adolescents in Ankara, Turkey. Methods: The study was conducted in 26 schools in Ankara during the time period from September 2010 to March 2011. A total of 8848 adolescents aged 11-18 years were chosen using a population-based stratified cluster sampling method. Body mass index (BMI) of the participants was compared with the BMI references for Turkish children and adolescents to estimate the prevalence of overweight and obesity. A standardized questionnaire aiming to determine the sociodemographic characteristics, computer use, television (TV) watching, physical activity, and presence of obesity in the family was applied to the study group. Results: The results showed that the overall prevalence of obesity among adolescents was 7.7% (8.4% for females and 7.0% for males). It was observed that BMI increased as computer use increased. A greater proportion of the overweight and obese adolescents watched TV and used the computer for more than 2 hours/day as compared to their normal-weight counterparts. The normal-weight subjects were found to show a higher participation in regular physical activity. Obesity prevalence among the families of obese adolescents was 56.5%. Conclusions: The prevalence of adolescent obesity in Ankara, Turkey is lower as compared to many European countries and to the United States. Computer use, watching TV, physical activity and family factors are important risk factors for obesity. Conflict of interest: None declared. PMID:23149433

  15. Detailed T1-Weighted Profiles from the Human Cortex Measured in Vivo at 3 Tesla MRI.

    PubMed

    Ferguson, Bart; Petridou, Natalia; Fracasso, Alessio; van den Heuvel, Martijn P; Brouwer, Rachel M; Hulshoff Pol, Hilleke E; Kahn, René S; Mandl, René C W

    2018-04-01

    Studies into cortical thickness in psychiatric diseases based on T1-weighted MRI frequently report on aberrations in the cerebral cortex. Due to limitations in image resolution for studies conducted at conventional MRI field strengths (e.g. 3 Tesla (T)), this information cannot be used to establish which of the cortical layers may be implicated. Here we propose a new analysis method that computes one high-resolution average cortical profile per brain region, extracting myeloarchitectural information from T1-weighted MRI scans that are routinely acquired at a conventional field strength. To assess this new method, we acquired standard T1-weighted scans at 3 T and compared them with state-of-the-art ultra-high resolution T1-weighted scans optimised for intracortical myelin contrast acquired at 7 T. Average cortical profiles were computed for seven different brain regions. Besides a qualitative comparison between the 3 T scans, 7 T scans, and results from literature, we tested whether the results from dynamic time warping-based clustering are similar for the cortical profiles computed from 7 T and 3 T data. In addition, we quantitatively compared cortical profiles computed for V1, V2 and V7 for both 7 T and 3 T data using a priori information on their relative myelin concentration. Although qualitative comparisons show that, at an individual level, average profiles computed for 7 T have more pronounced features than 3 T profiles, the results from the quantitative analyses suggest that average cortical profiles computed from T1-weighted scans acquired at 3 T indeed contain myeloarchitectural information similar to profiles computed from the scans acquired at 7 T. The proposed method therefore provides a step forward to studying cortical myeloarchitecture in vivo at conventional magnetic field strength, both in health and disease.

  16. Investigation of mass transfer intensification under power ultrasound irradiation using 3D computational simulation: A comparative analysis.

    PubMed

    Sajjadi, Baharak; Asgharzadehahmadi, Seyedali; Asaithambi, Perumal; Raman, Abdul Aziz Abdul; Parthasarathy, Rajarathinam

    2017-01-01

    This paper aims at investigating the influence of acoustic streaming induced by low-frequency (24 kHz) ultrasound irradiation on mass transfer in a two-phase system. The main objective is to discuss the possible mass transfer improvements under ultrasound irradiation. Three analyses were conducted: i) experimental analysis of mass transfer under ultrasound irradiation; ii) comparative analysis between the results of the ultrasound-assisted mass transfer and those obtained with mechanical stirring; and iii) computational analysis of the systems using 3D CFD simulation. In the experimental part, the interactive effects of liquid rheological properties, ultrasound power and superficial gas velocity on mass transfer were investigated in two different sonicators. The results were then compared with those of mechanical stirring. In the computational part, the results were illustrated as a function of acoustic streaming behaviour, fluid flow pattern, gas/liquid volume fraction and turbulence in the two-phase system, and finally the mass transfer coefficient was specified. It was found that the additional turbulence created by ultrasound played the most important role in intensifying the mass transfer phenomena compared to that in the stirred vessel. Furthermore, a long residence time, which depends on geometrical parameters, is another key factor for mass transfer. The results obtained in the present study would help researchers understand the role of ultrasound as an energy source, and of acoustic streaming as one of the most important effects of ultrasound waves, in intensifying gas-liquid mass transfer in a two-phase system, and can be a breakthrough in the design procedure as no similar studies were found in the existing literature.

  17. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  18. An approach to quantum-computational hydrologic inverse analysis.

    PubMed

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  19. An approach to quantum-computational hydrologic inverse analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  20. Insomnia symptoms among Greek adolescent students with excessive computer use

    PubMed Central

    Siomos, K E; Braimiotis, D; Floros, G D; Dafoulis, V; Angelopoulos, N V

    2010-01-01

    Background: The aim of the present study is to assess the intensity of computer use and insomnia epidemiology among Greek adolescents, to examine any possible age and gender differences and to investigate whether excessive computer use is a risk factor for developing insomnia symptoms. Patients and Methods: Cross-sectional study of a stratified sample of 2195 high school students. Demographic data were recorded and two specific questionnaires were used, the Adolescent Computer Addiction Test (ACAT) and the Athens Insomnia Scale (AIS). Results: Females scored higher than males on insomnia complaints but lower on computer use and addiction. A dose-mediated effect of computer use on insomnia complaints was recorded. Computer use had a larger effect size than sex on insomnia complaints. Duration of computer use was longer for those adolescents classified as suffering from insomnia compared to those who were not. Conclusions: Computer use can be a significant cause of insomnia complaints in an adolescent population regardless of whether the individual is classified as addicted or not. PMID:20981171

  1. Computer-based training for safety: comparing methods with older and younger workers.

    PubMed

    Wallen, Erik S; Mulloy, Karen B

    2006-01-01

    Computer-based safety training is becoming more common and is being delivered to an increasingly aging workforce. Aging results in a number of changes that make it more difficult to learn from certain types of computer-based training. Instructional designs derived from cognitive learning theories may overcome some of these difficulties. Three versions of computer-based respiratory safety training were shown to older and younger workers who then took a high and a low level learning test. Younger workers did better overall. Both older and younger workers did best with the version containing text with pictures and audio narration. Computer-based training with pictures and audio narration may be beneficial for workers over 45 years of age. Computer-based safety training has advantages but workers of different ages may benefit differently. Computer-based safety programs should be designed and selected based on their ability to effectively train older as well as younger learners.

  2. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a main trend of Cloud Computing. Resource Management has a significant effect on the design, realization, and efficiency of Cloud Computing Federation. Cloud Computing Federation has the typical characteristics of a Complex System; therefore, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC) in this paper, with the detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers for the evolution of the complex network which is composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance and adaptive ability. The result of the model experiment confirmed the advantage of RMABC in resource discovery performance.

  3. Creation and use of a survey instrument for comparing mobile computing devices.

    PubMed

    Macri, Jennifer M; Lee, Paul P; Silvey, Garry M; Lobach, David F

    2005-01-01

    Both personal digital assistants (PDAs) and tablet computers have emerged to facilitate data collection at the point of care. However, little research has been reported comparing these mobile computing devices in specific care settings. In this study we present an approach for comparing functionally identical applications on a Palm operating system-based PDA and a Windows-based tablet computer for point-of-care documentation of clinical observations by eye care professionals when caring for patients with diabetes. Eye-care professionals compared the devices through focus group sessions and through validated usability surveys. This poster describes the development and use of the survey instrument used for comparing mobile computing devices.

  4. Cloud computing for comparative genomics.

    PubMed

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to expand. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  6. APORT: a program for the area-based apportionment of county variables to cells of a polar grid. [Airborne pollutant transport models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, D.E.; Little, C.A.

    1978-11-01

    The APORT computer code was developed to apportion variables tabulated for polygon-structured civil districts onto cells of a polar grid. The apportionment is based on fractional overlap between the polygon and the grid cells. Centering the origin of the polar system at a pollutant source site yields results that are very useful for assessing and interpreting the effects of airborne pollutant dissemination. The APOPLT graphics code, which uses the same data set as APORT, provides a convenient visual display of the polygon structure and the extent of the polar grid. The APORT/APOPLT methodology was verified by application to county summaries of cattle population for counties surrounding the Oyster Creek, New Jersey, nuclear power plant. These numerical results, which were obtained using approximately 2-min computer time on an IBM System 360/91 computer, compare favorably to results of manual computations in both speed and accuracy.

  7. Using FOCUS to determine the radiation impedance for square transducers

    NASA Astrophysics Data System (ADS)

    Jennings, Matthew R.; McGough, Robert J.

    2012-10-01

    The power radiated by an ultrasound transducer is calculated with the radiation resistance, which is the real part of the radiation impedance. For circular transducers, an analytical solution for the radiation impedance is known, but an analytical expression for the radiation impedance is not available for rectangular or square transducers. To determine the radiation resistance in FOCUS, the pressure on the surface of a square transducer is computed with the fast nearfield method, and then the force on the transducer face is computed by integrating the pressure. Results using this approach are numerically evaluated for a range of ka values from 0.1 to 16. The pressure on the transducer face is also computed with the Rayleigh-Sommerfeld integral, and the results are compared. The numerical value of the radiation resistance computed with FOCUS and with the Rayleigh-Sommerfeld integral converge to the same value, although FOCUS calculates the same result in about one-quarter of the time.
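
    For the circular case mentioned above, the known closed form is Z/(rho*c*A) = R1(2ka) + i*X1(2ka), with R1(x) = 1 - 2*J1(x)/x and X1(x) = 2*H1(x)/x, where J1 is the first-order Bessel function and H1 the first-order Struve function. The SciPy sketch below evaluates this analytical circular-piston benchmark; it is an illustration of the reference solution, not the FOCUS computation for square transducers.

    ```python
    import numpy as np
    from scipy.special import j1, struve

    def circular_piston_impedance(ka):
        """Normalized radiation impedance Z/(rho*c*A) of a baffled circular piston."""
        x = 2.0 * ka
        resistance = 1.0 - 2.0 * j1(x) / x  # real part R1(2ka): radiation resistance
        reactance = 2.0 * struve(1, x) / x  # imaginary part X1(2ka)
        return resistance + 1j * reactance

    for ka in (0.1, 1.0, 4.0, 16.0):        # spans the ka range quoted in the abstract
        print(ka, circular_piston_impedance(ka))
    ```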

  8. Computer printing and filing of microbiology reports. 2. Evaluation and comparison with a manual system, and comparison of two manual systems.

    PubMed Central

    Goodwin, C S

    1976-01-01

    A manual system of microbiology reporting with a National Cash Register (NCR) form with printed names of bacteria and antibiotics required less time to compose reports than a previous manual system that involved rubber stamps and handwriting on plain report sheets. The NCR report cost 10.28 pence and, compared with a computer system, it had the advantages of simplicity and familiarity, and reports were not delayed by machine breakdown, operator error, or data being incorrectly submitted. A computer reporting system for microbiology resulted in more accurate reports costing 17.97 pence each, faster and more accurate filing and recall of reports, and a greater range of analyses of reports that was valued particularly by the control-of-infection staff. Composition of computer-readable reports by technicians on Port-a-punch cards took longer than composing NCR reports. Enquiries for past results were more quickly answered from computer printouts of reports and a day book in alphabetical order. PMID:939810

  9. An evaluation of four single element airfoil analytic methods

    NASA Technical Reports Server (NTRS)

    Freuler, R. J.; Gregorek, G. M.

    1979-01-01

    A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularity methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.

  10. Computational analysis of forebody tangential slot blowing

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Agosta-Greenman, Roxana M.; Rizk, Yehia M.; Schiff, Lewis B.; Cummings, Russell M.

    1994-01-01

    An overview of the computational effort to analyze forebody tangential slot blowing is presented. Tangential slot blowing generates side force and yawing moment which may be used to control an aircraft flying at high angles of attack. Two different geometries are used in the analysis: (1) The High Alpha Research Vehicle; and (2) a generic chined forebody. Computations using the isolated F/A-18 forebody are obtained at full-scale wind tunnel test conditions for direct comparison with available experimental data. The effects of over- and under-blowing on force and moment production are analyzed. Time-accurate solutions using the isolated forebody are obtained to study the force onset timelag of tangential slot blowing. Computations using the generic chined forebody are obtained at experimental wind tunnel conditions, and the results compared with available experimental data. This computational analysis complements the experimental results and provides a detailed understanding of the effects of tangential slot blowing on the flow field about simple and complex geometries.

  11. Numerical method to compute acoustic scattering effect of a moving source.

    PubMed

    Song, Hao; Yi, Mingxu; Huang, Jun; Pan, Yalin; Liu, Dawei

    2016-01-01

    In this paper, the aerodynamic characteristics of a ducted tail rotor in hover have been numerically studied using a CFD method. An analytical time domain formulation based on the Ffowcs Williams-Hawkings (FW-H) equation is derived for the prediction of the acoustic velocity field and used as a Neumann boundary condition on a rigid scattering surface. In order to predict the aerodynamic noise, a hybrid method combining computational aeroacoustics with an acoustic thin-body boundary element method has been proposed. The aerodynamic results and the calculated sound pressure levels (SPLs) are compared with those of a known method for validation. Simulation results show that the duct can change the value of the SPLs and the sound directivity. Compared with the isolated tail rotor, the SPLs of the ducted tail rotor are smaller at certain azimuths.

  12. Comparison between measured and computed magnetic flux density distribution of simulated transformer core joints assembled from grain-oriented and non-oriented electrical steel

    NASA Astrophysics Data System (ADS)

    Shahrouzi, Hamid; Moses, Anthony J.; Anderson, Philip I.; Li, Guobao; Hu, Zhuochao

    2018-04-01

    The flux distribution in an overlapped linear joint constructed in the central region of an Epstein Square was studied experimentally, and the results were compared with those obtained using a computational magnetic field solver. High permeability grain-oriented (GO) and low permeability non-oriented (NO) electrical steels were compared at a nominal core flux density of 1.60 T at 50 Hz. It was found that the experimental and computed results agreed well only at flux densities at which the reluctances of the different flux paths are similar. It was also revealed that the flux becomes more uniform when the working point of the electrical steel is close to the knee of the B-H curve of the steel.

  13. Lattice QCD static potentials of the meson-meson and tetraquark systems computed with both quenched and full QCD

    NASA Astrophysics Data System (ADS)

    Bicudo, P.; Cardoso, M.; Oliveira, O.; Silva, P. J.

    2017-10-01

    We revisit the static potential for the Q Q Q ¯Q ¯ system using SU(3) lattice simulations, studying both the color singlets' ground state and first excited state. We consider geometries where the two static quarks and the two antiquarks are at the corners of rectangles of different sizes. We analyze the transition between a tetraquark system and a two-meson system with a two by two correlator matrix. We compare the potentials computed with quenched QCD and with dynamical quarks. We also compare our simulations with the results of previous studies and analyze quantitatively fits of our results with Ansätze inspired in the string flip-flop model and in its possible color excitations.

  14. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating the total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
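
    Waveform similarity in such multiplet searches is typically quantified by the peak of the normalized cross-correlation; the numpy-only sketch below illustrates that core computation. The window length, grouping threshold and synthetic events are assumptions, and the optimized PS3™ kernel is not reproduced.

    ```python
    import numpy as np

    def normalized_xcorr_max(a, b):
        """Peak of the normalized cross-correlation between two waveforms."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.abs(np.correlate(a, b, mode="full")).max()

    rng = np.random.default_rng(7)
    w = rng.normal(size=512)
    event1 = w + 0.2 * rng.normal(size=512)              # two noisy copies of one source
    event2 = np.roll(w, 13) + 0.2 * rng.normal(size=512)
    print(normalized_xcorr_max(event1, event2))          # close to 1 -> same multiplet
    # Events whose peak correlation exceeds a chosen threshold (e.g. 0.9) are grouped.
    ```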

  15. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software based neural network resulted in a decrease in the computation time by a factor of 30, and a hardware based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to include patterns not contained in its training set.
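
    A hedged toy reproduction of that workflow: generate input-output pairs as the iterative solver would (replaced here by a synthetic closed form) and train a small feedforward network to return the dose correction directly. The 3x3 density features, the surrogate formula and the network size are assumptions, not details from the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(8)
    # Stand-in training set: local pattern densities -> dose corrections that an
    # iterative method would have produced (here a synthetic closed form).
    density = rng.uniform(0.0, 1.0, size=(2000, 9))  # 3x3 neighbourhood densities
    dose = 1.0 / (1.0 + 0.6 * density.mean(axis=1))  # surrogate "iterative" answer

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(density, dose)

    test = rng.uniform(0.0, 1.0, size=(5, 9))
    print(net.predict(test))  # the trained net yields corrections without iterating
    ```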

  16. Numerical Computation of Homogeneous Slope Stability

    PubMed Central

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution of the minimum factor of safety (FOS) to solving of a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS). PMID:25784927
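
    The PSO ingredient is straightforward to sketch. Below is a minimal, self-contained particle swarm loop; the Rosenbrock function stands in for the factor-of-safety objective, which in the paper comes from the limit equilibrium expressions over slip-surface parameters.

    ```python
    import numpy as np

    def pso_minimize(f, lo, hi, n_particles=30, iters=200, seed=0):
        """Minimal particle swarm optimizer; f maps an (n_particles, dim) array."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), f(x)
        for _ in range(iters):
            g = pbest[pbest_val.argmin()]            # current global best position
            r1, r2 = rng.uniform(size=(2, *x.shape))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = f(x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
        return pbest[pbest_val.argmin()], pbest_val.min()

    # Stand-in objective in place of the slope factor-of-safety function.
    rosen = lambda x: (1 - x[:, 0]) ** 2 + 100 * (x[:, 1] - x[:, 0] ** 2) ** 2
    print(pso_minimize(rosen, np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
    ```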

  17. Numerical computation of homogeneous slope stability.

    PubMed

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution of the minimum factor of safety (FOS) to solving of a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS).

  18. Application of the implicit MacCormack scheme to the PNS equations

    NASA Technical Reports Server (NTRS)

    Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.

    1983-01-01

    The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.

  19. On the application of the lattice Boltzmann method to the investigation of glottal flow

    PubMed Central

    Kucinschi, Bogdan R.; Afjeh, Abdollah A.; Scherer, Ronald C.

    2008-01-01

    The production of voice is directly related to the vibration of the vocal folds, which is generated by the interaction between the glottal flow and the tissue of the vocal folds. In the current study, the aerodynamics of the symmetric glottis is investigated numerically for a number of static configurations. The numerical investigation is based on the lattice Boltzmann method (LBM), which is an alternative approach within computational fluid dynamics. Compared to the traditional Navier–Stokes computational fluid dynamics methods, the LBM is relatively easy to implement and can deal with complex geometries without requiring a dedicated grid generator. The multiple relaxation time model was used to improve the numerical stability. The results obtained with LBM were compared to the results provided by a traditional Navier–Stokes solver and experimental data. It was shown that LBM results are satisfactory for all the investigated cases. PMID:18646995
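
    For readers unfamiliar with the method, the sketch below shows the two basic D2Q9 ingredients an LBM solver is built from: the discrete equilibrium distribution and a relaxation (collision) step. It uses the simpler single-relaxation-time (BGK) form; the multiple relaxation time model used in the study for stability is not reproduced here.

    ```python
    import numpy as np

    # D2Q9 lattice: discrete velocities and weights.
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

    def equilibrium(rho, u):
        """f_i^eq = w_i * rho * (1 + 3 c_i.u + 4.5 (c_i.u)^2 - 1.5 u.u)."""
        cu = np.einsum("id,xyd->ixy", c, u)
        usq = np.einsum("xyd,xyd->xy", u, u)
        return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

    def bgk_collide(f, tau):
        """Single-relaxation-time (BGK) collision step."""
        rho = f.sum(axis=0)
        u = np.einsum("ixy,id->xyd", f, c) / rho[..., None]
        return f - (f - equilibrium(rho, u)) / tau

    f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))  # fluid at rest
    f = bgk_collide(f, tau=0.8)
    print(f.sum(axis=0).mean())  # mass density conserved (= 1.0)
    ```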

  20. Learning Oceanography from a Computer Simulation Compared with Direct Experience at Sea

    ERIC Educational Resources Information Center

    Winn, William; Stahr, Frederick; Sarason, Christian; Fruland, Ruth; Oppenheimer, Peter; Lee, Yen-Ling

    2006-01-01

    Considerable research has compared how students learn science from computer simulations with how they learn from "traditional" classes. Little research has compared how students learn science from computer simulations with how they learn from direct experience in the real environment on which the simulations are based. This study compared two…

  1. A modeling approach to predict acoustic nonlinear field generated by a transmitter with an aluminum lens.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Chen, Tao; Li, Faqi; Zhang, Dong

    2011-09-01

    In this work, the authors propose a modeling approach to compute the nonlinear acoustic field generated by a flat piston transmitter with an attached aluminum lens. In this approach, the geometrical parameters (radius and focal length) of a virtual source are initially determined by Snell's refraction law and then adjusted based on the Rayleigh integral result in the linear case. Then, this virtual source is used with the nonlinear spheroidal beam equation (SBE) model to predict the nonlinear acoustic field in the focal region. To examine the validity of this approach, the calculated nonlinear result is compared with those from the Westervelt and Khokhlov-Zabolotskaya-Kuznetsov (KZK) equations for a focal intensity of 7 kW/cm². Results indicate that this approach could accurately describe the nonlinear acoustic field in the focal region with less computation time. The proposed modeling approach is shown to accurately describe the nonlinear acoustic field in the focal region. Compared with the Westervelt equation, the computation time of this approach is significantly reduced. It might also be applicable for the widely used concave focused transmitter with a large aperture angle.

  2. Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card

    NASA Astrophysics Data System (ADS)

    Jiang, Jinpeng; Zhu, Peimin

    2018-05-01

    Full waveform inversion (FWI) is a challenging procedure due to the high computational cost related to the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement the GPU-based 2D elastic FWI (EFWI) in time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of relatively small global memory on the GPU, the boundary saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion accelerates the convergence of the misfit function. A multiscale inversion strategy is performed in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with the eight-core CPU implementations optimized by OpenMP. The test results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.

  3. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1987-01-01

    A multiple-time-scale turbulence model based on a single-point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of multiple time scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake-boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  4. A multiple-time-scale turbulence model based on variable partitioning of the turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A multiple-time-scale turbulence model based on a single-point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of multiple time scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake-boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  5. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs), even without complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the backward ray-tracing technique. Rays are launched and traced in parallel under different illumination conditions and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
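
    For contrast with the ray-traced approach, the baseline point-cloud CGH that the timing comparison refers to can be sketched as a superposition of spherical waves, one per scene point, accumulated on the hologram plane. Wavelength, pitch, and the scene geometry below are illustrative assumptions.

      import numpy as np

      wl = 532e-9                          # wavelength [m], a green laser (illustrative)
      k = 2 * np.pi / wl
      N, pitch = 512, 8e-6                 # hologram resolution and pixel pitch

      x = (np.arange(N) - N / 2) * pitch   # hologram-plane coordinates
      X, Y = np.meshgrid(x, x)

      # scene samples (x, y, z, amplitude); in the ray-traced variant these would
      # come from backward rays carrying photorealistic shading instead
      rng = np.random.default_rng(0)
      pts = rng.uniform([-1e-3, -1e-3, 0.08, 0.5], [1e-3, 1e-3, 0.12, 1.0], (200, 4))

      H = np.zeros((N, N), dtype=complex)
      for px, py, pz, amp in pts:          # accumulate one spherical wave per point
          r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
          H += amp / r * np.exp(1j * k * r)

      phase_hologram = np.angle(H)         # keep phase only (kinoform)

    The cost grows with the product of points and pixels, which is why replacing dense point sets with a moderate number of shaded rays, traced in parallel, pays off.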

  6. Ergonomics standards and guidelines for computer workstation design and the impact on users' health - a review.

    PubMed

    Woo, E H C; White, P; Lai, C W K

    2016-03-01

    This paper presents an overview of global ergonomics standards and guidelines for design of computer workstations, with particular focus on their inconsistency and associated health risk impact. Overall, considerable disagreements were found in the design specifications of computer workstations globally, particularly in relation to the results from previous ergonomics research and the outcomes from current ergonomics standards and guidelines. To cope with the rapid advancement in computer technology, this article provides justifications and suggestions for modifications in the current ergonomics standards and guidelines for the design of computer workstations. Practitioner Summary: A research gap exists in ergonomics standards and guidelines for computer workstations. We explore the validity and generalisability of ergonomics recommendations by comparing previous ergonomics research through to recommendations and outcomes from current ergonomics standards and guidelines.

  7. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    PubMed

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool Roundup as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
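
    The scheduling idea (estimate each job's runtime in advance, then place jobs so that no node idles) can be illustrated with the classic longest-processing-time-first heuristic. The runtime model itself, built from genome size and complexity, is the paper's contribution; the sketch below merely consumes such estimates, and all names and numbers are illustrative.

      import heapq

      def schedule_lpt(jobs, n_nodes):
          """Assign (job_id, est_runtime) pairs to nodes, longest jobs first."""
          loads = [(0.0, i, []) for i in range(n_nodes)]
          heapq.heapify(loads)
          for job_id, t in sorted(jobs, key=lambda j: -j[1]):
              load, i, assigned = heapq.heappop(loads)   # least-loaded node
              assigned.append(job_id)
              heapq.heappush(loads, (load + t, i, assigned))
          return sorted(loads, key=lambda x: x[1])       # (makespan, node, jobs)

      # toy genome-pair jobs with runtime estimates in hours
      jobs = [("gA-gB", 3.5), ("gA-gC", 1.2), ("gB-gC", 2.8), ("gA-gD", 0.7)]
      for makespan, node, assigned in schedule_lpt(jobs, 2):
          print(f"node {node}: {makespan:.1f} h {assigned}")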

  8. Noise optimization of a regenerative automotive fuel pump

    NASA Astrophysics Data System (ADS)

    Wang, J. F.; Feng, H. H.; Mou, X. L.; Huang, Y. X.

    2017-03-01

    The regenerative pump used in automotive fuel systems faces a noise problem. To understand the mechanism in detail, Computational Fluid Dynamics (CFD) and Computational Acoustic Analysis (CAA) were used together to study the fluid and acoustic characteristics of the fuel pump, using ANSYS CFX 15.0 and LMS Virtual.Lab Rev 12, respectively. The CFD model and the acoustical model were validated by a mass flow rate test and a sound pressure test, respectively. Comparing the computational and experimental results shows that sound pressure levels at the observer position are consistent at high frequencies, especially at the blade passing frequency. After validating the models, several numerical models were analyzed for noise improvement. It is observed that for configurations having a greater number of impeller blades, the noise level at the blade passing frequency was significantly improved compared to that of the original model.

  9. Photographic zenith tube of the Zvenigorod INASAN Observatory - Processing method and observation results

    NASA Astrophysics Data System (ADS)

    Yurov, E. A.

    1992-10-01

    A method for the reduction of PZT plates which has been used at the Zvenigorod INASAN Observatory since 1986 is described. The formulas used for computing the coordinates of the stellar image in the focal plane at the midpoint of the exposure are correct to 0.0024 arcsec. Observations from February 1986 to October 1988 are compared with data of BIH and IERS, and the results of the comparison are used to compute the amplitudes of the annual terms of nonpolar variations in the observed latitudes and Delta(UT1).

  10. Simulation of blast action on civil structures using ANSYS Autodyn

    NASA Astrophysics Data System (ADS)

    Fedorova, N. N.; Valger, S. A.; Fedorov, A. V.

    2016-10-01

    The paper presents the results of 3D numerical simulations of shock wave spreading in a cityscape area. ANSYS Autodyn software is used for the computations. Different test cases are investigated numerically. On the basis of the computations, the complex transient flowfield structure formed in the vicinity of prismatic bodies was obtained and analyzed. The simulation results have been compared to experimental data. The ability of two numerical schemes to correctly predict the pressure history at several gauges placed on the walls of the obstacles is also studied.

  11. Computations of Torque-Balanced Coaxial Rotor Flows

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Chan, William M.; Pulliam, Thomas H.

    2017-01-01

    Interactional aerodynamics has been studied for counter-rotating coaxial rotors in hover. The effects of torque balancing on the performance of coaxial-rotor systems have been investigated. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, and a hybrid turbulence model. Computational results for an experimental model are compared to available data. The results for a coaxial quadcopter vehicle with and without torque balancing are discussed. Understanding interactions in coaxial-rotor flows would help improve the design of next-generation autonomous drones.

  12. Study of effects of injector geometry on fuel-air mixing and combustion

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.; Roach, R. L.

    1977-01-01

    An implicit finite-difference method has been developed for computing the flow in the near field of a fuel injector as part of a broader study of the effects of fuel injector geometry on fuel-air mixing and combustion. Detailed numerical results have been obtained for cases of laminar and turbulent flow without base injection, corresponding to the supersonic base flow problem. These numerical results indicated that the method is stable and convergent, and that significant savings in computer time can be achieved, compared with explicit methods.

  13. Computer versus lecture: a comparison of two methods of teaching oral medication administration in a nursing skills laboratory.

    PubMed

    Jeffries, P R

    2001-10-01

    The purpose of this study was to compare the effectiveness of an interactive, multimedia CD-ROM and a traditional lecture for teaching oral medication administration to nursing students. A randomized pretest/posttest experimental design was used. Forty-two junior baccalaureate nursing students beginning their fundamentals nursing course were recruited for this study at a large university in the midwestern United States. The students ranged in age from 19 to 45. Seventy-three percent reported having average computer skills and experience, while 15% reported poor to below average skills. Two methods of teaching oral medication administration were compared: a scripted lecture with black-and-white overhead transparencies plus an 18-minute videotape on medication administration, and an interactive, multimedia CD-ROM program covering the same content. There were no significant (p < .05) baseline differences between the computer and lecture groups by education or computer skills. Results showed significant differences between the two groups in cognitive gains and student satisfaction (p = .01), with the computer group demonstrating higher student satisfaction and more cognitive gains than the lecture group. The groups were similar in their ability to demonstrate the skill correctly. Importantly, time on task using the CD-ROM was less, with 96% of the learners completing the program in 2 hours or less, compared to 3 hours of class time for the lecture group.

  14. The multifacet graphically contracted function method. I. Formulation and implementation

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.

    2014-08-01

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  15. The multifacet graphically contracted function method. I. Formulation and implementation.

    PubMed

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R

    2014-08-14

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  16. Noise prediction of a subsonic turbulent round jet using the lattice-Boltzmann method

    PubMed Central

    Lew, Phoi-Tack; Mongeau, Luc; Lyrintzis, Anastasios

    2010-01-01

    The lattice-Boltzmann method (LBM) was used to study the far-field noise generated from an unheated turbulent axisymmetric jet at Mach number Mj = 0.4. A commercial code based on the LBM kernel was used to simulate the turbulent flow exhausting from a pipe which is 10 jet radii in length. Near-field flow results such as jet centerline velocity decay rates and turbulence intensities were in agreement with experimental results and results from comparable LES studies. The predicted far-field sound pressure levels were within 2 dB of published experimental results. Weak unphysical tones were present at high frequency in the computed radiated sound pressure spectra. These tones are believed to be due to spurious sound wave reflections at boundaries between regions of varying voxel resolution. These “VR tones” did not appear to bias the underlying broadband noise spectrum, and they did not affect the overall levels significantly. The LBM appears to be a viable approach, comparable in accuracy to large eddy simulations, for the problem considered. The main advantages of this approach over Navier–Stokes based finite difference schemes may be a reduced computational cost, ease of including the nozzle in the computational domain, and ease of investigating nozzles with complex shapes. PMID:20815448

  17. Electromagnetic Showers at High Energy

    ERIC Educational Resources Information Center

    Loos, J. S.; Dawson, S. L.

    1978-01-01

    Some of the properties of electromagnetic showers observed in an experimental study are illustrated. Experimental data and results from quantum electrodynamics are discussed. Data and theory are compared using computer simulation. (BB)

  18. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background: Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results: In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general-purpose multi-party computation. Conclusion: We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650
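
    The additive-homomorphic core is simple to demonstrate: with Paillier encryption, the database side can compute the encrypted overlap between an encrypted query fingerprint and each plaintext compound fingerprint without decrypting anything. Below is a toy sketch using the third-party phe library (not the paper's implementation; fingerprints are 8 bits here for readability).

      from phe import paillier   # pip install phe

      # query holder: encrypt the molecular fingerprint bit by bit
      pub, priv = paillier.generate_paillier_keypair(n_length=2048)
      query = [1, 0, 1, 1, 0, 0, 1, 0]
      enc_query = [pub.encrypt(b) for b in query]

      # database holder: Enc(sum_i q_i * d_i) via homomorphic additions only;
      # the query bits are never seen in the clear
      compound = [1, 1, 1, 0, 0, 0, 1, 1]
      enc_overlap = pub.encrypt(0)
      for eq, d in zip(enc_query, compound):
          if d:
              enc_overlap = enc_overlap + eq

      # query holder decrypts only the similarity numerator
      print(priv.decrypt(enc_overlap))    # -> 3 shared bits

    A Tversky or Jaccard score is then formed from such overlap counts, which is exactly the kind of addition-only computation the protocol is restricted to.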

  19. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit-energy distribution, this approach gives accurate results at low computational cost compared to other computation methods existing in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit-energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations which point out the accuracy of our approach.
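
    The structure of the calculation is to condition the Gaussian error probability on the bit energy and then average over the bit-energy distribution produced by the chaotic spreading sequence. The sketch below does this for the simplest single-user AWGN case; the paper's full expression additionally accounts for multipath, RAKE combining, and multiuser interference, and the map and parameters here are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      beta, n_bits = 8, 20000              # spreading factor, simulated bits

      # chaotic chips from the Chebyshev-type map x -> 1 - 2x^2
      x = np.empty(beta * n_bits)
      x[0] = 0.31
      for i in range(1, x.size):
          x[i] = 1.0 - 2.0 * x[i - 1] ** 2
      chips = x.reshape(n_bits, beta)

      Eb = (chips ** 2).sum(axis=1)        # per-bit energy varies from bit to bit
      Eb_mean = Eb.mean()

      for snr_db in range(0, 12, 2):
          N0 = Eb_mean / 10 ** (snr_db / 10)
          # BER = E_Eb[ Q(sqrt(2 Eb / N0)) ], averaged over the energy distribution
          ber = norm.sf(np.sqrt(2.0 * Eb / N0)).mean()
          print(snr_db, ber)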

  20. Laser Spot Detection Based on Reaction Diffusion.

    PubMed

    Vázquez-Otero, Alejandro; Khikhlukha, Danila; Solano-Altamirano, J M; Dormido, Raquel; Duro, Natividad

    2016-03-01

    Center-location of a laser spot is a problem of interest when the laser is used for processing and performing measurements. Measurement quality depends on correctly determining the location of the laser spot. Hence, improving and proposing algorithms for the correct location of the spots are fundamental issues in laser-based measurements. In this paper we introduce a Reaction Diffusion (RD) system as the main computational framework for robustly finding laser spot centers. The method presented is compared with a conventional approach for locating laser spots, and the experimental results indicate that RD-based computation generates reliable and precise solutions. These results confirm the flexibility of the new computational paradigm based on RD systems for addressing problems that can be reduced to a set of geometric operations.
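
    As a flavor of the approach, a stripped-down sketch: seed a bistable reaction-diffusion medium with the camera frame, let super-threshold regions (the spot) grow while sub-threshold noise decays, and read the center off the stabilized pattern. This is a simpler RD system than the one tuned in the paper, and all parameters are illustrative.

      import numpy as np

      def lap(u):
          return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

      # synthetic camera frame: a Gaussian laser spot plus uniform noise
      n = 128
      yy, xx = np.mgrid[0:n, 0:n]
      rng = np.random.default_rng(1)
      img = np.exp(-((xx - 70.0) ** 2 + (yy - 45.0) ** 2) / 50.0) + 0.1 * rng.random((n, n))

      # bistable reaction-diffusion: states above threshold a grow toward 1,
      # noise below the threshold decays to 0
      u, Du, a, dt = img.copy(), 0.2, 0.3, 0.2
      for _ in range(600):
          u += dt * (Du * lap(u) + u * (1.0 - u) * (u - a))

      cy, cx = np.argwhere(u > 0.5).mean(axis=0)
      print(f"estimated spot center: ({cx:.1f}, {cy:.1f})")   # near (70, 45)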

  1. An Inviscid Computational Study of Three '07 Mars Lander Aeroshell Configurations Over a Mach Number Range of 2.3 to 4.5

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Sutton, Kenneth (Technical Monitor)

    2001-01-01

    This report documents the results of a study conducted to compute the inviscid longitudinal aerodynamic characteristics of three aeroshell configurations of the proposed '07 Mars lander. This was done in support of the activity to design a smart lander for the proposed '07 Mars mission. In addition to the three configurations with tabs, designated as the Shelf, the Canted, and the Ames, the baseline configuration (without tab) was also studied. The unstructured grid inviscid CFD software FELISA was used, and the longitudinal aerodynamic characteristics of the four configurations were computed for Mach numbers of 2.3, 2.7, 3.5, and 4.5, and for an angle-of-attack range of -4 to 20 degrees. Wind tunnel tests had been conducted on scale models of these four configurations in the Unitary Plan Wind Tunnel at NASA Langley Research Center. The present computational results are compared with the data from these tests. Some differences are noticed between the two results, particularly at the lower Mach numbers. These differences are attributed to the pressures acting on the aft body. Most of the present computations were done on the forebody only. Additional computations were done on the full body (forebody and afterbody) for the baseline and the Shelf configurations. Results of some computations done (to simulate flight conditions) with the Mars gas option and with an effective gamma are also included.

  2. The hierarchical expert tuning of PID controllers using tools of soft computing.

    PubMed

    Karray, F; Gueaieb, W; Al-Sharhan, S

    2002-01-01

    We present soft computing-based results pertaining to the hierarchical tuning process of PID controllers located within the control loop of a class of nonlinear systems. The results are compared with PID controllers implemented either in a stand-alone scheme or as part of a conventional gain scheduling structure. This work is motivated by the increasing need in industry to design highly reliable and efficient controllers for dealing with the regulation and tracking capabilities of complex processes characterized by nonlinearities and possibly time-varying parameters. The soft computing-based controllers proposed are hybrid in nature, in that they integrate within a well-defined hierarchical structure the benefits of hard algorithmic controllers with those having supervisory capabilities. The proposed controllers also have the distinct features of learning and auto-tuning without the need for tedious and computationally expensive online system identification schemes.
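
    As a minimal illustration of the two layers being combined, the sketch below pairs a textbook PID law with a crude supervisory rule that retunes the gains; the paper's hierarchical expert tuner is far richer (fuzzy rules, learning), and the rule and thresholds here are invented for illustration only.

      class PID:
          """Plain PID control law; the hierarchy's lower layer."""
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_err = 0.0, 0.0

          def step(self, err):
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      def supervise(pid, overshoot):
          """Stand-in supervisory layer: soften an overly aggressive loop."""
          if overshoot > 0.2:          # invented rule, not from the paper
              pid.kp *= 0.8
              pid.kd *= 1.2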

  3. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  4. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: the excellent localization of the computer-vision-based approach and the strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485

  5. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption, and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetric bodies. Computed results are presented to represent the behavior of levitation-melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces, actually predicting the deformed shape of the specimens, breaks new ground and is expected to be the major strength of the proposed method.

  6. Computation of the acoustic radiation force using the finite-difference time-domain method.

    PubMed

    Cai, Feiyan; Meng, Long; Jiang, Chunxiang; Pan, Yu; Zheng, Hairong

    2010-10-01

    The computational details related to calculating the acoustic radiation force on an object using a 2-D grid finite-difference time-domain (FDTD) method are presented. The method is based on propagating the stress and velocity fields through the grid and determining the energy flow with and without the object. The axial and radial acoustic radiation forces predicted by the FDTD method are in excellent agreement with the results obtained by analytical evaluation of the scattering method. In particular, the results indicate that it is possible to trap the steel cylinder in the radial direction by optimizing the width of the Gaussian source and the operating frequency. As the sizes of the relevant objects are smaller than or comparable to the wavelength, the algorithm presented here can be easily extended to 3-D and can include torque computation algorithms, thus providing a highly flexible and universally usable computation engine.
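
    The propagation core of such a computation, staggered-grid updates of velocity from the pressure gradient and pressure from the velocity divergence, looks as follows in a 1-D sketch (the paper works on a 2-D grid with full stress fields; all parameters here are illustrative):

      import numpy as np

      nx, nt = 400, 600
      c0, rho0 = 1500.0, 1000.0            # water-like medium
      dx = 1e-4
      dt = 0.5 * dx / c0                   # CFL-stable time step
      kappa = rho0 * c0 ** 2               # bulk modulus

      p = np.zeros(nx)                     # pressure at integer grid points
      v = np.zeros(nx + 1)                 # velocity at half-integer points

      for it in range(nt):
          v[1:-1] -= dt / rho0 * (p[1:] - p[:-1]) / dx    # velocity from grad(p)
          p -= dt * kappa * (v[1:] - v[:-1]) / dx         # pressure from div(v)
          p[50] += np.sin(2 * np.pi * 1e6 * it * dt)      # 1 MHz tone source

      # the radiation force then follows from the difference in time-averaged
      # momentum/energy flux through a surface with and without the object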

  7. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples, and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  8. Study of high speed complex number algorithms. [for determining antenna for field radiation patterns

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1981-01-01

    A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.
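
    The pruned-transform idea can be made concrete by brute force: instead of a full 2-D FFT of the aperture currents, evaluate the transform only at the (kx, ky) samples lying on the desired constant-phi line. The direct sum below is mathematically what the efficient subset algorithm computes; the aperture size, spacing, and current distribution are illustrative assumptions.

      import numpy as np

      lam = 0.03                             # 10 GHz example, wavelength 3 cm
      k = 2 * np.pi / lam
      n, d = 64, lam / 2                     # aperture samples at half-wavelength spacing
      xg = (np.arange(n) - n / 2) * d
      X, Y = np.meshgrid(xg, xg)
      J = np.exp(-(X ** 2 + Y ** 2) / 0.05)  # toy aperture current distribution

      def pattern_cut(phi, thetas):
          """Far-field cut at azimuth phi: transform samples on one line only."""
          out = []
          for th in thetas:
              kx = k * np.sin(th) * np.cos(phi)
              ky = k * np.sin(th) * np.sin(phi)
              out.append(np.sum(J * np.exp(1j * (kx * X + ky * Y))) * d * d)
          return np.array(out)

      cut = pattern_cut(np.deg2rad(30.0), np.deg2rad(np.linspace(-90, 90, 181)))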

  9. Performance of the MIR Cooperative Solar Array After 2.5 Years in Orbit

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Hoffman, David J.

    1999-01-01

    The Mir Cooperative Solar Array (MCSA) was developed jointly by the United States and Russia to produce 6 kW of power for the Russian space station Mir. Four multi-orbit test sequences were executed between June 1996 and December 1998 to measure MCSA electrical performance. A dedicated Fortran computer code was developed to analyze the detailed thermal-electrical performance of the MCSA. The computational performance results compared very favorably with the measured flight data in most cases. Minor performance degradation was detected in one current-generating section of the MCSA. Yet overall, the flight data indicated the MCSA was meeting and exceeding performance expectations. There was no precipitous performance loss due to contamination or other causes after 2.5 years of operation. In this paper, we review the MCSA flight electrical performance tests, data, and computational modeling, and discuss findings from comparisons of the data with the computational results.

  10. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoints of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.

  11. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are applied to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and therefore might show similar psycho-active actions for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, Tutte polynomials are computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using Maple's GraphTheory computer algebra package. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend these results and compare them with those obtained from the Tutte polynomials of the benzodiazepines.
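
    For readers who want to reproduce the graph-theoretic quantities without Maple, recent versions of networkx expose the Tutte polynomial directly; the fused-ring toy graph below merely stands in for a benzodiazepine skeleton and is not the paper's actual molecular graph.

      import networkx as nx                    # needs a recent networkx with sympy
      from sympy import symbols

      # toy fused-ring graph: a 6-ring and a 7-ring sharing the edge (0, 1)
      G = nx.cycle_graph(6)
      G.add_edges_from([(0, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 1)])

      T = nx.tutte_polynomial(G)               # bivariate polynomial in x and y
      x, y = symbols("x y")
      print(T)
      print("spanning trees:", T.subs({x: 1, y: 1}))    # T(1,1)
      print("spanning forests:", T.subs({x: 2, y: 1}))  # T(2,1)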

  12. A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria

    ERIC Educational Resources Information Center

    Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

    2013-01-01

    The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

  13. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
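
    Of the four sampling methods, Lexicase Selection is the easiest to state in code: shuffle the fitness cases, then repeatedly keep only the candidates that are elite on the next case. A minimal sketch follows (the function name and error-matrix convention are ours, not from the paper).

      import random

      def lexicase_select(population, errors, rng=random):
          """Pick one parent; errors[i][j] = error of individual i on case j."""
          candidates = list(range(len(population)))
          cases = list(range(len(errors[0])))
          rng.shuffle(cases)
          for c in cases:
              best = min(errors[i][c] for i in candidates)
              candidates = [i for i in candidates if errors[i][c] == best]
              if len(candidates) == 1:
                  break
          return population[rng.choice(candidates)]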

  14. Evaluation of Computer Based Testing in lieu of Regular Examinations in Computer Literacy

    NASA Astrophysics Data System (ADS)

    Murayama, Koichi

    Because computer-based testing (CBT) has many advantages compared with the conventional paper-and-pencil testing (PPT) examination method, CBT has begun to be used in various situations in Japan, such as in qualifying examinations and in the TOEFL. This paper describes the usefulness and the problems of CBT applied to a regular college examination. The regular computer literacy examinations for first-year students were held using CBT, and the results were analyzed. Responses to a questionnaire indicated that many students accepted CBT without unpleasantness and considered CBT a positive factor that improved their motivation to study. CBT also decreased the faculty workload in terms of marking tests and data reduction.

  15. Analysis of airborne antenna systems using geometrical theory of diffraction and moment method computer codes

    NASA Technical Reports Server (NTRS)

    Hartenstein, Richard G., Jr.

    1985-01-01

    Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.

  16. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  17. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on the new additional algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.

  18. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformation and scan-conversion tasks that would normally be assigned to a single transputer-based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (performing more complex tasks on each node) may yield the desired increase in throughput.

  19. Comparison of computer assisted surgery with conventional technique for treatment of abaxial distal phalanx fractures in horses: an in vitro study.

    PubMed

    Rossol, Melanie; Gygax, Diego; Andritzky-Waas, Juliane; Zheng, Guoyan; Lischer, Christoph J; Zhang, Xuan; Auer, Joerg A

    2008-01-01

    To (1) evaluate and compare computer-assisted surgery (CAS) with conventional screw insertion (conventional osteosynthesis [COS]) for treatment of equine abaxial distal phalanx fractures; (2) compare planned screw position with actual postoperative position; and (3) determine the preferred screw insertion direction. Experimental study. Cadaveric equine limbs (n=32). In 8 specimens each, a 4.5 mm cortex bone screw was inserted in lag fashion in the dorsopalmar (plantar) direction using CAS or COS. In 2 other groups of 8, the screws were inserted in the opposite direction. The precision of CAS was determined by comparison of planned and actual screw positions. The preferred screw direction was also assessed for CAS and COS. In 4 of 6 direct comparisons, screw positioning was significantly better with CAS. Results of the precision analysis for screw position were similar to studies published in human medicine. None of the evaluated criteria identified a preferred direction for screw insertion. For abaxial fractures of the distal phalanx, superior precision in screw position is achieved with the CAS technique compared with the COS technique. Abaxial fractures of the distal phalanx lend themselves to computer-assisted implantation of 1 screw in a dorsopalmar (plantar) direction. Because of the complex anatomic relationships, and our results, we discourage use of the COS technique for repair of this fracture type.

  20. Investigations on the sensitivity of the computer code TURBO-2D

    NASA Astrophysics Data System (ADS)

    Amon, B.

    1994-12-01

    The two-dimensional computer model TURBO-2D for the calculation of two-phase flow was used to calculate the cold injection of fuel into a model chamber. The influence of the input parameters on the sensitivity of the obtained results was investigated. In addition, calculations were performed using experimental injection pressure data and the corresponding averaged injection parameters, and the results were compared.
