Flexible Launch Vehicle Stability Analysis Using Steady and Unsteady Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin can be caused by the aerodynamic undamping of one of the lower-frequency flexible or rigid body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic line loads derived from steady rigid aerodynamics. However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers, where experiment or unsteady computational aeroelastic analysis shows a reduced or even negative aerodynamic damping. A method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics is developed that uses unsteady computational fluid dynamics to compute the response of selected lower-frequency modes. The response is contained in a time history of the vehicle line loads. A proper orthogonal decomposition of the unsteady aerodynamic line-load response is used to reduce the scale of the data volume, and system identification is used to derive the aerodynamic stiffness, damping, and mass matrices. The results are compared with the damping and frequency computed from unsteady computational aeroelasticity and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady computational aeroelastic results.
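As an illustration of the data-reduction and identification steps described above (a minimal sketch under assumed inputs, not the paper's actual code or data), the proper orthogonal decomposition of a line-load time history can be done with an SVD and the aerodynamic stiffness, damping, and mass terms fit by linear least squares; the arrays `lineloads`, `xi`, `xi_dot`, and `xi_ddot` below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical inputs (placeholders, not the paper's data):
#   lineloads : (n_stations, n_steps) unsteady aerodynamic line-load history
#   xi, xi_dot, xi_ddot : (n_modes, n_steps) generalized displacement,
#                         velocity and acceleration of the retained modes
rng = np.random.default_rng(0)
n_stations, n_steps, n_modes = 200, 1000, 3
lineloads = rng.standard_normal((n_stations, n_steps))
xi = rng.standard_normal((n_modes, n_steps))
xi_dot = np.gradient(xi, axis=1)
xi_ddot = np.gradient(xi_dot, axis=1)

# Proper orthogonal decomposition of the line-load history via SVD.
mean_load = lineloads.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(lineloads - mean_load, full_matrices=False)
r = 4                                  # number of POD modes retained
phi = U[:, :r]                         # spatial POD modes
a = np.diag(s[:r]) @ Vt[:r, :]         # (r, n_steps) POD modal amplitudes

# System identification: fit a(t) ~ K@xi + C@xi_dot + M@xi_ddot by least
# squares, giving aerodynamic stiffness, damping, and mass terms.
X = np.vstack([xi, xi_dot, xi_ddot])   # (3*n_modes, n_steps)
coeffs, *_ = np.linalg.lstsq(X.T, a.T, rcond=None)
K_aero, C_aero, M_aero = np.split(coeffs.T, 3, axis=1)
print(K_aero.shape, C_aero.shape, M_aero.shape)
```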
Modern Computational Techniques for the HMMER Sequence Analysis
2013-01-01
This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize using both traditional software approaches and innovative hardware acceleration technologies. PMID:25937944
Computer Analysis of Air Pollution from Highways, Streets, and Complex Interchanges
DOT National Transportation Integrated Search
1974-03-01
A detailed computer analysis of air quality for a complex highway interchange was prepared, using an in-house version of the Environmental Protection Agency's Gaussian Highway Line Source Model. This analysis showed that the levels of air pollution n...
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single-discipline analysis), the method, as implemented here, may not show significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.
Computational Analysis of Mine Blast on a Commercial Vehicle Structure
2007-01-01
COMPUTATIONAL ANALYSIS OF MINE BLAST ON A COMMERCIAL VEHICLE STRUCTURE - M. Grujicic, B. Pandurangan, I. Haque, B. A. Cheeseman, W. N. Roy and R. R. Skaggs ... buried in (either dry or saturated) sand underneath the vehicle's front right wheel is analyzed computationally. The computational analysis included the ... A frequency analysis of the pressure versus time signals and visual observation clearly show the differences in the blast loads resulting from the ...
ERIC Educational Resources Information Center
Moore, John W., Ed.
1987-01-01
Included are two articles related to the use of computers. One activity is a computer exercise in chemical reaction engineering and applied kinetics for undergraduate college students. The second article shows how computer-assisted analysis can be used with reaction rate data. (RH)
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.
Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO
NASA Technical Reports Server (NTRS)
Stallworth, R.; Meyers, C. A.; Stinson, H. C.
1989-01-01
Results are presented from the comparison study of two computer codes for crack growth analysis - NASCRAC and NASA/FLAGRO. The two computer codes gave compatible conservative results when the part through crack analysis solutions were analyzed versus experimental test data. Results showed good correlation between the codes for the through crack at a lug solution. For the through crack at a lug solution, NASA/FLAGRO gave the most conservative results.
Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote ... structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive ... First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion
VIC: A Computer Analysis of Verbal Interaction Category Systems.
ERIC Educational Resources Information Center
Kline, John A.; And Others
VIC is a computer program for the analysis of verbal interaction category systems, especially the Flanders interaction analysis system. The observer codes verbal behavior on coding sheets for later machine scoring. A matrix is produced by the program showing the number and percentages of times that a particular cell describes classroom behavior.…
New Computational Methods for the Prediction and Analysis of Helicopter Noise
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.
Generic Hypersonic Inlet Module Analysis
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Huebner, Lawrence D.
2004-01-01
A computational study associated with an internal inlet drag analysis was performed for a generic hypersonic inlet module. The purpose of this study was to determine the feasibility of computing the internal drag force for a generic scramjet engine module using computational methods. The computational study consisted of obtaining two-dimensional (2D) and three-dimensional (3D) computational fluid dynamics (CFD) solutions using the Euler and parabolized Navier-Stokes (PNS) equations. The solution accuracy was assessed by comparisons with experimental pitot pressure data. The CFD analysis indicates that the 3D PNS solutions show the best agreement with experimental pitot pressure data. The internal inlet drag analysis consisted of obtaining drag force predictions based on experimental data and 3D CFD solutions. A comparative assessment of each of the drag prediction methods is made and the sensitivity of CFD drag values to computational procedures is documented. The analysis indicates that the CFD drag predictions are highly sensitive to the computational procedure used.
Computer analysis of railcar vibrations
NASA Technical Reports Server (NTRS)
Vlaminck, R. R.
1975-01-01
Computer models and techniques for calculating railcar vibrations are discussed along with criteria for vehicle ride optimization. The effects on vibration of car body structural dynamics, suspension system parameters, vehicle geometry, and wheel and rail excitation are presented. Ride quality vibration data collected on the state-of-the-art car and standard light rail vehicle are compared to computer predictions. The results show that computer analysis of the vehicle can be performed at relatively low cost in short periods of time. The analysis permits optimization of the design as it progresses and minimizes the possibility of excessive vibration on production vehicles.
Analysis of Biosignals During Immersion in Computer Games.
Yeo, Mina; Lim, Seokbeen; Yoon, Gilwon
2017-11-17
The number of computer game users is increasing as computers and various IT devices connected to the Internet are commonplace across all ages. In this research, in order to find the relevance of behavioral activity and its associated biosignals, biosignal changes before, during, and after computer games were measured and analyzed for 31 subjects. For this purpose, a device to measure electrocardiogram, photoplethysmogram and skin temperature was developed such that the effect of motion artifacts could be minimized. The device was made wearable for convenient measurement. The game selected for the experiments was League of Legends™. Analysis of the pulse transit time, heart rate variability and skin temperature showed increased sympathetic nerve activity during computer games, while the parasympathetic nerves became less active. Interestingly, the sympathetic predominance group showed less change in heart rate variability as compared to the normal group. The results can be valuable for studying internet gaming disorder.
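Related to the heart rate variability analysis mentioned above, here is a minimal sketch of two common time-domain HRV measures computed from a hypothetical series of RR intervals; the abstract does not state which specific indices were used, so this is illustrative only.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV metrics from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # short-term (vagal) variability
    mean_hr = 60000.0 / rr.mean()                # beats per minute
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}

# Hypothetical RR series (ms); a lower RMSSD would be consistent with the
# reduced parasympathetic activity reported during game play.
print(hrv_time_domain([812, 790, 805, 778, 820, 795, 810]))
```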
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2011-01-01
Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin is caused by an undamping of the aerodynamics in one of the lower frequency flexible or rigid body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic lineloads derived from steady rigid computational fluid dynamics (CFD). However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers where experiment or unsteady computational aeroelastic (CAE) analysis shows a reduced or even negative aerodynamic damping. This paper will present a method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics. The enhanced formulation uses unsteady CFD to compute the response of selected lower frequency modes. The response is contained in a time history of the vehicle lineloads. A proper orthogonal decomposition of the unsteady aerodynamic lineload response is used to reduce the scale of the data volume, and system identification is used to derive the aerodynamic stiffness, damping and mass matrices. The results of the enhanced quasi-steady aeroelastic stability analysis are compared with the damping and frequency computed from unsteady CAE analysis and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady CAE analysis.
A Computational Approach to Qualitative Analysis in Large Textual Datasets
Evans, Michael S.
2014-01-01
In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
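As an illustrative sketch of the probabilistic topic modeling approach described above (not the study's actual pipeline), a small latent Dirichlet allocation model can be fit with scikit-learn; the tiny corpus, number of topics, and preprocessing below are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "school board debates science curriculum standards",
    "new telescope data challenge cosmological models",
    "court ruling on religious displays in public spaces",
]  # placeholder corpus; the study analyzed 14,952 newspaper documents

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)     # per-document topic mixtures

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")
```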
ERIC Educational Resources Information Center
Lourey, Eugene D., Comp.
The Minnesota Computer Aided Library System (MCALS) provides a basis of unification for library service program development in Minnesota for eventual linkage to the national information network. A prototype plan for communications functions is illustrated. A cost/benefits analysis was made to show the cost/effectiveness potential for MCALS. System…
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.
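A rough sketch of the L2 calibration idea as summarized above: estimate the true response nonparametrically from the physical observations, then choose the calibration parameter minimizing the integrated squared distance between that estimate and the computer model. The model form, smoother, and data below are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar

# Hypothetical physical observations and an imperfect computer model.
rng = np.random.default_rng(1)
x_obs = np.linspace(0, 1, 30)
y_obs = np.sin(2 * np.pi * x_obs) + 0.3 * x_obs + 0.05 * rng.standard_normal(30)

def computer_model(x, theta):
    # "Imperfect": missing the 0.3*x trend present in the physical system.
    return theta * np.sin(2 * np.pi * x)

# Step 1: nonparametric estimate of the true process.
f_hat = UnivariateSpline(x_obs, y_obs, s=0.05)

# Step 2: minimize the integrated squared discrepancy over theta.
x_grid = np.linspace(0, 1, 400)
def l2_loss(theta):
    return np.trapz((f_hat(x_grid) - computer_model(x_grid, theta)) ** 2, x_grid)

result = minimize_scalar(l2_loss, bounds=(0.1, 3.0), method="bounded")
print("L2-calibrated theta:", result.x)
```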
Shamir, Lior; Yerby, Carol; Simpson, Robert; von Benda-Beckmann, Alexander M; Tyack, Peter; Samarra, Filipa; Miller, Patrick; Wallin, John
2014-02-01
Vocal communication is a primary communication method of killer and pilot whales, and is used for transmitting a broad range of messages and information for short and long distance. The large variation in call types of these species makes it challenging to categorize them. In this study, sounds recorded by audio sensors carried by ten killer whales and eight pilot whales close to the coasts of Norway, Iceland, and the Bahamas were analyzed using computer methods and citizen scientists as part of the Whale FM project. Results show that the computer analysis automatically separated the killer whales into Icelandic and Norwegian whales, and the pilot whales were separated into Norwegian long-finned and Bahamas short-finned pilot whales, showing that at least some whales from these two locations have different acoustic repertoires that can be sensed by the computer analysis. The citizen science analysis was also able to separate the whales to locations by their sounds, but the separation was somewhat less accurate compared to the computer method.
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
Offodile, Anaeze C; Chatterjee, Abhishek; Vallejo, Sergio; Fisher, Carla S; Tchou, Julia C; Guo, Lifei
2015-04-01
Computed tomographic angiography is a diagnostic tool increasingly used for preoperative vascular mapping in abdomen-based perforator flap breast reconstruction. This study compared the use of computed tomographic angiography and the conventional practice of Doppler ultrasonography only in postmastectomy reconstruction using a cost-utility model. Following a comprehensive literature review, a decision analytic model was created using the three most clinically relevant health outcomes in free autologous breast reconstruction with computed tomographic angiography versus Doppler ultrasonography only. Cost and utility estimates for each health outcome were used to derive the quality-adjusted life-years and incremental cost-utility ratio. One-way sensitivity analysis was performed to scrutinize the robustness of the authors' results. Six studies and 782 patients were identified. Cost-utility analysis revealed a baseline cost savings of $3179 and a gain in quality-adjusted life-years of 0.25. This yielded an incremental cost-utility ratio of -$12,716, implying a dominant choice favoring preoperative computed tomographic angiography. Sensitivity analysis revealed that computed tomographic angiography was costlier when the operative time difference between the two techniques was less than 21.3 minutes. However, the clinical advantage of computed tomographic angiography over Doppler ultrasonography only showed that computed tomographic angiography would still remain the cost-effective option even if it offered no additional operating time advantage. The authors' results show that computed tomographic angiography is a cost-effective technology for identifying lower abdominal perforators for autologous breast reconstruction. Although the perfect study would be a randomized controlled trial of the two approaches with true cost accrual, the authors' results represent the best available evidence.
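The cost-utility arithmetic reported above can be checked directly: the incremental cost-utility ratio is the incremental cost divided by the incremental quality-adjusted life-years. A quick reproduction with the stated figures:

```python
# Figures as reported in the abstract: CT angiography saves $3,179 and
# gains 0.25 QALYs relative to Doppler ultrasonography only.
incremental_cost = -3179.0      # negative value = cost savings
incremental_qaly = 0.25

icur = incremental_cost / incremental_qaly
print(f"ICUR = ${icur:,.0f} per QALY")   # -12,716: CT angiography dominates
```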
NASA Technical Reports Server (NTRS)
Mangalgiri, P. D.; Prabhakaran, R.
1986-01-01
An algorithm for vectorized computation of stiffness matrices of an 8-noded isoparametric hexahedron element for geometric nonlinear analysis was developed. This was used in conjunction with the earlier 2-D program GAMNAS to develop the new program NAS3D for geometric nonlinear analysis. A conventional, modified Newton-Raphson process is used for the nonlinear analysis. New schemes for the computation of stiffness and strain energy release rates are presented. The organization of the program is explained and some results on four sample problems are given. A study of CPU times showed that savings by a factor of 11 to 13 were achieved when vectorized computation was used for the stiffness instead of the conventional scalar one. Finally, the scheme for inputting data is explained.
Stress Analysis and Fracture in Nanolaminate Composites
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A stress analysis is performed on a nanolaminate subjected to bending. A composite mechanics computer code that is based on constituent properties and nanoelement formulation is used to evaluate the nanolaminate stresses. The results indicate that the computer code is sufficient for the analysis. The results also show that when a stress concentration is present, the nanolaminate stresses exceed their corresponding matrix-dominated strengths and the nanofiber fracture strength.
Bayesian Latent Class Analysis Tutorial.
Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca
2018-01-01
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes' theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied to a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
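A compact sketch of the kind of Gibbs sampler the tutorial walks through, here for a two-class latent class model with binary items, Dirichlet and Beta priors, and simulated data; it is written in Python rather than the article's R and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: N respondents, J binary items, K = 2 latent classes.
N, J, K = 500, 5, 2
true_pi = np.array([0.6, 0.4])
true_theta = np.array([[0.9, 0.8, 0.85, 0.7, 0.9],
                       [0.2, 0.3, 0.25, 0.4, 0.1]])
z_true = rng.choice(K, size=N, p=true_pi)
Y = (rng.random((N, J)) < true_theta[z_true]).astype(int)

# Gibbs sampling with Dirichlet(1,...,1) prior on pi and Beta(1,1) on theta.
pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.25, 0.75, size=(K, J))
for it in range(500):
    # 1) Sample class memberships z_i given pi, theta and the data.
    log_lik = (Y[:, None, :] * np.log(theta) +
               (1 - Y[:, None, :]) * np.log(1 - theta)).sum(axis=2)
    log_post = np.log(pi) + log_lik
    prob = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    prob /= prob.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p) for p in prob])
    # 2) Sample class proportions pi given z.
    counts = np.bincount(z, minlength=K)
    pi = rng.dirichlet(1.0 + counts)
    # 3) Sample item-response probabilities theta given z and the data.
    for k in range(K):
        yes = Y[z == k].sum(axis=0)
        theta[k] = rng.beta(1.0 + yes, 1.0 + counts[k] - yes)

print("estimated class sizes:", np.round(pi, 2))
print("estimated item probabilities:\n", np.round(theta, 2))
```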
Effects of Computer Programming on Students' Cognitive Performance: A Quantitative Synthesis.
ERIC Educational Resources Information Center
Liao, Yuen-Kuang Cliff
A meta-analysis was performed to synthesize existing data concerning the effects of computer programing on cognitive outcomes of students. Sixty-five studies were located from three sources, and their quantitative data were transformed into a common scale--Effect Size (ES). The analysis showed that 58 (89%) of the study-weighted ESs were positive…
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.
D-region blunt probe data analysis using hybrid computer techniques
NASA Technical Reports Server (NTRS)
Burkhard, W. J.
1973-01-01
The feasibility of performing data reduction techniques with a hybrid computer was studied. The data were obtained from the flight of a parachute-borne probe through the D-region of the ionosphere. A presentation of the theory of blunt probe operation is included, with emphasis on the equations necessary to perform the analysis. This is followed by a discussion of computer program development. Included in this discussion is a comparison of computer and hand reduction results for the blunt probe launched on 31 January 1972. The comparison showed that it was both feasible and desirable to use the computer for data reduction. The results of computer data reduction performed on flight data acquired from five blunt probes are also presented.
Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.
2009-01-01
The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.
Computational simulation of matrix micro-slip bands in SiC/Ti-15 composite
NASA Technical Reports Server (NTRS)
Mital, S. K.; Lee, H.-J.; Murthy, P. L. N.; Chamis, C. C.
1992-01-01
Computational simulation procedures are used to identify the key deformation mechanisms for (0)_8 and (90)_8 SiC/Ti-15 metal matrix composites. The computational simulation procedures employed consist of a three-dimensional finite-element analysis and a micromechanics-based computer code, METCAN. The interphase properties used in the analysis have been calibrated using the METCAN computer code with the (90)_8 experimental stress-strain curve. Results of the simulation show that although shear stresses are sufficiently high to cause the formation of some slip bands in the matrix, concentrated mostly near the fibers, the nonlinearity in the composite stress-strain curve in the case of the (90)_8 composite is dominated by interfacial damage, such as microcracks and debonding, rather than microplasticity. The stress-strain curve for the (0)_8 composite is largely controlled by the fibers and shows only slight nonlinearity at higher strain levels that could be the result of matrix microplasticity.
Computer-Based Instruction and Health Professions Education: A Meta-Analysis of Outcomes.
ERIC Educational Resources Information Center
Cohen, Peter A.; Dacanay, Lakshmi S.
1992-01-01
The meta-analytic techniques of G. V. Glass were used to statistically integrate findings from 47 comparative studies on computer-based instruction (CBI) in health professions education. A clear majority of the studies favored CBI over conventional methods of instruction. Results show higher-order applications of computers to be especially…
A Survey of Computer Use by Undergraduate Psychology Departments in Virginia.
ERIC Educational Resources Information Center
Stoloff, Michael L.; Couch, James V.
1987-01-01
Reports a survey of computer use in psychology departments in Virginia's four-year colleges. Results showed that faculty, students, and clerical staff used word processing, statistical analysis, and database management most frequently. The three most common computer brands were the Apple II family, IBM PCs, and the Apple Macintosh. (Author/JDH)
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate the DFC analyses significantly. The developed algorithms make the DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
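For context, the core sliding-window computation being parallelized looks roughly like the sketch below, written with plain NumPy on hypothetical fMRI time courses; on a CUDA-capable GPU the same array code could be run with a drop-in array library such as CuPy, which is an assumption here rather than the authors' implementation.

```python
import numpy as np

def sliding_window_dfc(timecourses, window, step=1):
    """Correlation matrices over sliding windows.

    timecourses : (n_regions, n_timepoints) array of fMRI time courses
    returns     : (n_windows, n_regions, n_regions) correlation matrices
    """
    n_regions, n_t = timecourses.shape
    starts = range(0, n_t - window + 1, step)
    return np.stack([np.corrcoef(timecourses[:, s:s + window]) for s in starts])

# Hypothetical data: 90 regions, 600 time points, 60-point windows.
rng = np.random.default_rng(0)
data = rng.standard_normal((90, 600))
dfc = sliding_window_dfc(data, window=60, step=5)
print(dfc.shape)   # (109, 90, 90)
```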
Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser
NASA Astrophysics Data System (ADS)
Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.
2012-09-01
Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness performance at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening, ~60%), which is acceptable compared to diffusers with 6 mm (~40%) and 12 mm (~80%) openings. The conclusion is that computational analysis methods are very useful in studying the performance of thermal energy storage (TES).
Multi-Scale Surface Descriptors
Cipriano, Gregory; Phillips, George N.; Gleicher, Michael
2010-01-01
Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
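The central fitting step can be sketched as follows: given neighborhood points expressed in a local frame whose z-axis is roughly along the surface normal, fit a quadratic height function by least squares and read approximate principal curvatures from its second-order terms. The neighborhood sampling and frame construction below are assumptions for illustration, not the paper's mesh operations.

```python
import numpy as np

def quadric_descriptor(local_pts):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to neighborhood points
    expressed in a local frame (z roughly along the surface normal) and
    return approximate principal, Gaussian, and mean curvatures."""
    x, y, z = local_pts[:, 0], local_pts[:, 1], local_pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    (a, b, c, d, e, f), *_ = np.linalg.lstsq(A, z, rcond=None)
    # For a nearly tangent frame (d, e small), the shape operator is close
    # to the Hessian of the height function: [[2a, b], [b, 2c]].
    k1, k2 = np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
    return k1, k2, k1 * k2, 0.5 * (k1 + k2)

# Hypothetical neighborhood sampled from the paraboloid z = 0.5*(x^2 + y^2).
rng = np.random.default_rng(0)
xy = rng.uniform(-0.2, 0.2, size=(200, 2))
pts = np.column_stack([xy, 0.5 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)])
print(quadric_descriptor(pts))   # both principal curvatures near 1.0
```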
Comparing DNA damage-processing pathways by computer analysis of chromosome painting data.
Levy, Dan; Vazquez, Mariel; Cornforth, Michael; Loucas, Bradford; Sachs, Rainer K; Arsuaga, Javier
2004-01-01
Chromosome aberrations are large-scale illegitimate rearrangements of the genome. They are indicative of DNA damage and informative about damage processing pathways. Despite extensive investigations over many years, the mechanisms underlying aberration formation remain controversial. New experimental assays such as multiplex fluorescent in situ hybridization (mFISH) allow combinatorial "painting" of chromosomes and are promising for elucidating aberration formation mechanisms. Recently observed mFISH aberration patterns are so complex that computer and graph-theoretical methods are needed for their full analysis. An important part of the analysis is decomposing a chromosome rearrangement process into "cycles." A cycle of order n, characterized formally by the cyclic graph with 2n vertices, indicates that n chromatin breaks take part in a single irreducible reaction. We here describe algorithms for computing cycle structures from experimentally observed or computer-simulated mFISH aberration patterns. We show that analyzing cycles quantitatively can distinguish between different aberration formation mechanisms. In particular, we show that homology-based mechanisms do not generate the large number of complex aberrations, involving higher-order cycles, observed in irradiated human lymphocytes.
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a users guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
Computational chemistry and aeroassisted orbital transfer vehicles
NASA Technical Reports Server (NTRS)
Cooper, D. M.; Jaffe, R. L.; Arnold, J. O.
1985-01-01
An analysis of the radiative heating phenomena encountered during a typical aeroassisted orbital transfer vehicle (AOTV) trajectory was made to determine the potential impact of computational chemistry on AOTV design technology. Both equilibrium and nonequilibrium radiation mechanisms were considered. This analysis showed that computational chemistry can be used to predict (1) radiative intensity factors and spectroscopic data; (2) the excitation rates of both atoms and molecules; (3) high-temperature reaction rate constants for metathesis and charge exchange reactions; (4) particle ionization and neutralization rates and cross sections; and (5) spectral line widths.
Modeling the state dependent impulse control for computer virus propagation under media coverage
NASA Astrophysics Data System (ADS)
Liang, Xiyin; Pei, Yongzhen; Lv, Yunfei
2018-02-01
A state-dependent impulsive control model is proposed to model the spread of computer viruses incorporating media coverage. Using the successor function, sufficient conditions for the existence and uniqueness of an order-1 periodic solution are presented first. Secondly, for two classes of periodic solutions, the geometric property of the successor function and the analogue of the Poincaré criterion are employed to obtain the stability results. These results show that the number of infective computers stays below the threshold at all times. Finally, the theoretical and numerical analysis shows that media coverage can delay the spread of computer viruses.
Surface Curvatures Computation from Equidistance Contours
NASA Astrophysics Data System (ADS)
Tanaka, Hiromi T.; Kling, Olivier; Lee, Daniel T. L.
1990-03-01
The subject of our research is the 3D shape representation problem for a special class of range image, one where the natural mode of the acquired range data is in the form of equidistance contours, as exemplified by a moire interferometry range system. In this paper we present a novel surface curvature computation scheme that directly computes the surface curvatures (the principal curvatures, Gaussian curvature and mean curvature) from the equidistance contours without any explicit computations or implicit estimates of partial derivatives. We show how the special nature of the equidistance contours, specifically the dense information about the surface curves in the 2D contour plane, turns into an advantage for the computation of the surface curvatures. The approach is based on using simple geometric construction to obtain the normal sections and the normal curvatures. This method is general and can be extended to any dense range image data. We show in detail how this computation is formulated and give an analysis of the error bounds of the computation steps, showing that the method is stable. Computation results on real equidistance range contours are also shown.
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Mean pain scores were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The average runtimes for the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted with the conventional technique.
[Preoperative CT Scan in middle ear cholesteatoma].
Sethom, Anissa; Akkari, Khemaies; Dridi, Inès; Tmimi, S; Mardassi, Ali; Benzarti, Sonia; Miled, Imed; Chebbi, Mohamed Kamel
2011-03-01
To compare preoperative CT scan findings with peroperative lesions in patients operated on for middle ear cholesteatoma. A retrospective study including 60 patients with cholesteatoma otitis diagnosed and treated within a period of 5 years, from 2001 to 2005, at the ENT department of the Military Hospital of Tunis. All patients had computed tomography of the middle and inner ear. High-resolution CT imaging was performed using millimetric slices (3 to 5 millimetres). All patients had surgical removal of their cholesteatoma using the canal wall down technique. We evaluated the sensitivity, specificity and predictive value of the CT scan by comparing otitic damage with CT findings, in order to examine the real contribution of computed tomography in cholesteatoma otitis. CT analysis of middle ear bone structures was satisfactory (83% sensitivity). The sensitivity decreased (63%) for the tympanic raff. The predictive value of the CT scan for the diagnosis of cholesteatoma was low. However, we noticed excellent sensitivity in the analysis of ossicular damage (90%). The comparative frontal incidence seems to be less sensitive for the detection of facial nerve lesions (42%). However, when evident on CT findings, lesions of the facial nerve were usually observed peroperatively (specificity 78%). The predictive value of computed tomography for the diagnosis of perilymphatic fistulae (FL) was low; in fact, CT imaging showed FL in only four of eight patients. Better results can be obtained by using inframillimetric slices with high-resolution computed tomography. Preoperative computed tomography is necessary for the diagnosis and evaluation of chronic middle ear cholesteatoma in order to show the extent of lesions and to detect complications. This CT and surgical correlation showed that the sensitivity, specificity and predictive value of the CT scan depend on the anatomic structure involved in cholesteatoma damage.
Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.
2009-01-01
The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module was performed using MSC Patran/Pthermal. The obtained temperature results showed that thermal protection is necessary because of significant heating from the plume.
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two personal computers were successfully networked together to form a small-scale cluster. Each processor involved is a multicore processor with four cores, so the cluster has eight cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted in order to test the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, and was done using a simple MPI Hello program written in the C language. Additionally, a performance test was done to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this performance test, four runs were done with the same code using a single node with 1, 2, 4, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the time required for the calculation is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware which is capable of higher computing power when compared to a single-CPU computer, and this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
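The communication test above used a C MPI Hello program; as an illustrative equivalent (an assumption, not the authors' code), the same checks can be run from Python with mpi4py, splitting a trivially parallel workload across ranks the way the performance test distributes work across 1, 2, 4, or 8 processors.

```python
# Run with, e.g.:  mpiexec -n 8 python cluster_check.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Communication test: every rank reports in.
print(f"Hello from rank {rank} of {size} on {MPI.Get_processor_name()}")

# Performance test: split a simple numerical job evenly across ranks and
# combine the partial results with a reduction on rank 0.
n = 10_000_000
chunk = np.arange(rank, n, size, dtype=np.float64)   # this rank's share
partial = np.sum(np.sqrt(chunk))
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("combined result:", total)
```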
An Improved Version of the NASA-Lockheed Multielement Airfoil Analysis Computer Program
NASA Technical Reports Server (NTRS)
Brune, G. W.; Manke, J. W.
1978-01-01
An improved version of the NASA-Lockheed computer program for the analysis of multielement airfoils is described. The predictions of the program are evaluated by comparison with recent experimental high lift data including lift, pitching moment, profile drag, and detailed distributions of surface pressures and boundary layer parameters. The results of the evaluation show that the contract objectives of improving program reliability and accuracy have been met.
Tertiary structure-based analysis of microRNA–target interactions
Gan, Hin Hark; Gunsalus, Kristin C.
2013-01-01
Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuo, Rui; Wu, C. F. Jeff
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.
Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin
2015-01-15
Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin
2014-01-01
Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
Ludovini, Vienna; Bianconi, Fortunato; Siggillino, Annamaria; Piobbico, Danilo; Vannucci, Jacopo; Metro, Giulio; Chiari, Rita; Bellezza, Guido; Puma, Francesco; Della Fazia, Maria Agnese; Servillo, Giuseppe; Crinò, Lucio
2016-05-24
Risk assessment and treatment choice remain a challenge in early non-small-cell lung cancer (NSCLC). The aim of this study was to identify novel genes involved in the risk of early relapse (ER) compared to no relapse (NR) in resected lung adenocarcinoma (AD) patients using a combination of high-throughput technology and computational analysis. We identified 18 patients (n = 13 NR and n = 5 ER) with stage I AD. Frozen samples of patients in ER, NR and corresponding normal lung (NL) were subjected to microarray technology and quantitative PCR (Q-PCR). A gene network computational analysis was performed to select predictive genes. An independent set of 79 stage I AD samples was used to validate the selected genes by Q-PCR. From the microarray analysis we selected 50 genes, using the fold change ratio of ER versus NR. They were validated both in pools and individually in patient samples (ER and NR) by Q-PCR. Fourteen increased and 25 decreased genes showed concordance between the two methods. They were used to perform a computational gene network analysis that identified 4 increased (HOXA10, CLCA2, AKR1B10, FABP3) and 6 decreased (SCGB1A1, PGC, TFF1, PSCA, SPRR1B and PRSS1) genes. Moreover, in an independent dataset of AD samples, we showed that both high FABP3 expression and low SCGB1A1 expression were associated with worse disease-free survival (DFS). Our results indicate that it is possible to define, through gene expression and computational analysis, a characteristic gene profile of patients with an increased risk of relapse that may become a tool for patient selection for adjuvant therapy.
1/f Noise in the ``Game of Life''
NASA Astrophysics Data System (ADS)
Andrecut, Mircea
Conway's celebrated ``game of life'' cellular automaton possesses computational universality. The Fourier analysis reported here shows that the power spectra of the ``game of life'' exhibit 1/f noise. The obtained result suggests a connection between 1/f noise and computational universality.
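A minimal sketch, not the paper's code, of the kind of Fourier analysis described above: simulate the Game of Life, record the total activity as a time series, and fit the slope of the log-log power spectrum (for 1/f noise the slope is close to -1). Grid size, run length, and the use of total activity as the observable are assumptions.

import numpy as np

def step(grid):
    """One Game of Life update on a toroidal grid."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

rng = np.random.default_rng(0)
grid = (rng.random((128, 128)) < 0.3).astype(np.uint8)
activity = []
for _ in range(4096):
    grid = step(grid)
    activity.append(grid.sum())           # one sample of the activity time series

sig = np.asarray(activity, dtype=float)
sig -= sig.mean()
power = np.abs(np.fft.rfft(sig)) ** 2
freq = np.fft.rfftfreq(sig.size)
slope = np.polyfit(np.log(freq[1:]), np.log(power[1:]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")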
Large Data at Small Universities: Astronomical processing using a computer classroom
NASA Astrophysics Data System (ADS)
Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen
2016-06-01
The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open-source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray light-curve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
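A minimal single-machine sketch of the "embarrassingly parallel" pattern described above, using only the Python standard library; the file names and the per-image routine are placeholders, not the authors' photometry code, and distributing work across networked classroom machines would additionally require a shared task queue or similar.

from multiprocessing import Pool

def process_image(path):
    """Run an existing, unmodified analysis routine on one image."""
    # e.g. load the frame, detect sources, return a photometry table
    return path, len(path)                    # placeholder result

if __name__ == "__main__":
    image_list = [f"frame_{i:04d}.fits" for i in range(500)]   # hypothetical files
    with Pool(processes=8) as pool:           # one worker per idle core
        results = pool.map(process_image, image_list)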
NASA Technical Reports Server (NTRS)
1990-01-01
Structural Reliability Consultants' computer program creates graphic plots showing the statistical parameters of glue laminated timbers, or 'glulam.' The company president, Dr. Joseph Murphy, read in NASA Tech Briefs about work related to analysis of Space Shuttle surface tile strength performed for Johnson Space Center by Rockwell International Corporation. Analysis led to a theory of 'consistent tolerance bounds' for statistical distributions, applicable in industrial testing where statistical analysis can influence product development and use. Dr. Murphy then obtained the Tech Support Package that covers the subject in greater detail. The TSP became the basis for Dr. Murphy's computer program PC-DATA, which he is marketing commercially.
The Correlation of Active and Passive Microwave Outputs for the Skylab S-193 Sensor
NASA Technical Reports Server (NTRS)
Krishen, K.
1976-01-01
This paper presents the results of the correlation analysis of the Skylab S-193 13.9 GHz Radiometer/Scatterometer data. Computer analysis of the S-193 data shows that more than 50 percent of the radiometer and scatterometer data are uncorrelated. The correlation coefficients computed for the data gathered over various ground scenes indicate the desirability of using both active and passive sensors for the determination of various Earth phenomena.
Reading Emotion From Mouse Cursor Motions: Affective Computing Approach.
Yamauchi, Takashi; Xiao, Kunchen
2018-04-01
Affective computing research has advanced emotion recognition systems using facial expressions, voices, gaits, and physiological signals, yet these methods are often impractical. This study integrates mouse cursor motion analysis into affective computing and investigates the idea that movements of the computer cursor can provide information about the emotion of the computer user. We extracted 16-26 trajectory features during a choice-reaching task and examined the link between emotion and cursor motions. Positive or negative emotions were induced in participants by music, film clips, or emotional pictures, and participants indicated their emotions with questionnaires. Our 10-fold cross-validation analysis shows that statistical models formed from "known" participants (training data) could predict nearly 10%-20% of the variance of the positive affect and attentiveness ratings of "unknown" participants, suggesting that cursor movement patterns such as the area under the curve and direction changes help infer the emotions of computer users. Copyright © 2017 Cognitive Science Society, Inc.
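A minimal sketch of the 10-fold cross-validation analysis described above, with synthetic data standing in for the cursor-trajectory features and affect ratings; the ridge model and the variance-explained (R^2) scoring are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                  # e.g. 20 trajectory features per participant
y = 0.4 * X[:, 0] + rng.normal(size=200)        # synthetic "positive affect" ratings

cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2_scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=cv, scoring="r2")
print(f"mean variance explained across folds: {r2_scores.mean():.2f}")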
A computer-aided movement analysis system.
Fioretti, S; Leo, T; Pisani, E; Corradini, M L
1990-08-01
Interaction with biomechanical data concerning human movement analysis implies the adoption of various types of experimental equipment and the choice of suitable models, data processing, and graphical data restitution techniques. The integration of measurement setups with the associated experimental protocols and the related software procedures constitutes a computer-aided movement analysis (CAMA) system. In the present paper such integration is mapped onto the causes that limit the clinical acceptance of movement analysis methods. The structure of the system is presented. A specific CAMA system devoted to posture analysis is described in order to show the attainable features. Scientific results obtained with the support of the described system are also reported.
Tabe-Bordbar, Shayan; Marashi, Sayed-Amir
2013-12-01
Elementary modes (EMs) are steady-state metabolic flux vectors with a minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability as, in most genome-scale metabolic models, no EMs can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm is used for EM computation, which is based on flux balance-based methods. We show that this approach can be used to recover the EMs in medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing “estimated” control-effective flux values in the Escherichia coli metabolic network.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, another part of this research focused on designing specific hardware based on reconfigurable computing techniques in order to accelerate AGENT computations. This is the first time an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower clock frequency than CPUs. The design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about a factor of 20. The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus opening the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.
Airfoil Vibration Dampers program
NASA Technical Reports Server (NTRS)
Cook, Robert M.
1991-01-01
The Airfoil Vibration Damper program has consisted of an analysis phase and a testing phase. During the analysis phase, a state-of-the-art computer code was developed, which can be used to guide designers in the placement and sizing of friction dampers. The use of this computer code was demonstrated by performing representative analyses on turbine blades from the High Pressure Oxidizer Turbopump (HPOTP) and High Pressure Fuel Turbopump (HPFTP) of the Space Shuttle Main Engine (SSME). The testing phase of the program consisted of performing friction damping tests on two different cantilever beams. Data from these tests provided an empirical check on the accuracy of the computer code developed in the analysis phase. Results of the analysis and testing showed that the computer code can accurately predict the performance of friction dampers. In addition, a valuable set of friction damping data was generated, which can be used to aid in the design of friction dampers, as well as provide benchmark test cases for future code developers.
Analog computation of auto and cross-correlation functions
NASA Technical Reports Server (NTRS)
1974-01-01
For analysis of the data obtained from the cross beam systems it was deemed desirable to compute the auto- and cross-correlation functions by both digital and analog methods to provide a cross-check of the analysis methods and an indication as to which of the two methods would be most suitable for routine use in the analysis of such data. It is the purpose of this appendix to provide a concise description of the equipment and procedures used for the electronic analog analysis of the cross beam data. A block diagram showing the signal processing and computation set-up used for most of the analog data analysis is provided. The data obtained at the field test sites were recorded on magnetic tape using wide-band FM recording techniques. The data as recorded were band-pass filtered by electronic signal processing in the data acquisition systems.
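A minimal digital counterpart, not the analog setup described above, showing how auto- and cross-correlation functions can be estimated from two sampled signals; the signals, sample rate, and lag range are illustrative assumptions.

import numpy as np

def correlation_function(x, y, max_lag):
    """Biased estimate of R_xy(tau) for lags 0..max_lag (in samples)."""
    x = x - x.mean()
    y = y - y.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], y[k:]) / n for k in range(max_lag + 1)])

fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
y = np.roll(x, 40)                            # y lags x by 40 samples

auto = correlation_function(x, x, 100)        # auto-correlation of x
cross = correlation_function(x, y, 100)       # cross-correlation peaks near lag 40
print(int(np.argmax(cross)))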
A generic, cost-effective, and scalable cell lineage analysis platform
Biezuner, Tamir; Spiro, Adam; Raz, Ofir; Amir, Shiran; Milo, Lilach; Adar, Rivka; Chapal-Ilani, Noa; Berman, Veronika; Fried, Yael; Ainbinder, Elena; Cohen, Galit; Barr, Haim M.; Halaban, Ruth; Shapiro, Ehud
2016-01-01
Advances in single-cell genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells. Current sequencing-based methods for cell lineage analysis depend on low-resolution bulk analysis or rely on extensive single-cell sequencing, which is not scalable and could be biased by functional dependencies. Here we show an integrated biochemical-computational platform for generic single-cell lineage analysis that is retrospective, cost-effective, and scalable. It consists of a biochemical-computational pipeline that inputs individual cells, produces targeted single-cell sequencing data, and uses it to generate a lineage tree of the input cells. We validated the platform by applying it to cells sampled from an ex vivo grown tree and analyzed its feasibility landscape by computer simulations. We conclude that the platform may serve as a generic tool for lineage analysis and thus pave the way toward large-scale human cell lineage discovery. PMID:27558250
Computer-Assisted Analysis of Written Language: Assessing the Written Language of Deaf Children, II.
ERIC Educational Resources Information Center
Parkhurst, Barbara G.; MacEachron, Marion P.
1980-01-01
Two pilot studies investigated the accuracy of a computer parsing system for analyzing written language of deaf children. Results of the studies showed good agreement between human and machine raters. Journal availability: Elsevier North Holland, Inc., 52 Vanderbilt Avenue, New York, NY 10017. (Author)
Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation
NASA Astrophysics Data System (ADS)
Takaki, Ryoji; Akita, Takeshi; Shima, Eiji
A thermal mathematical model plays an important role in on-orbit operations as well as in spacecraft thermal design. The thermal mathematical model has uncertain thermal characteristic parameters, such as thermal contact resistances between components and effective emittances of multilayer insulation (MLI) blankets, which degrade the efficiency and accuracy of the model. A particle filter, which is one of the sequential data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method conducts a large number of ensemble computations, which require large computational power. Recently, general-purpose computing on graphics processing units (GPGPU) has attracted attention in high-performance computing. Therefore, GPGPU is applied here to increase the computational speed of the thermal analysis used in the particle filter. This paper shows the speed-up results obtained by using GPGPU as well as the method of applying GPGPU.
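A minimal CPU sketch of the particle-filter update used in this kind of data assimilation, with a toy one-node thermal model; the model, noise levels, and the uncertain parameter (a contact conductance) are illustrative assumptions, not the authors' spacecraft model. The ensemble prediction step is the part that parallelizes well on a GPU.

import numpy as np

rng = np.random.default_rng(1)
n_particles = 5000
conductance = rng.uniform(0.5, 2.0, n_particles)     # uncertain parameter ensemble
weights = np.full(n_particles, 1.0 / n_particles)

def predicted_temperature(g):
    """Toy steady-state model: sink temperature + heat load / conductance."""
    return 290.0 + 10.0 / g

for measured_T in [297.1, 296.8, 297.3]:             # telemetry samples
    residual = measured_T - predicted_temperature(conductance)   # ensemble prediction
    weights = weights * np.exp(-0.5 * (residual / 0.3) ** 2)     # weight by likelihood
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)   # resample
    conductance = conductance[idx] + rng.normal(0, 0.01, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)

print(f"estimated conductance: {conductance.mean():.2f}")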
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in the future.
Integrative prescreening in analysis of multiple cancer genomic studies
2012-01-01
Background In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost. Results An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach. Conclusions The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431
Static aeroelastic analysis and tailoring of a single-element racing car wing
NASA Astrophysics Data System (ADS)
Sadd, Christopher James
This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-Averaged Navier-Stokes CFD analysis method with a Finite Element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and Finite Element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts that the downforce-increasing wing has a downforce of C_l = -1.377 in comparison to C_l = -1.265 for the original wing. The computational analysis predicts that the drag-reducing wing has a drag of C_d = 0.115 in comparison to C_d = 0.143 for the original wing.
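A minimal sketch of the iterative fluid-structure coupling scheme described above, with placeholder solvers standing in for the RANS CFD and finite-element analyses; the convergence tolerance, under-relaxation factor, and solver interfaces are assumptions, not the thesis' implementation.

import numpy as np

def cfd_loads(deflection):
    """Placeholder for the CFD analysis: nodal aerodynamic loads on the deformed wing."""
    return -1000.0 * np.ones_like(deflection) * (1.0 + 0.1 * deflection)

def structural_deflection(loads):
    """Placeholder for the finite-element analysis: nodal deflections under the loads."""
    return loads / 5.0e4

deflection = np.zeros(50)                       # start from the rigid (jig) shape
for iteration in range(50):
    loads = cfd_loads(deflection)               # 1) aerodynamic loads on the deformed shape
    new_deflection = structural_deflection(loads)
    updated = deflection + 0.5 * (new_deflection - deflection)   # 2) under-relaxed update
    if np.max(np.abs(updated - deflection)) < 1e-6:
        deflection = updated
        break
    deflection = updated                        # 3) deform the CFD mesh and repeat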
Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars
2015-10-01
A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant, respectively. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
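A minimal sketch of how a test-retest ICC(1,1) can be computed from one-way ANOVA mean squares; the two columns stand for the two recordings of one movement variable (e.g. CSD), and the data here are synthetic, not the study's measurements.

import numpy as np

def icc_1_1(data):
    """data: (n_subjects, k_repeats) array of one variable."""
    n, k = data.shape
    grand_mean = data.mean()
    subject_means = data.mean(axis=1)
    ss_between = k * np.sum((subject_means - grand_mean) ** 2)
    ss_within = np.sum((data - subject_means[:, None]) ** 2)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
true_score = rng.normal(10, 2, size=75)                       # 75 infants
recordings = np.stack([true_score + rng.normal(0, 1, 75) for _ in range(2)], axis=1)
print(f"ICC(1,1) = {icc_1_1(recordings):.2f}")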
Aerodynamic analysis of Pegasus - Computations vs reality
NASA Technical Reports Server (NTRS)
Mendenhall, Michael R.; Lesieutre, Daniel J.; Whittaker, C. H.; Curry, Robert E.; Moulton, Bryan
1993-01-01
Pegasus, a three-stage, air-launched, winged space booster was developed to provide fast and efficient commercial launch services for small satellites. The aerodynamic design and analysis of Pegasus was conducted without benefit of wind tunnel tests using only computational aerodynamic and fluid dynamic methods. Flight test data from the first two operational flights of Pegasus are now available, and they provide an opportunity to validate the accuracy of the predicted pre-flight aerodynamic characteristics. Comparisons of measured and predicted flight characteristics are presented and discussed. Results show that the computational methods provide reasonable aerodynamic design information with acceptable margins. Post-flight analyses illustrate certain areas in which improvements are desired.
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of people interacting with each other, not only human-machine interaction. The primary concern is not how people can interact with computers, but how shall we design computers to help people work together? An analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse.
Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application
NASA Astrophysics Data System (ADS)
Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.
2013-12-01
The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on the Cloud and local system respectively, and the processing times for the analysis were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of: computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on the hourly rate, and the storage cost is calculated based on the rate of Gigabytes per month. Cost for incoming data transfer is free, and for data transfer out, the cost is based on the rate in Gigabytes. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating cost. The results showed that the Cloud platform had a 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
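A minimal sketch of the cost bookkeeping described above: compute charged by the hour, storage by gigabyte-month, inbound transfer free, and outbound transfer by gigabyte. The rates and usage figures are placeholders, not actual Amazon prices or the GES DISC numbers.

def monthly_cloud_cost(instance_hours, hourly_rate,
                       stored_gb, storage_rate_per_gb_month,
                       egress_gb, egress_rate_per_gb):
    """Compute + storage + outbound transfer (inbound transfer assumed free)."""
    compute = instance_hours * hourly_rate
    storage = stored_gb * storage_rate_per_gb_month
    transfer_out = egress_gb * egress_rate_per_gb
    return compute + storage + transfer_out

# Hypothetical month: 720 instance-hours, 500 GB cached data, 50 GB of downloads.
print(monthly_cloud_cost(720, 0.20, 500, 0.03, 50, 0.09))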
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
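A minimal sketch of the spreadsheet-style OLP calculation referred to in point 4: the slope is the sign of the correlation times the ratio of standard deviations, and the intercept follows from the means. Confidence intervals are deliberately omitted here, since the text notes that simple spreadsheet CIs are inaccurate and recommends bootstrapping (e.g. with smatr); the data values are illustrative.

import numpy as np

def olp_regression(x, y):
    """Ordinary least products (geometric mean) regression: returns (slope, intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x = np.array([1.0, 2.1, 3.0, 3.9, 5.2])
y = np.array([2.2, 4.1, 5.8, 8.3, 10.1])
print(olp_regression(x, y))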
Sex estimation from sternal measurements using multidetector computed tomography.
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur
2014-12-01
We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods in sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated in 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years; range: 30-60 years) were included in the study. Manubrium length (ML), mesosternum length (MSL), sternebra 1 width (S1W), and sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values are significantly higher than those of females (P < 0.001), while SI is significantly lower in males (P < 0.001). In discriminant analysis, MSL has a high accuracy rate, 80.2% in females and 80.9% in males. MSL also has the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in the three stepwise discriminant analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) has the highest accuracy rate in stepwise discriminant analysis, with 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum may provide important information for sex estimation.
Pusic, Martin V.; LeBlanc, Vicki; Patel, Vimla L.
2001-01-01
Traditional task analysis for instructional design has emphasized the importance of precisely defining behavioral educational objectives and working back to select objective-appropriate instructional strategies. However, this approach may miss effective strategies. Cognitive task analysis, on the other hand, breaks a process down into its component knowledge representations. Selection of instructional strategies based on all such representations in a domain is likely to lead to optimal instructional design. In this demonstration, using the interpretation of cervical spine x-rays as an educational example, we show how a detailed cognitive task analysis can guide the development of computer-aided instruction.
Yamane, Luciana Harue; de Moraes, Viviane Tavares; Espinosa, Denise Crocce Romano; Tenório, Jorge Alberto Soares
2011-12-01
This paper presents a comparison between printed circuit boards from computers and mobile phones. Since printed circuit boards are becoming more complex and smaller, the amounts of materials are constantly changing. The main objective of this work was to characterize spent printed circuit boards from computers and mobile phones by applying mineral processing techniques to separate the metal, ceramic, and polymer fractions. The processing was performed by comminution in a hammer mill, followed by particle size analysis and by magnetic and electrostatic separation. Aqua regia leaching, loss-on-ignition, and chemical analysis (inductively coupled plasma atomic emission spectroscopy - ICP-OES) were carried out to determine the composition of the printed circuit boards and the metal-rich fraction. The composition of the studied mobile phone printed circuit boards (PCB-MP) was 63 wt.% metals, 24 wt.% ceramics, and 13 wt.% polymers; that of the printed circuit boards from the studied personal computers (PCB-PC) was 45 wt.% metals, 27 wt.% polymers, and 28 wt.% ceramics. The chemical analysis showed that the copper concentration in printed circuit boards from personal computers was 20 wt.% and in printed circuit boards from mobile phones was 34.5 wt.%. According to the characteristics of each type of printed circuit board, the recovery of precious metals may be the main goal of the recycling process for printed circuit boards from personal computers, and the recovery of copper should be the main goal of the recycling process for printed circuit boards from mobile phones. Hence, these printed circuit boards should not be mixed prior to treatment. The results of this paper show that the copper concentration is increasing in mobile phones and remaining constant in personal computers. Copyright © 2011 Elsevier Ltd. All rights reserved.
Recycling of WEEE: Characterization of spent printed circuit boards from mobile phones and computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamane, Luciana Harue, E-mail: lucianayamane@uol.com.br; Tavares de Moraes, Viviane, E-mail: tavares.vivi@gmail.com; Crocce Romano Espinosa, Denise, E-mail: espinosa@usp.br
Highlights: > This paper presents new and important data on the characterization of waste electrical and electronic equipment. > Copper concentration is increasing in mobile phones and remaining constant in personal computers. > Printed circuit boards from mobile phones and computers should not be mixed prior to treatment. - Abstract: This paper presents a comparison between printed circuit boards from computers and mobile phones. Since printed circuit boards are becoming more complex and smaller, the amounts of materials are constantly changing. The main objective of this work was to characterize spent printed circuit boards from computers and mobile phones by applying mineral processing techniques to separate the metal, ceramic, and polymer fractions. The processing was performed by comminution in a hammer mill, followed by particle size analysis and by magnetic and electrostatic separation. Aqua regia leaching, loss-on-ignition, and chemical analysis (inductively coupled plasma atomic emission spectroscopy - ICP-OES) were carried out to determine the composition of the printed circuit boards and the metal-rich fraction. The composition of the studied mobile phone printed circuit boards (PCB-MP) was 63 wt.% metals, 24 wt.% ceramics, and 13 wt.% polymers; that of the printed circuit boards from the studied personal computers (PCB-PC) was 45 wt.% metals, 27 wt.% polymers, and 28 wt.% ceramics. The chemical analysis showed that the copper concentration in printed circuit boards from personal computers was 20 wt.% and in printed circuit boards from mobile phones was 34.5 wt.%. According to the characteristics of each type of printed circuit board, the recovery of precious metals may be the main goal of the recycling process for printed circuit boards from personal computers, and the recovery of copper should be the main goal of the recycling process for printed circuit boards from mobile phones. Hence, these printed circuit boards should not be mixed prior to treatment. The results of this paper show that the copper concentration is increasing in mobile phones and remaining constant in personal computers.
Transonic CFD applications at Boeing
NASA Technical Reports Server (NTRS)
Tinoco, E. N.
1989-01-01
The use of computational methods for three dimensional transonic flow design and analysis at the Boeing Company is presented. A range of computational tools consisting of production tools for every day use by project engineers, expert user tools for special applications by computational researchers, and an emerging tool which may see considerable use in the near future are described. These methods include full potential and Euler solvers, some coupled to three dimensional boundary layer analysis methods, for transonic flow analysis about nacelle, wing-body, wing-body-strut-nacelle, and complete aircraft configurations. As the examples presented show, such a toolbox of codes is necessary for the variety of applications typical of an industrial environment. Such a toolbox of codes makes possible aerodynamic advances not previously achievable in a timely manner, if at all.
Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs
NASA Astrophysics Data System (ADS)
De Vylder, Jonas; Philips, Wilfried
2011-02-01
This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan-line algorithm, it is both memory- and computationally efficient, making it interesting for the analysis of images coming from high-throughput systems or the analysis of 3D microscopic images. Experiments show good results, i.e., a recall of over 0.98.
Combinatorial-topological framework for the analysis of global dynamics.
Bush, Justin; Gameiro, Marcio; Harker, Shaun; Kokubu, Hiroshi; Mischaikow, Konstantin; Obayashi, Ippei; Pilarczyk, Paweł
2012-12-01
We discuss an algorithmic framework based on efficient graph algorithms and algebraic-topological computational tools. The framework is aimed at automatic computation of a database of global dynamics of a given m-parameter semidynamical system with discrete time on a bounded subset of the n-dimensional phase space. We introduce the mathematical background, which is based upon Conley's topological approach to dynamics, describe the algorithms for the analysis of the dynamics using rectangular grids both in phase space and parameter space, and show two sample applications.
Combinatorial-topological framework for the analysis of global dynamics
NASA Astrophysics Data System (ADS)
Bush, Justin; Gameiro, Marcio; Harker, Shaun; Kokubu, Hiroshi; Mischaikow, Konstantin; Obayashi, Ippei; Pilarczyk, Paweł
2012-12-01
We discuss an algorithmic framework based on efficient graph algorithms and algebraic-topological computational tools. The framework is aimed at automatic computation of a database of global dynamics of a given m-parameter semidynamical system with discrete time on a bounded subset of the n-dimensional phase space. We introduce the mathematical background, which is based upon Conley's topological approach to dynamics, describe the algorithms for the analysis of the dynamics using rectangular grids both in phase space and parameter space, and show two sample applications.
A computer program to evaluate optical systems
NASA Technical Reports Server (NTRS)
Innes, D.
1972-01-01
A computer program is used to evaluate a 25.4 cm X-ray telescope at a field angle of 20 minutes of arc by geometrical analysis. The object is regarded as a point source of electromagnetic radiation, and the optical surfaces are treated as boundary conditions in the solution of the electromagnetic wave propagation equation. The electric field distribution is then determined in the region of the image and the intensity distribution inferred. A comparison of wave analysis results and photographs taken through the telescope shows excellent agreement.
Computer Programs (Turbomachinery)
NASA Technical Reports Server (NTRS)
1978-01-01
NASA computer programs are extensively used in design of industrial equipment. Available from the Computer Software Management and Information Center (COSMIC) at the University of Georgia, these programs are employed as analysis tools in design, test and development processes, providing savings in time and money. For example, two NASA computer programs are used daily in the design of turbomachinery by Delaval Turbine Division, Trenton, New Jersey. The company uses the NASA spline interpolation routine for analysis of turbine blade vibration and the performance of compressors and condensers. A second program, the NASA print plot routine, analyzes turbine rotor response and produces graphs for project reports. The photos show examples of Delaval test operations in which the computer programs play a part. In the large photo below, a 24-inch turbine blade is undergoing test; in the smaller photo, a steam turbine rotor is being prepared for stress measurements under actual operating conditions; the "spaghetti" is wiring for test instrumentation.
Real-time computation of parameter fitting and image reconstruction using graphical processing units
NASA Astrophysics Data System (ADS)
Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin
2017-06-01
In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single GPU systems to show that real time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40× compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.
Scalable Robust Principal Component Analysis Using Grassmann Averages.
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J
2016-11-01
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
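A minimal sketch of the Grassmann Average iteration as described above: each zero-mean observation spans a one-dimensional subspace, and the average subspace is found by repeatedly sign-aligning the unit observations with the current estimate and renormalizing. This is written from the abstract's description and is not the released source code; the weighting, initialization, and stopping rule are assumptions.

import numpy as np

def grassmann_average(X, n_iter=100, tol=1e-10):
    """X: (n_samples, n_dims) zero-mean data. Returns a unit vector (the GA)."""
    weights = np.linalg.norm(X, axis=1)
    keep = weights > 0
    U = X[keep] / weights[keep, None]           # unit-length directions
    w = weights[keep]
    q = U[0].copy()                             # initial estimate
    for _ in range(n_iter):
        signs = np.sign(U @ q)
        signs[signs == 0] = 1.0
        q_new = (signs * w) @ U                 # weighted, sign-aligned average
        q_new /= np.linalg.norm(q_new)
        if np.linalg.norm(q_new - q) < tol:
            return q_new
        q = q_new
    return q

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)) @ np.diag([5, 1, 1, 1, 1])   # dominant first axis
X -= X.mean(axis=0)
print(np.round(grassmann_average(X), 2))        # close to +/-[1, 0, 0, 0, 0] for Gaussian data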
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Structural dynamics of shroudless, hollow fan blades with composite in-lays
NASA Technical Reports Server (NTRS)
Aiello, R. A.; Hirschbein, M. S.; Chamis, C. C.
1982-01-01
Structural and dynamic analyses are presented for a shroudless, hollow titanium fan blade proposed for future use in aircraft turbine engines. The blade was modeled and analyzed using the composite blade structural analysis computer program (COBSTRAN); an integrated program consisting of mesh generators, composite mechanics codes, NASTRAN, and pre- and post-processors. Vibration and impact analyses are presented. The vibration analysis was conducted with COBSTRAN. Results show the effect of the centrifugal force field on frequencies, twist, and blade camber. Bird impact analysis was performed with the multi-mode blade impact computer program. This program uses the geometric model and modal analysis from the COBSTRAN vibration analysis to determine the gross impact response of the fan blades to bird strikes. The structural performance of this blade is also compared to a blade of similar design but with composite in-lays on the outer surface. Results show that the composite in-lays can be selected (designed) to substantially modify the mechanical performance of the shroudless, hollow fan blade.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Cazzaniga, Paolo; Nobile, Marco S.; Besozzi, Daniela; Bellini, Matteo; Mauri, Giancarlo
2014-01-01
The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need to perform large numbers of in silico analyses to study the behavior of biological systems in different conditions, which requires computing power that usually exceeds the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and bi-dimensional parameter sweep analysis, and show that GPU-accelerated parallel simulations of this model can increase the computational performance up to a 181× speedup compared to the corresponding sequential simulations. PMID:25025072
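A minimal CPU sketch of the kind of LSODA-based parameter-sweep simulation described above, using a toy two-reaction mass-action model rather than the blood coagulation cascade; SciPy's odeint wraps the LSODA integrator, and the rate constants, initial conditions, and sweep range are illustrative assumptions.

import numpy as np
from scipy.integrate import odeint

def mass_action(y, t, k1, k2):
    a, b, c = y
    v1 = k1 * a * b            # A + B -> C
    v2 = k2 * c                # C -> A + B
    return [-v1 + v2, -v1 + v2, v1 - v2]

t = np.linspace(0.0, 10.0, 200)
y0 = [1.0, 0.5, 0.0]

# One-dimensional parameter sweep over k1; on a GPU-based tool each sweep point
# would typically be simulated by a separate thread.
for k1 in np.logspace(-2, 1, 8):
    trajectory = odeint(mass_action, y0, t, args=(k1, 0.1))
    print(k1, trajectory[-1, 2])      # final concentration of C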
Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2013-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damage if strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and the fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.
Computation of elementary modes: a unifying framework and the new binary approach
Gagneur, Julien; Klamt, Steffen
2004-01-01
Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. In recent years there has been a proliferation of algorithms dedicated to it. A summarizing point of view and continued improvement of the current methods are required. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
Teeter, Matthew G; Langohr, G Daniel G; Medley, John B; Holdsworth, David W
2014-02-01
The purpose of this study was to determine the ability of micro-computed tomography to quantify wear in preclinical pin-on-plate testing of materials for use in joint arthroplasty. Wear testing of CoCr pins articulating against six polyetheretherketone plates was performed using a pin-on-plate apparatus over 2 million cycles. Change in volume due to wear was quantified with gravimetric analysis and with micro-computed tomography, and the volumes were compared. Separately, the volume of polyetheretherketone pin-on-plate specimens that had been soaking in fluid for 52 weeks was quantified with both gravimetric analysis and micro-computed tomography, and repeated after drying. The volume change with micro-computed tomography was compared to the mass change with gravimetric analysis. The mean wear volume measured was 8.02 ± 6.38 mm³ with gravimetric analysis and 6.76 ± 5.38 mm³ with micro-computed tomography (p = 0.06). Micro-computed tomography volume measurements did not show a statistically significant change with drying for either the plates (p = 0.60) or the pins (p = 0.09), yet drying had a significant effect on the gravimetric mass measurements for both the plates (p = 0.03) and the pins (p = 0.04). Micro-computed tomography provided accurate measurements of wear in polyetheretherketone pin-on-plate test specimens, and no statistically significant change was caused by fluid uptake. Micro-computed tomography quantifies wear depth and wear volume, mapped to the specific location of damage on the specimen, and is also capable of examining subsurface density as well as cracking. Its noncontact, nondestructive nature makes it ideal for preclinical testing of materials, in which further additional analysis techniques may be utilized.
Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.
Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T
2017-07-01
Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
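A minimal sketch of the standard (purely ohmic) CSD estimator that the passage refers to: the second spatial derivative of the LFP along the electrode, scaled by the extracellular conductivity sigma. The electrode spacing, conductivity value, and synthetic recording are assumptions.

import numpy as np

def standard_csd(lfp, spacing_m=1e-4, sigma=0.3):
    """lfp: (n_channels, n_timepoints) potentials in volts along a laminar probe.
    Returns CSD in A/m^3 for the interior channels (first and last channels dropped)."""
    second_diff = lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]
    return -sigma * second_diff / spacing_m ** 2

rng = np.random.default_rng(0)
lfp = rng.normal(scale=1e-4, size=(16, 1000))     # synthetic 16-channel recording
csd = standard_csd(lfp)
print(csd.shape)                                  # (14, 1000)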
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
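A minimal sketch of the idea described above: co-occurrence-based texture measures such as contrast and correlation can be computed directly from the co-occurring pixel pairs, using only sums, sums of squares, and cross products, without ever building or storing a co-occurrence matrix. The image, gray-level range, and pixel offset are illustrative assumptions.

import numpy as np

def cooccurrence_texture(image, dx=1, dy=0):
    """Contrast and correlation for pixel pairs at offset (dx, dy), no matrix stored."""
    a = image[:, :-dx] if dx else image           # first pixel of each pair
    b = image[:, dx:] if dx else image            # neighbour at the given offset
    if dy:
        a, b = a[:-dy, :], b[dy:, :]
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    contrast = np.mean((a - b) ** 2)              # same value a co-occurrence matrix would give
    correlation = np.corrcoef(a, b)[0, 1]         # from sums, squares, and cross products
    return contrast, correlation

image = (np.random.default_rng(0).random((256, 256)) * 63).astype(np.uint8)
print(cooccurrence_texture(image, dx=1, dy=0))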
An analytical procedure for evaluating shuttle abort staging aerodynamic characteristics
NASA Technical Reports Server (NTRS)
Meyer, R.
1973-01-01
An engineering analysis and computer code (AERSEP) for predicting Space Shuttle Orbiter - HO Tank longitudinal aerodynamic characteristics during abort separation has been developed. Computed results are applicable at Mach numbers above 2 for angle-of-attack between plus or minus 10 degrees. No practical restrictions on orbiter-tank relative positioning are indicated for tank-under-orbiter configurations. Input data requirements and computer running times are minimal facilitating program use for parametric studies, test planning, and trajectory analysis. In a majority of cases AERSEP Orbiter-Tank interference predictions are as accurate as state-of-the-art estimates for interference-free or isolated-vehicle configurations. AERSEP isolated-orbiter predictions also show excellent correlation with data.
Wee, Leonard; Hackett, Sara Lyons; Jones, Andrew; Lim, Tee Sin; Harper, Christopher Stirling
2013-01-01
This study evaluated the agreement of fiducial marker localization between two modalities — an electronic portal imaging device (EPID) and cone‐beam computed tomography (CBCT) — using a low‐dose, half‐rotation scanning protocol. Twenty‐five prostate cancer patients with implanted fiducial markers were enrolled. Before each daily treatment, EPID and half‐rotation CBCT images were acquired. Translational shifts were computed for each modality and two marker‐matching algorithms, seed‐chamfer and grey‐value, were performed for each set of CBCT images. The localization offsets, and systematic and random errors from both modalities were computed. Localization performances for both modalities were compared using Bland‐Altman limits of agreement (LoA) analysis, Deming regression analysis, and Cohen's kappa inter‐rater analysis. The differences in the systematic and random errors between the modalities were within 0.2 mm in all directions. The LoA analysis revealed a 95% agreement limit of the modalities of 2 to 3.5 mm in any given translational direction. Deming regression analysis demonstrated that constant biases existed in the shifts computed by the modalities in the superior–inferior (SI) direction, but no significant proportional biases were identified in any direction. Cohen's kappa analysis showed good agreement between the modalities in prescribing translational corrections of the couch at 3 and 5 mm action levels. Images obtained from EPID and half‐rotation CBCT showed acceptable agreement for registration of fiducial markers. The seed‐chamfer algorithm for tracking of fiducial markers in CBCT datasets yielded better agreement than the grey‐value matching algorithm with EPID‐based registration. PACS numbers: 87.55.km, 87.55.Qr PMID:23835391
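A minimal sketch of the Bland-Altman limits-of-agreement calculation used above, applied to one translational direction; the shift values below are synthetic stand-ins for the daily EPID and CBCT registrations, not the study's data.

import numpy as np

def bland_altman(epid_shifts, cbct_shifts):
    """Return (bias, lower LoA, upper LoA) for paired shift measurements in mm."""
    diffs = np.asarray(epid_shifts, float) - np.asarray(cbct_shifts, float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(0)
cbct = rng.normal(0.0, 3.0, 100)              # daily SI shifts from CBCT (mm)
epid = cbct + rng.normal(0.4, 1.2, 100)       # EPID shifts with a small bias and noise
print(bland_altman(epid, cbct))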
Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing
NASA Astrophysics Data System (ADS)
Wyld, David C.
Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing - for data storage, applications, and processing power - as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and looks at the risks and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.
Transmission of Hepatitis C Virus during Computed Tomography Scanning with Contrast
Rius, Cristina; Caylà, Joan A.
2008-01-01
Six cases of acute hepatitis C related to computed tomography scanning with contrast were identified in 3 hospitals. A patient with chronic hepatitis C had been subjected to the same procedure immediately before each patient who developed acute infection. Viral molecular analysis showed identity between isolates from cases with acute and chronic hepatitis C. PMID:18258135
Concurrent EEG And NIRS Tomographic Imaging Based on Wearable Electro-Optodes
2014-04-13
... interfaces (BCIs), and other systems in the same computational framework. ... Improving Brain-Computer Interfaces Using Independent Component Analysis, in: Towards Future BCIs, 2012
ERIC Educational Resources Information Center
Çoknaz, Dilsad; Aktag, Isil
2017-01-01
In this study, computer self-efficacy of Turkish undergraduate sport management students was investigated. There were a total of 295 sport management students from three universities. Data were collected by a survey developed by Compeau and Higgins (1995), translated into Turkish and adapted for students by Aktag (2013). The results showed that…
ERIC Educational Resources Information Center
Murfin, Brian
1994-01-01
Reports on a study of the effectiveness of computer-mediated communication (CMC) in providing African American and female middle school students with scientist role models. Quantitative and qualitative data gathered by analyzing messages students and scientists posted on a shared electronic bulletin board showed that CMC could be an effective…
Streaming support for data intensive cloud-based sequence analysis.
Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed
2013-01-01
Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
20 plus Years of Computational Fluid Dynamics for the Space Shuttle
NASA Technical Reports Server (NTRS)
Gomez, Reynaldo J., III
2011-01-01
This slide presentation reviews the use of computational fluid dynamics in performing analysis of the space shuttle, with particular reference to the return-to-flight analysis and other shuttle problems. Slides show a comparison of the pressure coefficient on the shuttle ascent configuration between wind tunnel tests and computed values; the evolution of the grid system for the Space Shuttle Launch Vehicle (SSLV) from the early 1980s to 2004; the grid configuration of the bipod ramp redesign from the original design to the current configuration; charts comparing computed solid rocket booster surface pressures, calculated over two grid systems (the original 14-grid system and the enhanced 113-grid system), with wind tunnel data; and a comparison of the computed flight orbiter wing loads with strain gage data from STS-50. The loss of STS-107 initiated an unprecedented review of all external environments. The current SSLV grid system of 600+ grids, 1.8 million surface points, and 95+ million volume points is shown. The in-flight entry analyses are shown, and the use of overset CFD as a key part of many external tank redesign and debris assessments is discussed. The work that still remains to be accomplished for future shuttle flights is discussed.
ERIC Educational Resources Information Center
Williams, E. D.
1989-01-01
Discussed is a radionuclide imaging technique, including the gamma camera, image analysis computer, radiopharmaceuticals, and positron emission tomography. Several pictures showing the use of this technique are presented. (YP)
Sex Estimation From Sternal Measurements Using Multidetector Computed Tomography
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur
2014-01-01
Abstract We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods for sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated in 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years [range: 30–60 years]) were included in the study. Manubrium length (ML), mesosternum length (MSL), Sternebra 1 width (S1W), and Sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values are significantly higher than female values (P < 0.001), whereas SI is significantly lower in males (P < 0.001). In discriminant analysis, MSL has a high accuracy rate, 80.2% in females and 80.9% in males; MSL also has the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in the three stepwise discriminant analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) has the highest accuracy rate among the stepwise discriminant analyses, with 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum may provide important information for sex estimation. PMID:25501090
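For readers who want a concrete picture of the discriminant and ROC steps described above, the sketch below fits a linear discriminant and computes an ROC curve on placeholder measurements. The data are synthetic stand-ins, not values from the study, and scikit-learn's LDA is only one possible implementation of the discriminant step:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score, roc_curve

# Rows of [ML, MSL, S1W, S3W] measurements (mm); y: 0 = female, 1 = male.
# The numbers below are placeholders, not data from the paper.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([50, 90, 28, 25], 5, (200, 4)),
               rng.normal([55, 100, 32, 28], 5, (200, 4))])
y = np.r_[np.zeros(200), np.ones(200)]

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)          # discriminant scores per subject
accuracy = lda.score(X, y)                 # classification accuracy rate
auc = roc_auc_score(y, scores)
fpr, tpr, thresholds = roc_curve(y, scores)  # ROC analysis for cut-off selection
```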
NASA Astrophysics Data System (ADS)
Davis, S. J.; Egolf, T. A.
1980-07-01
Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free-field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.
Inci, Ercan; Ekizoglu, Oguzhan; Turkay, Rustu; Aksoy, Sema; Can, Ismail Ozgur; Solmaz, Dilek; Sayin, Ibrahim
2016-10-01
Morphometric analysis of the mandibular ramus (MR) provides highly accurate data to discriminate sex. The objective of this study was to demonstrate the utility and accuracy of MR morphometric analysis for sex identification in a Turkish population. Four hundred fifteen Turkish patients (18-60 y; 201 male and 214 female) who had previously had multidetector computed tomography scans of the cranium were included in the study. Multidetector computed tomography images were obtained using three-dimensional reconstructions and a volume-rendering technique, and 8 linear and 3 angular values were measured. Univariate, bivariate, and multivariate discriminant analyses were performed, and the accuracy rates for determining sex were calculated. Mandibular ramus values produced accuracy rates ranging from 51% to 95.6%. Upper ramus vertical height had the highest rate at 95.6%, and bivariate analysis showed 89.7% to 98.6% accuracy rates, with the highest ratios coming from mandibular flexure upper border and maximum ramus breadth. Stepwise discriminant analysis gave a 99% accuracy rate for all MR variables. Our study showed that the MR, in particular morphometric measures of the upper part of the ramus, can provide valuable data to determine sex in a Turkish population. The method combines both anthropological and radiologic studies.
NASA Astrophysics Data System (ADS)
Yildirim, Serdar; Montanari, Simona; Andersen, Elaine; Narayanan, Shrikanth S.
2003-10-01
Understanding the fine details of children's speech and gestural characteristics helps, among other things, in creating natural computer interfaces. We analyze the acoustic, lexical/non-lexical, and spoken/gestural discourse characteristics of young children's speech using audio-video data gathered with a Wizard of Oz technique from 4 to 6 year old children engaged in resolving a series of age-appropriate cognitive challenges. Fundamental and formant frequencies exhibited greater variation between subjects, consistent with previous results on read speech [Lee et al., J. Acoust. Soc. Am. 105, 1455-1468 (1999)]. Also, our analysis showed that, in a given bandwidth, the phonemic information contained in the speech of young children is significantly less than that of older children and adults. To enable an integrated analysis, a multi-track annotation board was constructed using the ANVIL tool kit [M. Kipp, Eurospeech 1367-1370 (2001)]. Along with speech transcriptions and acoustic analysis, non-lexical and discourse characteristics and the child's gestures (facial expressions, body movements, hand/head movements) were annotated in a synchronized multilayer system. Initial results showed that younger children rely more on gestures to emphasize their verbal assertions. Younger children use non-lexical speech (e.g., um, huh) associated with frustration and pondering/reflecting more frequently than older ones. Younger children also repair more with humans than with the computer.
Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F
2007-01-01
This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and the manual gold standard. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness, and computation speed, adequate for use in a clinical environment.
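A rough Python sketch of the ICA step, using scikit-learn's FastICA to decompose a perfusion frame stack into components with their time-intensity behavior and rebuild a reference series from them; this is an illustration of the idea only, not the authors' implementation, and the actual registration step is omitted:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_reference_series(frames, n_components=3):
    """Decompose a perfusion image stack with ICA and rebuild a
    time-varying reference series from the components.

    frames : array (n_frames, ny, nx); n_components is an assumed choice.
    """
    n_frames = frames.shape[0]
    X = frames.reshape(n_frames, -1)           # one flattened image per row
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(X)             # time-intensity behavior per component
    # reference frames mimicking the intensity changes captured by ICA
    reference = ica.inverse_transform(sources).reshape(frames.shape)
    return reference, sources
```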
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the accompanying efficiency analysis show that the method is very promising.
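To make the Marcus (canonical) integral concrete, the sketch below simulates a one-dimensional SDE driven by a compound Poisson process, applying the Marcus rule at each jump by flowing along the noise vector field. It is a minimal illustration under my own naming, not the paper's exact pathwise algorithm:

```python
import numpy as np

def marcus_path(f, g, x0, T, jump_rate, jump_sampler, dt=1e-3, n_flow=20, seed=0):
    """Pathwise simulation of dx = f(x) dt + g(x) <> dL, where L is a compound
    Poisson process and <> denotes the Marcus (canonical) product."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, float(x0)
    ts, xs = [t], [x]
    t_jump = rng.exponential(1.0 / jump_rate)      # time of the next jump
    while t < T:
        h = min(dt, T - t, t_jump - t)
        x += f(x) * h                              # Euler step for the drift between jumps
        t += h
        if np.isclose(t, t_jump) and t < T:
            dL = jump_sampler(rng)
            # Marcus rule: flow along dy/ds = g(y) * dL for s in [0, 1]
            for _ in range(n_flow):
                x += g(x) * dL / n_flow
            t_jump += rng.exponential(1.0 / jump_rate)
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)
```

For example, with f = lambda x: -x, g = lambda x: x and jump_sampler = lambda rng: rng.normal(0, 0.3), each jump acts multiplicatively on x, consistent with the ordinary chain rule that motivates the Marcus construction.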
Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.
2006-01-01
This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis, which included over 250,000 jobs, using resources available through SCEC and the TeraGrid. © 2006 IEEE.
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
NASA Technical Reports Server (NTRS)
White, P. R.; Little, R. R.
1985-01-01
A research effort was undertaken to develop personal computer based software for vibrational analysis. The software was developed to analytically determine the natural frequencies and mode shapes for the uncoupled lateral vibrations of the blade and counterweight assemblies used in a single bladed wind turbine. The uncoupled vibration analysis was performed in both the flapwise and chordwise directions for static rotor conditions. The effects of rotation on the uncoupled flapwise vibration of the blade and counterweight assemblies were evaluated for various rotor speeds up to 90 rpm. The theory, used in the vibration analysis codes, is based on a lumped mass formulation for the blade and counterweight assemblies. The codes are general so that other designs can be readily analyzed. The input for the codes is generally interactive to facilitate usage. The output of the codes is both tabular and graphical. Listings of the codes are provided. Predicted natural frequencies of the first several modes show reasonable agreement with experimental results. The analysis codes were originally developed on a DEC PDP 11/34 minicomputer and then downloaded and modified to run on an ITT XTRA personal computer. Studies conducted to evaluate the efficiency of running the programs on a personal computer as compared with the minicomputer indicated that, with the proper combination of hardware and software options, the efficiency of using a personal computer exceeds that of a minicomputer.
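The core numerical step in such a lumped-mass vibration code is a generalized eigenvalue problem built from the mass and stiffness matrices. A compact Python sketch of that step (generic, not the original codes):

```python
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(M, K):
    """Natural frequencies (Hz) and mode shapes from lumped mass and
    stiffness matrices, via the generalized eigenvalue problem K v = w^2 M v."""
    w2, modes = eigh(K, M)                        # eigenvalues are squared angular frequencies
    freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
    return freqs_hz, modes
```

Rotational effects such as the centrifugal stiffening evaluated in the study would enter through speed-dependent additions to K before the eigenvalue solve.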
Naturalistic Decision Making: Implications for Design
1993-04-01
Cognitive Task Analysis; Decision Making; Design Engineer; Design System; Human-Computer Interface; System Development. ... people use to select a course of action. The SOAR explains how stress affects the decision making of both individuals and teams. Cognitive Task Analysis: This ... procedures for Cognitive Task Analysis, contrasting the strengths and weaknesses of each, and showing how a Cognitive Task Analysis ...
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Owen, H. A.; Wilson, T. G.; Rodriguez, G. E.
1974-01-01
The simulation of converter-controller combinations by means of a flexible digital computer program which produces output to a graphic display is discussed. The procedure is an alternative to mathematical analysis of converter systems. The types of computer programming involved in the simulation are described. Schematic diagrams, state equations, and output equations are displayed for four basic forms of inductor-energy-storage dc to dc converters. Mathematical models are developed to show the relationship of the parameters.
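As an illustration of the kind of state equations involved, the sketch below integrates a state-space averaged model of one inductor-energy-storage (buck) converter in Python rather than through the graphic-display simulation described above; the component values and duty ratio are assumed purely for the example, and switching ripple is not modeled:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Averaged model of a buck converter: states x = [iL, vC], duty ratio d.
# Component values below are assumptions for illustration only.
L, C, R, Vin, d = 100e-6, 470e-6, 5.0, 12.0, 0.5

def buck_rhs(t, x):
    iL, vC = x
    diL = (d * Vin - vC) / L          # inductor volt-second balance
    dvC = (iL - vC / R) / C           # capacitor charge balance
    return [diL, dvC]

sol = solve_ivp(buck_rhs, (0.0, 0.02), [0.0, 0.0], max_step=1e-5)
# sol.y[1] traces the output-voltage transient settling toward d * Vin = 6 V
```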
Computational Age Dating of Special Nuclear Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2012-06-30
This slide show presented an overview of the Constrained Progressive Reversal (CPR) method for computing decays, age dating, and spoof detecting. The CPR method is capable of temporal profiling of an SNM sample, precise (compared with a known decay code such as ORIGEN), and easy to implement and analyze on a computer. We have illustrated, with real SNM data, the use of CPR for age dating and spoof detection. If the SNM is pure, CPR may be used to derive its age. If the SNM is mixed, CPR will indicate that it is mixed or spoofed.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
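A toy Python interval class shows the flavor of the approach (INTLAB itself is a MATLAB toolbox; this sketch is only illustrative and ignores outward rounding):

```python
class Interval:
    """Minimal interval type for automatic error propagation."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.6g}, {self.hi:.6g}]"

# a measurement of 2.0 +/- 0.01 multiplied by one of 3.0 +/- 0.02
x, y = Interval(1.99, 2.01), Interval(2.98, 3.02)
print(x * y)   # the result encloses every value the product can take
```

Propagating a complicated formula through such a type yields an error bound in one pass, which is the effort saving the comparison above refers to.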
Improved parallel data partitioning by nested dissection with applications to information retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that partitioning time can be substantially reduced by using the SCOTCH software, and that quality improves in some cases, too.
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
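The regularity statistic tuned above, SampEn, has a compact direct definition that is easy to compute. A plain reference implementation in Python (not the DASim code; the default tolerance r = 0.2 times the standard deviation is a common convention, not necessarily the paper's choice):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B counts
    template matches of length m and A counts matches of length m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                            # exclude the self-match
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

Lower SampEn indicates a more regular schedule; tuning an activity's regularity amounts to adjusting schedule parameters until this value hits a target.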
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
Computed tomographic findings of trichuriasis
Tokmak, Naime; Koc, Zafer; Ulusan, Serife; Koltas, Ismail Soner; Bal, Nebil
2006-01-01
In this report, we present computed tomographic findings of colonic trichuriasis. The patient was a 75-year-old man who complained of abdominal pain and weight loss. Diagnosis was achieved by colonoscopic biopsy. Abdominal computed tomography showed irregular and nodular thickening of the wall of the cecum and ascending colon. Although these findings are nonspecific, they may be one of the findings of trichuriasis. Pathologic analysis of the biopsied tissue and the Kato-Katz parasitological stool flotation technique confirmed these findings and revealed adult Trichuris. To our knowledge, this is the first report of colonic trichuriasis indicated by computed tomography. PMID:16830393
ERIC Educational Resources Information Center
Teo, Timothy
2010-01-01
The purpose of this study is to examine pre-service teachers' attitudes to computers. This study extends the technology acceptance model (TAM) framework by adding subjective norm, facilitating conditions, and technological complexity as external variables. Results show that the TAM and subjective norm, facilitating conditions, and technological…
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulate wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on values of wave run-up, such that changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
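The basic idea of splitting the k-means assignment step across cores can be sketched in a few lines. The example below uses a Python process pool purely for illustration; the authors' implementation is Java-based and built on transactional-memory design principles, which this sketch does not reproduce:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _assign(chunk, centroids):
    """Nearest-centroid assignment for one chunk of rows (runs on one core)."""
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k=3, n_iter=20, n_workers=4, seed=0):
    """Multi-core k-means sketch: the assignment step is split across worker
    processes, the centroid update stays serial. Empty clusters are not handled."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_iter):
            labels = np.concatenate(list(pool.map(_assign, chunks,
                                                  [centroids] * n_workers)))
            centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels
```

On platforms that spawn worker processes the call should sit under an if __name__ == "__main__": guard.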
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
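To show the contrast between exact case deletion and an influence-function approximation in the simplest possible setting, the toy sketch below applies both to the sample mean; for the mean the two coincide exactly, whereas for likelihood-based linkage statistics the EIF avoids the per-case refit that makes the ECD expensive. Names and data are illustrative only, not the MAPMAKER/SIBS subroutines:

```python
import numpy as np

def case_deletion_diagnostics(x):
    """Exact case deletion (ECD) versus a one-step empirical influence
    function (EIF) for the sample mean. Toy example only."""
    x = np.asarray(x, dtype=float)
    n, theta = len(x), x.mean()
    ecd = np.array([np.delete(x, i).mean() - theta for i in range(n)])  # exact refit per case
    eif = -(x - theta) / (n - 1)                                        # no refitting required
    return ecd, eif

x = np.r_[np.random.default_rng(1).normal(size=50), 8.0]   # one gross outlier
ecd, eif = case_deletion_diagnostics(x)
# the outlier has by far the largest |influence| under both diagnostics
```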
Nicu, Valentin Paul
2016-08-03
Using two illustrative examples, it is shown that the generalised coupled oscillator (GCO) mechanism implies that the stability of the VCD sign computed for a given normal mode is not reflected by the magnitude of the ratio ζ between the rotational strength and dipole strength of the respective mode, i.e., the VCD robustness criterion proposed by Góbi and Magyarfalvi. The performed VCD GCO analysis brings further insight into the GCO mechanism and also into the VCD robustness concept. First, it shows that the GCO mechanism can be interpreted as a VCD resonance enhancement mechanism, i.e., very large VCD signals can be observed when the interacting molecular fragments are in a favourable orientation. Second, it shows that the uncertainties observed in the computed VCD signs are associated with uncertainties in the relative orientation of the coupled oscillator fragments and/or with uncertainties in the predicted nuclear displacement vectors, i.e., not with uncertainties in the computed magnetic dipole transition moments as was originally assumed. Since it is able to identify such situations easily, the VCD GCO analysis can be used as a VCD robustness analysis.
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
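The base-grid-to-infinite-grid error estimate described above is, in spirit, a Richardson extrapolation over systematically refined grids. A generic Python sketch of that kind of estimate (the study's own convergence procedure may differ in detail):

```python
import numpy as np

def richardson_estimate(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Estimate the infinite-grid value of an aerodynamic coefficient from
    solutions on three systematically refined grids."""
    # observed order of convergence from the three solutions
    p = np.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / np.log(refinement_ratio)
    f_exact = f_fine + (f_fine - f_medium) / (refinement_ratio ** p - 1.0)
    error_fine = abs((f_fine - f_exact) / f_exact)   # relative error estimate of the fine grid
    return f_exact, p, error_fine
```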
Effectiveness of a Case-Based Computer Program on Students' Ethical Decision Making.
Park, Eun-Jun; Park, Mihyun
2015-11-01
The aim of this study was to test the effectiveness of a case-based computer program, using an integrative ethical decision-making model, on the ethical decision-making competency of nursing students in South Korea. This study used a pre- and posttest comparison design. Students in the intervention group used a computer program for case analysis assignments, whereas students in the standard group used a traditional paper assignment for case analysis. The findings showed that using the case-based computer program as a complementary tool for the ethics courses offered at the university enhanced students' ethical preparedness and satisfaction with the course. On the basis of the findings, it is recommended that nurse educators use a case-based computer program as a complementary self-study tool in ethics courses to supplement student learning without an increase in course hours, particularly in terms of analyzing ethics cases with dilemma scenarios and exercising ethical decision making. Copyright 2015, SLACK Incorporated.
Quantum computation and analysis of Wigner and Husimi functions: toward a quantum image treatment.
Terraneo, M; Georgeot, B; Shepelyansky, D L
2005-06-01
We study the efficiency of quantum algorithms which aim at obtaining phase-space distribution functions of quantum systems. Wigner and Husimi functions are considered. Different quantum algorithms are envisioned to build these functions, and compared with the classical computation. Different procedures to extract more efficiently information from the final wave function of these algorithms are studied, including coarse-grained measurements, amplitude amplification, and measure of wavelet-transformed wave function. The algorithms are analyzed and numerically tested on a complex quantum system showing different behavior depending on parameters: namely, the kicked rotator. The results for the Wigner function show in particular that the use of the quantum wavelet transform gives a polynomial gain over classical computation. For the Husimi distribution, the gain is much larger than for the Wigner function and is larger with the help of amplitude amplification and wavelet transforms. We discuss the generalization of these results to the simulation of other quantum systems. We also apply the same set of techniques to the analysis of real images. The results show that the use of the quantum wavelet transform allows one to lower dramatically the number of measurements needed, but at the cost of a large loss of information.
NASA Astrophysics Data System (ADS)
Sathya, K.; Dhamodharan, P.; Dhandapani, M.
2018-05-01
A molecular complex, 1H-benzo[d][1,2,3]triazol-3-ium-3,5-dinitrobenzoate (BTDB), was synthesized, crystallized, and characterized by CHN analysis and 1H, 13C NMR spectral studies. The crystal is transparent over the entire visible region, as evidenced by the UV-Vis-NIR spectrum. TG/DTA analysis shows that BTDB is stable up to 150 °C. Single-crystal XRD analysis was carried out to ascertain the molecular structure, and BTDB crystallizes in the monoclinic system with space group P21/n. Computational studies that include optimization of the molecular geometry, natural bond orbital (NBO) analysis, Mulliken population analysis, and HOMO-LUMO analysis were performed using the Gaussian 09 software by the B3LYP method at the 6-311G(d,p) level. Hirshfeld surfaces and 2D fingerprint plots revealed that O⋯H, H⋯H and O⋯C interactions are the most prevalent. The first-order hyperpolarizability (β) of BTDB is 44 times greater than that of urea. The results show that BTDB may be used for various opto-electronic applications.
Impact of model-based risk analysis for liver surgery planning.
Hansen, C; Zidowitz, S; Preim, B; Stavrou, G; Oldhafer, K J; Hahn, H K
2014-05-01
A model-based risk analysis for oncologic liver surgery was described in previous work (Preim et al. in Proceedings of international symposium on computer assisted radiology and surgery (CARS), Elsevier, Amsterdam, pp. 353–358, 2002; Hansen et al. Int J Comput Assist Radiol Surg 4(5):469–474, 2009). In this paper, we present an evaluation of this method. To assess whether and how the risk analysis facilitates the process of liver surgery planning, an explorative user study with 10 liver experts was conducted. The purpose was to compare and analyze their decision-making. The results of the study show that model-based risk analysis enhances the awareness of surgical risk in the planning stage. Participants preferred smaller resection volumes and agreed more on the safety margins' width when the risk analysis was available. In addition, time to complete the planning task and the confidence of participants were not increased when using the risk analysis. This work shows that the applied model-based risk analysis may influence important planning decisions in liver surgery. It lays a basis for further clinical evaluations and points out important fields for future research.
Toward using games to teach fundamental computer science concepts
NASA Astrophysics Data System (ADS)
Edgington, Jeffrey Michael
Video and computer games have become an important area of study in the field of education. Games have been designed to teach mathematics, physics, raise social awareness, teach history and geography, and train soldiers in the military. Recent work has created computer games for teaching computer programming and understanding basic algorithms. We present an investigation where computer games are used to teach two fundamental computer science concepts: boolean expressions and recursion. The games are intended to teach the concepts and not how to implement them in a programming language. For this investigation, two computer games were created. One is designed to teach basic boolean expressions and operators and the other to teach fundamental concepts of recursion. We describe the design and implementation of both games. We evaluate the effectiveness of these games using before and after surveys. The surveys were designed to ascertain basic understanding, attitudes and beliefs regarding the concepts. The boolean game was evaluated with local high school students and students in a college level introductory computer science course. The recursion game was evaluated with students in a college level introductory computer science course. We present the analysis of the collected survey information for both games. This analysis shows a significant positive change in student attitude towards recursion and modest gains in student learning outcomes for both topics.
Teaching computer interfacing with virtual instruments in an object-oriented language.
Gulotta, M
1995-01-01
LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given. PMID:8580361
Teaching computer interfacing with virtual instruments in an object-oriented language.
Gulotta, M
1995-11-01
LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given.
NASA Technical Reports Server (NTRS)
Davis, S. J.; Egolf, T. A.
1980-01-01
Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free-field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.
An Application of Overset Grids to Payload/Fairing Three-Dimensional Internal Flow CFD Analysis
NASA Technical Reports Server (NTRS)
Kandula, Max; Nallasamy, R.; Schallhorn, P.; Duncil, L.
2007-01-01
The application of overset grids to the computational fluid dynamics analysis of three-dimensional internal flow in the payload/fairing of an expendable launch vehicle is described. In conjunction with the overset grid system, the flowfield in the payload/fairing configuration is obtained with the aid of the OVERFLOW Navier-Stokes code. The solution exhibits a highly three-dimensional, complex flowfield with swirl, separation, and vortices. Some of the computed flow features are compared with measured Laser-Doppler Velocimetry (LDV) data on a 1/5th-scale model of the payload/fairing configuration. The counter-rotating vortex structures and the location of the saddle point predicted by the CFD analysis are in general agreement with the LDV data. Comparisons of the computed (CFD) velocity profiles on horizontal and vertical lines in the LDV measurement plane in the fairing nose region show reasonable agreement with the LDV data.
NASA Astrophysics Data System (ADS)
Akın, Ata
2017-12-01
A theoretical framework, a partial correlation-based functional connectivity (PC-FC) analysis applied to functional near-infrared spectroscopy (fNIRS) data, is proposed. It is based on generating a common background signal, from a high-passed version of the fNIRS data averaged over all channels, as the regressor in computing the PC between pairs of channels. This approach has been employed on real data collected during a Stroop task. The results show a strong significance in the global efficiency (GE) metric computed by the PC-FC analysis for neutral, congruent, and incongruent stimuli (NS, CS, IcS; GEN=0.10±0.009, GEC=0.11±0.01, GEIC=0.13±0.015, p=0.0073). A positive correlation (r=0.729 and p=0.0259) is observed between the interference of reaction times (incongruent-neutral) and the interference of GE values (GEIC-GEN) computed from [HbO] signals.
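The partial-correlation step with a common background regressor can be sketched directly: regress the background signal out of both channels and correlate the residuals. A generic Python sketch, not the authors' code:

```python
import numpy as np

def partial_correlation(x, y, z):
    """Partial correlation between channels x and y controlling for a
    common background regressor z (all 1-D arrays of equal length)."""
    def residual(a, b):
        design = np.c_[np.ones_like(b), b]               # regress out b, with intercept
        beta, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ beta
    rx, ry = residual(x, z), residual(y, z)
    return np.corrcoef(rx, ry)[0, 1]
```

The resulting channel-pair PC values form the adjacency matrix from which graph metrics such as global efficiency are then computed.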
Hierarchical Parallelization of Gene Differential Association Analysis
2011-01-01
Background Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916
Hierarchical parallelization of gene differential association analysis.
Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing
2011-09-21
Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
Imai, Kazuhiro
2015-01-01
Finite element analysis (FEA) is an advanced computer technique of structural stress analysis developed in engineering mechanics. Because the compressive response of vertebral bone is nonlinear, a nonlinear FEA should be utilized to analyze clinical vertebral fracture. In this article, a computed tomography-based nonlinear FEA (CT/FEA) to analyze vertebral bone strength, fracture pattern, and fracture location is introduced. The accuracy of the CT/FEA was validated by performing experimental mechanical testing with human cadaveric specimens. Vertebral bone strength and the minimum principal strain at the vertebral surface were accurately analyzed using the CT/FEA. The experimental fracture pattern and fracture location were also accurately simulated. Optimization of the element size was performed by assessing the accuracy of the CT/FEA, and the optimum element size was assumed to be 2 mm. It is expected that the CT/FEA will be valuable in analyzing vertebral fracture risk and assessing therapeutic effects on osteoporosis. PMID:26029476
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared with the APA technique, the proposed method increases the computing speed by more than a hundredfold for the same errors. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
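The automatic step adjustment idea can be illustrated with a simple step-doubling controller that marches a power vector along the fibre, shrinking or growing the step to hold the local error below a tolerance. This is a generic sketch of the mechanism only, not the paper's higher-order APA formulas:

```python
import numpy as np

def adaptive_power_march(rhs, P0, L, tol=1e-6, h0=0.01):
    """March the pump/signal power vector P along a fibre of length L with an
    automatically adjusted step, using step doubling for error control.

    rhs(z, P) returns dP/dz; tol and h0 are assumed defaults for illustration.
    """
    def rk2(P, z, h):                       # one second-order (midpoint) step
        k1 = rhs(z, P)
        return P + h * rhs(z + 0.5 * h, P + 0.5 * h * k1)

    z, P, h = 0.0, np.asarray(P0, dtype=float), h0
    while z < L:
        h = min(h, L - z)
        big = rk2(P, z, h)                                    # one full step
        small = rk2(rk2(P, z, 0.5 * h), z + 0.5 * h, 0.5 * h)  # two half steps
        err = np.max(np.abs(big - small))
        if err <= tol:
            z, P = z + h, small             # accept the more accurate result
            h *= 1.5                        # and try a larger step next time
        else:
            h *= 0.5                        # reject and retry with a smaller step
    return P
```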
An insight to the molecular interactions of the FDA approved HIV PR drugs against L38L↑N↑L PR mutant
NASA Astrophysics Data System (ADS)
Sanusi, Zainab K.; Govender, Thavendran; Maguire, Glenn E. M.; Maseko, Sibusiso B.; Lin, Johnson; Kruger, Hendrik G.; Honarparvar, Bahareh
2018-03-01
The aspartate protease of human immunodeficiency virus type 1 (HIV-1) has become a crucial antiviral target for which many useful antiretroviral inhibitors have been developed. However, the emergence of new HIV-1 PR mutations enhances drug resistance; hence, the available FDA-approved drugs show less activity towards the protease. A mutation and insertion designated L38L↑N↑L PR was recently reported from the C-SA subtype of HIV-1. An integrated two-layered ONIOM (QM:MM) method was employed in this study to examine the binding affinities of nine HIV PR inhibitors against this mutant. The computed binding free energies as well as experimental data revealed a reduced inhibitory activity towards the L38L↑N↑L PR in comparison with subtype C-SA HIV-1 PR. This observation suggests that the insertion and mutations significantly affect the binding affinities or characteristics of the HIV PIs and/or the parent PR. The same trend in the computational binding free energies was observed for eight of the nine inhibitors with respect to the experimental binding free energies. The outcome of this study shows that the ONIOM method can be used as a reliable computational approach to rationalize lead compounds against specific targets. The nature of the intermolecular interactions, in terms of the host-guest hydrogen bond interactions, is discussed using the atoms in molecules (AIM) analysis. Natural bond orbital (NBO) analysis was also used to determine the extent of charge transfer between the QM region of the L38L↑N↑L PR enzyme and the FDA-approved drugs. AIM analysis showed that the interactions between the QM region of the L38L↑N↑L PR and the FDA-approved drugs are electrostatically dominant, and the bond stability computed from the NBO analysis supports the results of the AIM analysis. Future studies will focus on the improvement of the computational model by considering explicit water molecules in the active pocket. We believe that this approach has the potential to provide information that will aid in the design of much improved HIV-1 PR antiviral drugs.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
Computational analysis of blood clot dissolution using a vibrating catheter tip.
Lee, Jeong Hyun; Oh, Jin Sun; Yoon, Bye Ri; Choi, Seung Hong; Rhee, Kyehan; Jho, Jae Young; Han, Moon Hee
2012-04-01
We developed a novel concept of endovascular thrombolysis that employs a vibrating electroactive polymer actuator. In order to predict the efficacy of thrombolysis using the developed vibrating actuator, enzyme (plasminogen activator) perfusion into a clot was analyzed by solving flow fields and species transport equations considering the fluid structure interaction. In vitro thrombolysis experiments were also performed. Computational results showed that plasminogen activator perfusion into a clot was enhanced by actuator vibration at frequencies of 1 and 5 Hz. Plasminogen activator perfusion was affected by the actuator oscillation frequencies and amplitudes that were determined by electromechanical characteristics of a polymer actuator. Computed plasminogen activator perfused volumes were compared with experimentally measured dissolved clot volumes. The computed plasminogen activator perfusion volumes with threshold concentrations of 16% of the initial plasminogen activator concentration agreed well with the in vitro experimental data. This study showed the effectiveness of actuator oscillation on thrombolysis and the validity of the computational plasminogen activator perfusion model for predicting thrombolysis in complex flow fields induced by an oscillating actuator.
Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals
NASA Technical Reports Server (NTRS)
Dempsey, Brian Paul
1997-01-01
Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 °C, i.e. the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
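The actual data reduction couples transient conduction and convection, but the two-heating-rate idea can be illustrated with a much simpler steady balance q = h (T_s - T_rec): two runs with different heating rates give two equations in the two unknowns h and T_rec. The numbers below are purely illustrative and are not taken from the experiment.

    def solve_h_and_recovery(q1, T1, q2, T2):
        # Solve q_i = h * (T_i - T_rec) for the heat transfer coefficient h
        # and the jet recovery temperature T_rec (steady-state simplification
        # of the two-test idea; the real reduction is transient and iterative).
        h = (q1 - q2) / (T1 - T2)       # W/(m^2 K)
        T_rec = T1 - q1 / h             # same units as T1, T2
        return h, T_rec

    # illustrative numbers only
    h, T_rec = solve_h_and_recovery(q1=5000.0, T1=60.0, q2=2500.0, T2=42.5)
    print(h, T_rec)   # about 143 W/(m^2 K) and 25 degrees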
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
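The note itself is not reproduced here; the sketch below shows one common way the idea works in practice, with each average appearing as the fitted coefficient of an intercept-only regression, possibly after weighting or transforming the response. The data values are arbitrary.

    import numpy as np
    import statsmodels.api as sm

    y = np.array([2.0, 4.0, 8.0])        # arbitrary data
    w = np.array([1.0, 2.0, 1.0])        # arbitrary weights
    X = np.ones_like(y)                  # intercept-only design matrix

    arithmetic = sm.OLS(y, X).fit().params[0]                  # plain mean
    weighted   = sm.WLS(y, X, weights=w).fit().params[0]       # weighted average
    geometric  = np.exp(sm.OLS(np.log(y), X).fit().params[0])  # geometric mean
    harmonic   = 1.0 / sm.OLS(1.0 / y, X).fit().params[0]      # harmonic mean
    print(arithmetic, weighted, geometric, harmonic)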
Chimera grids in the simulation of three-dimensional flowfields in turbine-blade-coolant passages
NASA Technical Reports Server (NTRS)
Stephens, M. A.; Rimlinger, M. J.; Shih, T. I.-P.; Civinskas, K. C.
1993-01-01
When computing flows inside geometrically complex turbine-blade coolant passages, the structure of the grid system used can affect significantly the overall time and cost required to obtain solutions. This paper addresses this issue while evaluating and developing computational tools for the design and analysis of coolant-passages, and is divided into two parts. In the first part, the various types of structured and unstructured grids are compared in relation to their ability to provide solutions in a timely and cost-effective manner. This comparison shows that the overlapping structured grids, known as Chimera grids, can rival and in some instances exceed the cost-effectiveness of unstructured grids in terms of both the man hours needed to generate grids and the amount of computer memory and CPU time needed to obtain solutions. In the second part, a computational tool utilizing Chimera grids was used to compute the flow and heat transfer in two different turbine-blade coolant passages that contain baffles and numerous pin fins. These computations showed the versatility and flexibility offered by Chimera grids.
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three-dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. Poroelastic media can describe the phenomena of hydrocarbon reservoirs better than acoustic and elastic media. However, numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using the message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and that 3D domain decomposition is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a realistic description of wave propagation.
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods.
4D Animation Reconstruction from Multi-Camera Coordinates Transformation
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Chou, C. M.
2016-06-01
Reservoir dredging is important to extend the life of a reservoir. The most effective and lowest-cost approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for computing the motion parameters are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and computing the relative orientation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, which is easier and more efficient.
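A 3D conformal (seven-parameter similarity) transformation between two sets of corresponding target coordinates can be estimated in closed form; the sketch below uses a familiar SVD-based least-squares solution as a generic stand-in, not the authors' implementation, and the point arrays are assumed inputs.

    import numpy as np

    def conformal_3d(src, dst):
        # Least-squares estimate of scale s, rotation R, and translation t
        # such that dst is approximately s * R @ src + t (src, dst: n x 3).
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A)                    # cross-covariance
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt                                       # proper rotation
        s = np.trace(np.diag(S) @ D) / (A ** 2).sum()        # scale
        t = mu_d - s * R @ mu_s
        return s, R, t

Residuals of dst - (s * R @ src + t) then indicate the quality of the recovered motion parameters at each epoch.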
ERIC Educational Resources Information Center
WITMER, DAVID R.
Wisconsin State Universities have been using the computer as a management tool to study physical facilities inventories, space utilization, and enrollment and plant projections. Examples are shown graphically and described for different types of analysis, showing the card format, coding systems, and printout. Equations are provided for determining…
Combat Simulation Using Breach Computer Language
1979-09-01
BREACH is a simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of the BREACH language for modeling tank duel simulations and dynamic assaults.
Computer-assisted semen analysis and its utility for profiling boar semen samples.
Didion, B A
2008-11-01
Achieving and maintaining a successful swine AI program depends on a number of factors, including accurate semen evaluation, typically sperm motility, morphology and concentration. Computer-Assisted Semen Analysis or CASA (i.e., image analysis with a phase-contrast microscope and computer measurements of motion parameters) objectively evaluates sperm motion characteristics, morphology and concentration. A total of 3077 semen collections were evaluated with CASA (on the day of collection), and a semen dose subset was used for single-sire AI of 6266 females over 6 months. Fertility data from these inseminations were fitted with models including farm/stud, line, boar, parity, mating week, semen age at mating and boar age at mating. The residuals from these models showed no correlation for any CASA semen unique motion parameter, which could be due to the level of sperm concentration, the number of inseminations per estrus, and the low number of females mated per boar. Future studies to expand CASA/fertility analysis need to address these constraints and may include analysis of extended boar semen after storage for 1 week.
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy; Muramoto, Kyle M.
1990-01-01
Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for the finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models having different element densities set up for one cell of the orbiter wing. Also, a method for optimization of the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled for the examination of thermal analysis solution accuracy and the extent of computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies offered the hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is possible if a CPU optimization technique is used.
Design of hat-stiffened composite panels loaded in axial compression
NASA Astrophysics Data System (ADS)
Paul, T. K.; Sinha, P. K.
An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.
Aerothermal Analysis of the Project Fire II Afterbody Flow
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Loomis, Mark; Papadopoulos, Periklis; Arnold, James O. (Technical Monitor)
2001-01-01
Computational fluid dynamics (CFD) is used to simulate the wake flow and afterbody heating of the Project Fire II ballistic reentry to Earth at 11.4 km/sec. Laminar results are obtained over a portion of the trajectory between the initial heat pulse and peak afterbody heating. Although non-catalytic forebody convective heating results are in excellent agreement with previous computations, initial predictions of afterbody heating were about a factor of two below the experimental values. Further analysis suggests that significant catalysis may be occurring on the afterbody heat shield. Computations including finite-rate catalysis on the afterbody surface are in good agreement with the data over the early portion of the trajectory, but are conservative near the peak afterbody heating point, especially on the rear portion of the conical frustum. Further analysis of the flight data from Fire II shows that peak afterbody heating occurs before peak forebody heating, a result that contradicts computations and flight data from other entry vehicles. This result suggests that another mechanism, possibly pyrolysis, may be occurring during the later portion of the trajectory, resulting in less total heat transfer than the current predictions.
Streaming Support for Data Intensive Cloud-Based Sequence Analysis
Issa, Shadi A.; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J.; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed
2013-01-01
Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation. PMID:23710461
Computers in general practice: the patient's voice
Potter, A. R.
1981-01-01
Analysis of answers to a questionnaire on the use of computers in general practice showed that 19 per cent of patients in two practices in Staffordshire would be worried if their general practitioner used a computer to store medical records. Twenty-seven per cent of patients would be unwilling to speak frankly about personal matters to their general practitioner if he or she used a computer and 7 per cent said that they would change to another doctor. Fifteen per cent stated that their general practitioner already had information about them that they would not want to be included in a computerized record of their medical history. PMID:7328555
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model such as modeling the dynamics, modal analysis, and control system design methodology are reviewed and the need of the interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of the interactive computer graphics.
Aircraft requirements for low/medium density markets
NASA Technical Reports Server (NTRS)
Ausrotas, R.; Dodge, S.; Faulkner, H.; Glendinning, I.; Hays, A.; Simpson, R.; Swan, W.; Taneja, N.; Vittek, J.
1973-01-01
A study was conducted to determine the demand for and the economic factors involved in air transportation in a low and medium density market. The subjects investigated are as follows: (1) industry and market structure, (2) aircraft analysis, (3) economic analysis, (4) field surveys, and (5) computer network analysis. Graphs are included to show the economic requirements and the aircraft performance characteristics.
Khan, Asaduzzaman; Western, Mark
The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief about the usefulness of computers was positively associated with effective computer use. Being a female GP or working in a partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.
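As a hedged illustration of the modelling step only, the sketch below fits a zero-inflated Poisson model with statsmodels to synthetic data whose variable names loosely mirror those in the abstract; the data, coefficients, and inflation rate are invented and carry no clinical meaning.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    # hypothetical survey-style data: count of clinical functions (0-9) per GP
    rng = np.random.default_rng(0)
    n = 480
    df = pd.DataFrame({
        "anxiety":    rng.normal(size=n),       # computer anxiety score
        "usefulness": rng.normal(size=n),       # perceived usefulness score
        "female":     rng.integers(0, 2, n),
        "group_prac": rng.integers(0, 2, n),
    })
    lam = np.exp(0.8 - 0.4 * df.anxiety + 0.5 * df.usefulness
                 + 0.2 * df.female + 0.2 * df.group_prac)
    y = np.where(rng.random(n) < 0.17, 0, rng.poisson(lam)).clip(0, 9)

    X = sm.add_constant(df)
    zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=500)
    print(zip_fit.summary())

The zero-inflation component captures the GPs who use a computer for no clinical function at all, while the Poisson part models how many functions the remaining GPs use.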
Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools
NASA Astrophysics Data System (ADS)
Boe, Bryce A.
There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
Fast normal mode computations of capsid dynamics inspired by resonance
NASA Astrophysics Data System (ADS)
Na, Hyuntae; Song, Guang
2018-07-01
Increasingly more and larger structural complexes are being determined experimentally. The sizes of these systems pose a formidable computational challenge to the study of their vibrational dynamics by normal mode analysis. To overcome this challenge, this work presents a novel resonance-inspired approach. Tests on large shell structures of protein capsids demonstrate that there is a strong resonance between the vibrations of a whole capsid and those of individual capsomeres. We then show how this resonance can be taken advantage of to significantly speed up normal mode computations.
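For context, a conventional (non-resonance) normal mode computation on an elastic network model looks like the sketch below; its dense diagonalization is exactly the step whose cost becomes prohibitive for large capsids and which a resonance-inspired approach aims to avoid. The cutoff, spring constant, and random coordinates are assumptions for illustration.

    import numpy as np

    def anm_modes(coords, cutoff=15.0, gamma=1.0, n_modes=20):
        # Lowest nonzero normal modes of an anisotropic network model:
        # build the Hessian of a pairwise-spring potential and diagonalize it.
        coords = np.asarray(coords, float)
        n = len(coords)
        H = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = d @ d
                if r2 > cutoff ** 2:
                    continue
                block = gamma * np.outer(d, d) / r2
                H[3*i:3*i+3, 3*j:3*j+3] -= block
                H[3*j:3*j+3, 3*i:3*i+3] -= block
                H[3*i:3*i+3, 3*i:3*i+3] += block
                H[3*j:3*j+3, 3*j:3*j+3] += block
        w, v = np.linalg.eigh(H)
        return w[6:6 + n_modes], v[:, 6:6 + n_modes]   # skip 6 rigid-body modes

    # random pseudo-atom coordinates standing in for a capsomere/capsid model
    coords = np.random.default_rng(0).uniform(0.0, 60.0, size=(200, 3))
    eigvals, modes = anm_modes(coords, n_modes=10)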
Numerical computation of orbits and rigorous verification of existence of snapback repellers.
Peng, Chen-Chang
2007-03-01
In this paper we show how analysis from numerical computation of orbits can be applied to prove the existence of snapback repellers in discrete dynamical systems. That is, we present a computer-assisted method to prove the existence of a snapback repeller of a specific map. The existence of a snapback repeller of a dynamical system implies that it has chaotic behavior [F. R. Marotto, J. Math. Anal. Appl. 63, 199 (1978)]. The method is applied to the logistic map and the discrete predator-prey system.
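A non-rigorous numerical construction of the kind that such a computer-assisted proof would then validate can be sketched for the logistic map at mu = 4: the fixed point z = 0.75 is repelling, and a backward orbit chosen to re-approach z yields a point x0 arbitrarily close to z whose forward orbit lands on z, which is the defining data of a snapback repeller. The branch choices, orbit depth, and map parameter below are illustrative only and are not the paper's verified example.

    import numpy as np

    mu = 4.0
    f  = lambda x: mu * x * (1.0 - x)
    df = lambda x: mu * (1.0 - 2.0 * x)

    z = 1.0 - 1.0 / mu                 # nonzero fixed point, z = 0.75
    assert abs(df(z)) > 1.0            # z is repelling: |f'(z)| = 2

    def preimages(y):
        # Both solutions of f(x) = y for the logistic map.
        d = np.sqrt(0.25 - y / mu)
        return 0.5 - d, 0.5 + d

    # Backward orbit p_0 = z, f(p_k) = p_{k-1}: the first step takes the
    # preimage different from z, later steps take the preimage closest to z,
    # so the orbit re-enters any small repelling neighborhood of z.
    orbit = [z]
    lo, hi = preimages(z)
    orbit.append(lo if abs(lo - z) > 1e-12 else hi)    # the "snap-back" point 0.25
    for _ in range(25):
        lo, hi = preimages(orbit[-1])
        orbit.append(min((lo, hi), key=lambda p: abs(p - z)))

    x0, m = orbit[-1], len(orbit) - 1                  # f^m(x0) = z, x0 near z
    deriv = np.prod([df(p) for p in orbit[1:]])        # (f^m)'(x0) must be nonzero
    xk = x0
    for _ in range(m):                                 # forward check
        xk = f(xk)
    print(abs(x0 - z), abs(xk - z), abs(deriv))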
Rare event computation in deterministic chaotic systems using genealogical particle analysis
NASA Astrophysics Data System (ADS)
Wouters, J.; Bouchet, F.
2016-09-01
In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.
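A minimal genealogical particle (cloning/selection) estimator of a small over-threshold probability can be sketched for the Ornstein-Uhlenbeck system mentioned in the abstract; the score function, tilting strength k, block length, and threshold below are assumptions, and the scheme is a generic importance-splitting variant rather than the authors' exact algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    # Ornstein-Uhlenbeck dX = -X dt + sigma dW; estimate p = P(X_T > a).
    dt, T, sigma, a, k = 0.01, 2.0, 0.5, 1.6, 4.0     # k = tilting strength
    N, n_blocks = 10_000, 20
    steps_per_block = int(T / (dt * n_blocks))

    x = np.zeros(N)          # particle positions
    acc = np.zeros(N)        # accumulated score increments along each lineage
    log_norm = 0.0           # log of the product of mean selection weights

    for j in range(n_blocks):
        x_old = x.copy()
        for _ in range(steps_per_block):
            x += -x * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        if j < n_blocks - 1:                          # selection between blocks
            w = np.exp(k * (x - x_old))               # weight by score increment
            log_norm += np.log(w.mean())
            idx = rng.choice(N, size=N, p=w / w.sum())
            x, acc = x[idx], acc[idx] + (x - x_old)[idx]

    # Unbias the tilted ensemble with the weights accumulated along lineages.
    p_hat = np.exp(log_norm) * np.mean((x > a) * np.exp(-k * acc))
    print("estimated over-threshold probability:", p_hat)

Particles whose observable drifts toward the threshold are cloned and the others are discarded, which is what concentrates the computational effort on the rare trajectories.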
Dynamic Analysis Method for Electromagnetic Artificial Muscle Actuator under PID Control
NASA Astrophysics Data System (ADS)
Nakata, Yoshihiro; Ishiguro, Hiroshi; Hirata, Katsuhiro
We have been studying an interior permanent magnet linear actuator for an artificial muscle. This actuator mainly consists of a mover and a stator. The mover is composed of permanent magnets, magnetic cores and a non-magnetic shaft. The stator is composed of 3-phase coils and a back yoke. In this paper, a dynamic analysis method under PID control is proposed, employing the 3-D finite element method (3-D FEM) to compute the dynamic response and current response when the positioning control is active. The computed results show good agreement with measurements from a prototype.
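The coupled 3-D FEM model cannot be reproduced here; the sketch below only illustrates the control side, applying a discrete PID position loop to a lumped mover (mass-damper) stand-in for the actuator. All masses, gains, and time steps are invented for illustration.

    # PID position control of a lumped mover model m*x'' = F_coil - c*x'
    # (a stand-in for the coupled 3-D FEM model described in the abstract)
    m, c = 0.05, 0.2                 # kg, N*s/m  (illustrative values)
    kp, ki, kd = 40.0, 200.0, 1.5    # PID gains (illustrative)
    dt, T, x_ref = 1e-4, 0.2, 0.01   # time step [s], duration [s], target [m]

    x = v = integ = 0.0
    prev_err = x_ref
    for step in range(int(T / dt)):
        err = x_ref - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        F = kp * err + ki * integ + kd * deriv      # coil force command
        prev_err = err
        a = (F - c * v) / m                         # mover dynamics
        v += a * dt                                 # semi-implicit Euler update
        x += v * dt
    print(f"final position: {x*1e3:.3f} mm (target {x_ref*1e3:.1f} mm)")

In the paper's setting the force command would instead drive the 3-D FEM field solution at every step, which is what makes the coupled analysis expensive.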
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
Computational Study of Hypersonic Boundary Layer Stability on Cones
NASA Astrophysics Data System (ADS)
Gronvall, Joel Edwin
Due to the complex nature of boundary layer laminar-turbulent transition in hypersonic flows and the resultant effect on the design of re-entry vehicles, there remains considerable interest in developing a deeper understanding of the underlying physics. To that end, the use of experimental observations and computational analysis in a complementary manner will provide the greatest insights. It is the intent of this work to provide such an analysis for two ongoing experimental investigations. The first focuses on the hypersonic boundary layer transition experiments for a slender cone that are being conducted at JAXA's free-piston shock tunnel HIEST facility. Of particular interest are the measurements of disturbance frequencies associated with transition at high enthalpies. The computational analysis provided for these cases included two-dimensional CFD mean flow solutions for use in boundary layer stability analyses. The disturbances in the boundary layer were calculated using the linear parabolized stability equations. Estimates for transition locations, comparisons of measured disturbance frequencies and computed frequencies, and a determination of the type of disturbances present were made. It was found that, for the cases where the disturbances were measured at locations where the flow was still laminar but nearly transitional, the highly amplified disturbances showed reasonable agreement with the computations. Additionally, an investigation of the effects of finite-rate chemistry and vibrational excitation on flows over cones was conducted for a set of theoretical operational conditions at the HIEST facility. The second study focuses on transition in three-dimensional hypersonic boundary layers, and for this the cone at angle of attack experiments being conducted at the Boeing/AFOSR Mach-6 quiet tunnel at Purdue University were examined. Specifically, the effect of surface roughness on the development of the stationary crossflow instability is investigated in this work. One standard mean flow solution and two direct numerical simulations of a slender cone at an angle of attack were computed. The direct numerical simulations included a digitally-filtered, randomly distributed surface roughness and were performed using a high-order, low-dissipation numerical scheme on appropriately resolved grids. Comparisons with experimental observations showed excellent qualitative agreement. Comparisons with similar previous computational work were also made and showed agreement in the wavenumber range of the most unstable crossflow modes.
A Computational Clonal Analysis of the Developing Mouse Limb Bud
Marcon, Luciano; Arqués, Carlos G.; Torres, Miguel S.; Sharpe, James
2011-01-01
A comprehensive spatio-temporal description of the tissue movements underlying organogenesis would be an extremely useful resource to developmental biology. Clonal analysis and fate mappings are popular experiments to study tissue movement during morphogenesis. Such experiments allow cell populations to be labeled at an early stage of development and to follow their spatial evolution over time. However, disentangling the cumulative effects of the multiple events responsible for the expansion of the labeled cell population is not always straightforward. To overcome this problem, we develop a novel computational method that combines accurate quantification of 2D limb bud morphologies and growth modeling to analyze mouse clonal data of early limb development. Firstly, we explore various tissue movements that match experimental limb bud shape changes. Secondly, by comparing computational clones with newly generated mouse clonal data we are able to choose and characterize the tissue movement map that better matches experimental data. Our computational analysis produces for the first time a two dimensional model of limb growth based on experimental data that can be used to better characterize limb tissue movement in space and time. The model shows that the distribution and shapes of clones can be described as a combination of anisotropic growth with isotropic cell mixing, without the need for lineage compartmentalization along the AP and PD axis. Lastly, we show that this comprehensive description can be used to reassess spatio-temporal gene regulations taking tissue movement into account and to investigate PD patterning hypothesis. PMID:21347315
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Yuko, James; Motil, Brian
2009-01-01
When the crew exploration vehicle (CEV) is launched, the spacecraft adaptor (SA) fairings that cover the CEV service module (SM) are exposed to aero heating. Thermal analysis is performed to compute the fairing temperatures and to investigate whether the temperatures are within the material limits for the nominal ascent aero heating case. Heating rates from the Thermal Environment (TE) 3 aero heating analysis computed by engineers at Marshall Space Flight Center (MSFC) are used in the thermal analysis. Both MSC Patran 2007r1b/Pthermal and C&R Thermal Desktop 5.1/Sinda models are built to validate each other. The numerical results are also compared with those reported by Lockheed Martin (LM) and show reasonably good agreement.
Techniques of EMG signal analysis: detection, processing, classification and applications
Hussain, M.S.; Mohd-Yasin, F.
2006-01-01
Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of EMG signals and their analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694
Simultaneous analysis and design
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1984-01-01
Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
Computational methods for structural load and resistance modeling
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Millwater, H. R.; Harren, S. V.
1991-01-01
An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV +) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo Analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
On computations of variance, covariance and correlation for interval data
NASA Astrophysics Data System (ADS)
Kishida, Masako
2017-02-01
In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
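The paper's reduction to skewed structured singular values is not sketched here; for small data sets the endpoints of the sample variance can instead be bracketed directly, since the variance is convex in the data vector (its maximum over the box of intervals sits at a vertex, and its minimum is a small convex program). The interval data below are invented.

    import numpy as np
    from itertools import product
    from scipy.optimize import minimize

    # interval data: each measurement known only to lie in [lo_i, hi_i]
    lo = np.array([1.0, 2.5, 3.1, 4.0])
    hi = np.array([1.4, 2.9, 3.6, 4.8])

    sample_var = lambda x: np.var(x, ddof=1)

    # Upper endpoint: the maximum of a convex function over a box is attained
    # at a vertex, so enumerate the 2^n endpoint combinations (small n only).
    v_max = max(sample_var(np.array(v)) for v in product(*zip(lo, hi)))

    # Lower endpoint: minimizing a convex function over a box is easy.
    res = minimize(sample_var, (lo + hi) / 2, bounds=list(zip(lo, hi)))
    v_min = res.fun

    print(f"sample variance range: [{v_min:.4f}, {v_max:.4f}]")

The exponential vertex enumeration is exactly the brute force that becomes infeasible as the dimension grows, which is the NP-hardness the paper's ν-based bounds are meant to sidestep.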
Ravichandran, Srikanth; Michelucci, Alessandro; del Sol, Antonio
2018-01-01
Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common causes of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network-based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as the tumor necrosis factor alpha (TNF-α)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region-specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases. PMID:29551980
Gordon, H R; Castaño, D J
1989-04-01
For measurement of aerosols over the ocean, the total radiance L(t) backscattered from the top of a stratified atmosphere which contains both stratospheric and tropospheric aerosols of various types has been computed. A similar computation is carried out for an aerosol-free atmosphere yielding the Rayleigh scattered radiance L(r). The difference L(t) - L(r) is shown to be linearly related to the radiance L(as), which the aerosol would produce in the single scattering approximation. This greatly simplifies the application of aerosol models to aerosol analysis by satellite since adding to, or in some way changing, the aerosol model requires no additional multiple scattering computations. In fact, the only multiple scattering computations required for aerosol analysis are those for determining L(r), which can be performed once and for all. The computations are explicitly applied to Band 4 of the CZCS, which, because of its high radiometric sensitivity and excellent calibration, is ideal for studying aerosols over the ocean. Specifically, the constant A in the relationship L(as) = A^(-1)[L(t) - L(r)] is given as a function of position along the scan for four typical orbital-solar position scenarios. The computations show that L(as) can be retrieved from L(t) - L(r) with an average error of no more than 5-7% except at the very edges of the scan.
Sabti, Ahmed Abdulateef; Chaichan, Rasha Sami
2014-01-01
This study examines the attitudes of Saudi Arabian high school students toward the use of computer technologies in learning English. The study also discusses the possible barriers that affect and limit the actual usage of computers. A quantitative approach is applied in this research, which involved 30 Saudi Arabian students of a high school in Kuala Lumpur, Malaysia. The respondents comprised 15 males and 15 females aged between 16 and 18 years. Two instruments, namely the Scale of Attitude toward Computer Technologies (SACT) and the Barriers affecting Students' Attitudes and Use (BSAU), were used to collect data. The Technology Acceptance Model (TAM) of Davis (1989) was utilized. The analysis of the study revealed gender differences in attitudes toward the use of computer technologies in learning English. Female students showed more positive attitudes towards the use of computer technologies in learning English than males. Both male and female participants demonstrated a high and positive perception of perceived Usefulness and perceived Ease of Use of computer technologies in learning English. Three barriers that affected and limited the use of computer technologies in learning English were identified by the participants. These barriers are skill, equipment, and motivation. Among these barriers, skill had the highest effect, whereas motivation showed the least effect.
Computer Activities for Persons With Dementia.
Tak, Sunghee H; Zhang, Hongmei; Patel, Hetal; Hong, Song Hee
2015-06-01
The study examined participants' experience and individual characteristics during a 7-week computer activity program for persons with dementia. The descriptive study with mixed methods design collected 612 observational logs of computer sessions from 27 study participants, including individual interviews before and after the program. Quantitative data analysis included descriptive statistics, correlational coefficients, t-test, and chi-square. Content analysis was used to analyze qualitative data. Each participant averaged 23 sessions and 591 min over 7 weeks. Computer activities included slide shows with music, games, internet use, and emailing. On average, they had a high score of intensity in engagement per session. Women attended significantly more sessions than men. Higher education level was associated with a higher number of different activities used per session and more time spent on online games. Older participants felt more tired. Feeling tired was significantly correlated with a higher number of weeks with only one session attendance per week. More anticholinergic medications taken by participants were significantly associated with a higher percentage of sessions with disengagement. The findings were significant at p < .05. Qualitative content analysis indicated that tailoring computer activities appropriate to individuals' needs and functioning is critical. All participants needed technical assistance. A framework for tailoring computer activities may provide guidance on developing and maintaining treatment fidelity of tailored computer activity interventions among persons with dementia. Practice guidelines and education protocols may assist caregivers and service providers to integrate computer activities into homes and aging services settings.
A network of automatic atmospherics analyzer
NASA Technical Reports Server (NTRS)
Schaefer, J.; Volland, H.; Ingmann, P.; Eriksson, A. J.; Heydt, G.
1980-01-01
The design and function of an atmospheric analyzer which uses a computer are discussed. Mathematical models which show the method of measurement are presented. The data analysis and recording procedures of the analyzer are discussed.
Learning Principal Component Analysis by Using Data from Air Quality Networks
ERIC Educational Resources Information Center
Perez-Arribas, Luis Vicente; Leon-González, María Eugenia; Rosales-Conrado, Noelia
2017-01-01
With the final objective of using computational and chemometrics tools in the chemistry studies, this paper shows the methodology and interpretation of the Principal Component Analysis (PCA) using pollution data from different cities. This paper describes how students can obtain data on air quality and process such data for additional information…
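A classroom-style computation along these lines might look like the sketch below, with a small invented table of pollutant readings standing in for real air quality network data.

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # invented pollutant readings: rows = cities, columns = pollutants
    data = pd.DataFrame(
        [[23, 41, 12, 0.8], [35, 58, 20, 1.1], [18, 30, 9, 0.6],
         [40, 66, 25, 1.3], [27, 45, 15, 0.9]],
        columns=["NO2", "O3", "PM10", "CO"],
        index=["City A", "City B", "City C", "City D", "City E"])

    pca = PCA(n_components=2)
    scores = pca.fit_transform(StandardScaler().fit_transform(data))
    print("explained variance ratio:", pca.explained_variance_ratio_)
    print("loadings (components x pollutants):")
    print(pca.components_)

Plotting the scores for the first two components then lets students see which cities group together and which pollutants drive the separation.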
ERIC Educational Resources Information Center
Tompson, George H.; Dass, Parshotam
2000-01-01
Investigates the relative contribution of computer simulations and case studies for improving undergraduate students' self-efficacy in strategic management courses. Results of pre-and post-test data, regression analysis, and analysis of variance show that simulations result in significantly higher improvement in self-efficacy than case studies.…
Three-dimensional thermal analysis of a high-level waste repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altenbach, T.J.
1979-04-01
The analysis used the TRUMP computer code to evaluate the thermal fields for six repository scenarios that studied the effects of room ventilation, room backfill, and repository thermal diffusivity. The results for selected nodes are presented as plots showing temperature as a function of time. 15 figures, 6 tables.
Analysis of the DFP/AFCS Systems for Compensating Gravity Distortions on the 70-Meter Antenna
NASA Technical Reports Server (NTRS)
Imbriale, William A.; Hoppe, Daniel J.; Rochblatt, David
2000-01-01
This paper presents the theoretical computations showing the expected performances for both systems. The basic analysis tool is a Physical Optics reflector analysis code that was ported to a parallel computer for faster execution times. There are several steps involved in computing the RF performance of the various systems. 1 . A model of the RF distortions of the main reflector is required. This model is based upon measured holography maps of the 70-meter antenna obtained at 3 elevation angles. The holography maps are then processed (using an appropriate gravity mechanical model of the dish) to provide surface distortion maps at all elevation angles. 2. From the surface distortion maps, ray optics is used to determine the theoretical shape of the DFP that will exactly phase compensate the distortions. 3. From the theoretical shape and a NASTRAN mechanical model of the plate, the actuator positions that generate a surface that provides the best RMS fit to the theoretical model are selected. Using the actuator positions and the NASTRAN model provides an accurate description of the actual mirror shape. 4. Starting from the mechanical drawings of the feed, a computed RF feed pattern is generated. This pattern is expanded into a set of spherical wave modes so that a complete near field analysis of the reflector system can be obtained. 5. For the array feed, the excitation coefficients that provide the maximum gain are computed using a phase conjugate technique. The basic experimental geometry consisted of a dual shaped 70-meter antenna system; a refocusing ellipse, a DFP and an array feed system. To provide physical insight to the systems performance, focal plane field plots are presented at several elevations. Curves of predicted performance are shown for the DFP system, monopulse tracking system, AFCS and combined DFP/AFCS system. The calculated results show that the combined DFP/AFCS system is capable of recovering the majority of the gain lost due to gravity distortion.
Classification of breast tissue in mammograms using efficient coding.
Costa, Daniel D; Campos, Lúcio F; Barros, Allan K
2011-06-24
Female breast cancer is the major cause of death by cancer in western countries. Efforts in Computer Vision have been made in order to improve the diagnostic accuracy by radiologists. Some methods of lesion diagnosis in mammogram images were developed based on the technique of principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used for computer vision applications and modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5090 regions of interest from mammograms. The results show that the best rates of success reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the model of efficient coding presented here reached up to 90.07%. Altogether, the results presented demonstrate that independent component analysis successfully performed the efficient coding needed to discriminate mass from non-mass tissues. In addition, we have observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
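The paper's exact pipeline is not available here; the sketch below only illustrates the general pattern of learning an efficient-coding basis (PCA or ICA) and feeding the resulting features to linear discriminant analysis, with random arrays standing in for the regions of interest and dummy labels in place of the mass/non-mass ground truth.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # synthetic stand-in for mammogram regions of interest (flattened patches)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 32 * 32))       # 400 ROIs of 32x32 pixels
    y = rng.integers(0, 2, 400)               # 1 = mass, 0 = non-mass (dummy labels)

    for name, coder in [("PCA", PCA(n_components=40)),
                        ("ICA", FastICA(n_components=40, max_iter=500))]:
        feats = coder.fit_transform(X)                       # efficient-coding features
        acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
        print(f"{name} + LDA cross-validated accuracy: {acc:.3f}")

On the random data above the accuracy hovers near chance, as it should; the interesting numbers in the paper come from applying the same pattern to real ROIs.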
A critical analysis of computational protein design with sparse residue interaction graphs
Georgiev, Ivelin S.
2017-01-01
Protein design algorithms enumerate a combinatorial number of candidate structures to compute the Global Minimum Energy Conformation (GMEC). To efficiently find the GMEC, protein design algorithms must methodically reduce the conformational search space. By applying distance and energy cutoffs, the protein system to be designed can thus be represented using a sparse residue interaction graph, where the number of interacting residue pairs is less than all pairs of mutable residues, and the corresponding GMEC is called the sparse GMEC. However, ignoring some pairwise residue interactions can lead to a change in the energy, conformation, or sequence of the sparse GMEC vs. the original or the full GMEC. Despite the widespread use of sparse residue interaction graphs in protein design, the above mentioned effects of their use have not been previously analyzed. To analyze the costs and benefits of designing with sparse residue interaction graphs, we computed the GMECs for 136 different protein design problems both with and without distance and energy cutoffs, and compared their energies, conformations, and sequences. Our analysis shows that the differences between the GMECs depend critically on whether or not the design includes core, boundary, or surface residues. Moreover, neglecting long-range interactions can alter local interactions and introduce large sequence differences, both of which can result in significant structural and functional changes. Designs on proteins with experimentally measured thermostability show it is beneficial to compute both the full and the sparse GMEC accurately and efficiently. To this end, we show that a provable, ensemble-based algorithm can efficiently compute both GMECs by enumerating a small number of conformations, usually fewer than 1000. This provides a novel way to combine sparse residue interaction graphs with provable, ensemble-based algorithms to reap the benefits of sparse residue interaction graphs while avoiding their potential inaccuracies. PMID:28358804
Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation
Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; ...
2016-11-24
Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.
NASA Technical Reports Server (NTRS)
Fleming, David P.; Poplawski, J. V.
2002-01-01
Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for nonlinear speed- and load-dependent bearing stiffness) as compared to conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running five-degree-of-freedom computer code able to calculate high-speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
Comparative analysis of two discretizations of Ricci curvature for complex networks.
Samal, Areejit; Sreejith, R P; Gu, Jiao; Liu, Shiping; Saucan, Emil; Jost, Jürgen
2018-06-05
We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus, the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially, in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices.
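For readers unfamiliar with the discrete notions being compared, the edge-level formulas for unweighted graphs are compact enough to sketch directly. The following minimal Python sketch (not the authors' code) computes the combinatorial Forman-Ricci curvature of every edge, 4 minus the endpoint degrees, and its augmented variant that adds a contribution for each triangle containing the edge; the use of networkx and the example graph are illustrative assumptions.

```python
import networkx as nx

def forman_curvature(G, augmented=False):
    """Combinatorial Forman-Ricci curvature of every edge of an unweighted,
    undirected graph G; the augmented variant also counts the triangles
    (two-dimensional simplices) incident to the edge."""
    curv = {}
    for u, v in G.edges():
        f = 4 - G.degree(u) - G.degree(v)
        if augmented:
            # each triangle containing (u, v) contributes +3
            triangles = len(set(G.neighbors(u)) & set(G.neighbors(v)))
            f += 3 * triangles
        curv[(u, v)] = f
    return curv

if __name__ == "__main__":
    G = nx.karate_club_graph()            # small benchmark network
    plain = forman_curvature(G)
    augmented = forman_curvature(G, augmented=True)
    for e in list(G.edges())[:5]:
        print(e, plain[e], augmented[e])
```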
Computer analysis of sound recordings from two Anasazi sites in northwestern New Mexico
NASA Astrophysics Data System (ADS)
Loose, Richard
2002-11-01
Sound recordings were made at a natural outdoor amphitheater in Chaco Canyon and in a reconstructed great kiva at Aztec Ruins. Recordings included computer-generated tones and swept sine waves, classical concert flute, Native American flute, conch shell trumpet, and prerecorded music. Recording equipment included an analog tape deck, a digital minidisk recorder, and direct digital recording to a laptop computer disk. Microphones and geophones were used as transducers. The natural amphitheater lies between the ruins of Pueblo Bonito and Chetro Ketl. It is a semicircular arc in a sandstone cliff measuring 500 ft. wide and 75 ft. high. The radius of the arc was verified with aerial photography, and an acoustic ray trace was generated using CAD software. The arc is in an overhanging cliff face and brings distant sounds to a line focus. Along this line, there are unusual acoustic effects at conjugate foci. Time history analysis of recordings from both sites showed that the 60-dB reverberation decay time ranged from 1.8 to 2.0 s, nearly ideal for public performances of music. Echoes from the amphitheater were perceived to be upshifted in pitch, but this was not seen in FFT analysis. Geophones placed on the floor of the great kiva showed a resonance at 95 Hz.
2012-01-01
Background: Despite computational challenges, elucidating conformations that a protein system assumes under physiologic conditions for the purpose of biological activity is a central problem in computational structural biology. While these conformations are associated with low energies in the energy surface that underlies the protein conformational space, few existing conformational search algorithms focus on explicitly sampling low-energy local minima in the protein energy surface. Methods: This work proposes a novel probabilistic search framework, PLOW, that explicitly samples low-energy local minima in the protein energy surface. The framework combines algorithmic ingredients from evolutionary computation and computational structural biology to effectively explore the subspace of local minima. A greedy local search maps a conformation sampled in conformational space to a nearby local minimum. A perturbation move jumps out of a local minimum to obtain a new starting conformation for the greedy local search. The process repeats in an iterative fashion, resulting in a trajectory-based exploration of the subspace of local minima. Results and conclusions: The analysis of PLOW's performance shows that, by navigating only the subspace of local minima, PLOW is able to sample conformations near a protein's native structure either more effectively than or as well as state-of-the-art methods that focus on reproducing the native structure for a protein system. Analysis of the actual subspace of local minima shows that PLOW samples this subspace more effectively than a naive sampling approach. Additional theoretical analysis reveals that the perturbation function employed by PLOW is key to its ability to sample a diverse set of low-energy conformations. This analysis also suggests directions for further research and novel applications for the proposed framework. PMID:22759582
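The perturb-then-minimize loop described above can be sketched generically. The following toy Python sketch is not PLOW and uses a one-dimensional stand-in energy function rather than a protein energy surface; the step sizes, perturbation magnitude, and energy function are placeholders chosen only to illustrate the structure of the search (greedy descent to a local minimum, then a jump to a new starting point).

```python
import random

def energy(x):
    # toy 1-D multimodal surface standing in for a protein energy function
    return 0.1 * x ** 4 - 2.0 * x ** 2 + 0.5 * x

def greedy_local_search(x, step=0.05, max_iter=500):
    """Map a sampled point to a nearby local minimum by greedy descent."""
    for _ in range(max_iter):
        x_new = min((x - step, x, x + step), key=energy)
        if x_new == x:
            break
        x = x_new
    return x

def iterated_local_search(n_jumps=20, perturb=2.0, seed=1):
    """Greedy minimization followed by a random perturbation that jumps
    out of the current local minimum, repeated iteratively."""
    random.seed(seed)
    x = random.uniform(-5, 5)
    minima = []
    for _ in range(n_jumps):
        x = greedy_local_search(x)
        minima.append((x, energy(x)))
        x += random.uniform(-perturb, perturb)   # perturbation move
    return minima

if __name__ == "__main__":
    for x, e in iterated_local_search():
        print(f"local minimum at x={x:+.2f}, energy={e:.2f}")
```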
Computational analysis of high resolution unsteady airloads for rotor aeroacoustics
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.
1994-01-01
The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.
Zheng, Xiujuan; Wei, Wentao; Huang, Qiu; Song, Shaoli; Wan, Jieqing; Huang, Gang
2017-01-01
The objective and quantitative analysis of longitudinal single photon emission computed tomography (SPECT) images is significant for the treatment monitoring of brain disorders. Therefore, a computer-aided analysis (CAA) method is introduced to extract a change-rate map (CRM) as a parametric image for quantifying the changes of regional cerebral blood flow (rCBF) in longitudinal SPECT brain images. The performance of the CAA-CRM approach in treatment monitoring is evaluated by computer simulations and clinical applications. The results of the computer simulations show that the derived CRMs have high similarities with their ground truths when the lesion size is larger than the system spatial resolution and the change rate is higher than 20%. In clinical applications, the CAA-CRM approach is used to assess the treatment of 50 patients with brain ischemia. The results demonstrate that the CAA-CRM approach achieves 93.4% accuracy in localizing recovered regions. Moreover, the quantitative indexes of recovered regions derived from the CRM are all significantly different among the groups and highly correlated with the experienced clinical diagnosis. In conclusion, the proposed CAA-CRM approach provides a convenient solution for generating a parametric image and deriving quantitative indexes from longitudinal SPECT brain images for treatment monitoring.
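The abstract does not give the exact definition of the change-rate map, so the following sketch assumes a simple voxelwise relative change between co-registered baseline and follow-up images; the published CRM computation may differ, and the 20% threshold is taken from the abstract only for illustration.

```python
import numpy as np

def change_rate_map(baseline, followup, eps=1e-6):
    """Voxelwise relative change between two co-registered longitudinal
    images (an assumed definition; the published CRM may differ)."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    return (followup - baseline) / (baseline + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bol = rng.uniform(50, 100, size=(8, 8, 8))   # synthetic baseline rCBF values
    eol = bol.copy()
    eol[2:5, 2:5, 2:5] *= 1.3                    # synthetic 30% "recovered" region
    crm = change_rate_map(bol, eol)
    recovered = crm > 0.2                        # 20% threshold, for illustration
    print("voxels above 20% change:", int(recovered.sum()))
```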
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
NASA Astrophysics Data System (ADS)
Schöbi, Roland; Sudret, Bruno
2017-06-01
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch
2017-06-15
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing.
Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang
2017-07-24
With the rapid development of big data and the Internet of Things (IoT), the number of networking devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for the scheme. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.
Preferred computer activities among individuals with dementia: a pilot study.
Tak, Sunghee H; Zhang, Hongmei; Hong, Song Hee
2015-03-01
Computers offer new activities that are easily accessible, cognitively stimulating, and enjoyable for individuals with dementia. The current descriptive study examined preferred computer activities among nursing home residents with different severity levels of dementia. A secondary data analysis was conducted using activity observation logs from 15 study participants with dementia (severe = 115 logs, moderate = 234 logs, and mild = 124 logs) who participated in a computer activity program. Significant differences existed in preferred computer activities among groups with different severity levels of dementia. Participants with severe dementia spent significantly more time watching slide shows with music than those with both mild and moderate dementia (F [2,12] = 9.72, p = 0.003). Preference in playing games also differed significantly across the three groups. It is critical to consider individuals' interests and functional abilities when computer activities are provided for individuals with dementia. A practice guideline for tailoring computer activities is detailed.
Impedance computations and beam-based measurements: A problem of discrepancy
NASA Astrophysics Data System (ADS)
Smaluk, Victor
2018-04-01
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
Detection of Organophosphorus Pesticides with Colorimetry and Computer Image Analysis.
Li, Yanjie; Hou, Changjun; Lei, Jincan; Deng, Bo; Huang, Jing; Yang, Mei
2016-01-01
Organophosphorus pesticides (OPs) represent a very important class of pesticides that are widely used in agriculture because of their relatively high performance and moderate environmental persistence; hence, the sensitive and specific detection of OPs is highly significant. Based on the inhibition of acetylcholinesterase (AChE) by inhibitors, including OPs and carbamates, a colorimetric analysis was used for detection of OPs with computer image analysis of color density in the CMYK (cyan, magenta, yellow, and black) color space and non-linear modeling. The results showed that yellow intensity gradually weakened as the concentration of dichlorvos increased. The quantitative analysis of dichlorvos was achieved by Artificial Neural Network (ANN) modeling, and the results showed that the established model had good predictive ability across training and prediction sets. Real cabbage samples containing dichlorvos were analyzed by both colorimetry and gas chromatography (GC). The results showed that there was no significant difference between colorimetry and GC (P > 0.05). Experiments on accuracy, precision, and repeatability revealed good performance for detection of OPs. AChE can also be inhibited by carbamates, and therefore this method has potential applications in real samples for OPs and carbamates because of its high selectivity and sensitivity.
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
CMS results in the Combined Computing Readiness Challenge CCRC'08
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Bauerdick, L.; CMS Collaboration
2009-12-01
During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the Computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - was also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at a sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focusing on the distributed workflows - are presented and discussed.
Characterization of shredded television scrap and implications for materials recovery.
Cui, Jirang; Forssberg, Eric
2007-01-01
Characterization of TV scrap was carried out by using a variety of methods, such as chemical analysis, particle size and shape analysis, liberation degree analysis, thermogravimetric analysis, sink-float test, and IR spectrometry. A comparison of TV scrap, personal computer scrap, and printed circuit board scrap shows that the content of non-ferrous metals and precious metals in TV scrap is much lower than that in personal computer scrap or printed circuit board scrap. It is expected that recycling of TV scrap will not be cost-effective by utilizing conventional manual disassembly. The result of particle shape analysis indicates that the non-ferrous metal particles in TV scrap formed as a variety of shapes; it is much more heterogeneous than that of plastics and printed circuit boards. Furthermore, the separability of TV scrap using density-based techniques was evaluated by the sink-float test. The result demonstrates that a high recovery of copper could be obtained by using an effective gravity separation process. Identification of plastics shows that the major plastic in TV scrap is high impact polystyrene. Gravity separation of plastics may encounter some challenges in separation of plastics from TV scrap because of specific density variations.
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography
Jørgensen, J. S.; Sidky, E. Y.
2015-01-01
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.
Jørgensen, J S; Sidky, E Y
2015-06-13
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed with marker-based systems, which allow the trajectories of markers placed on specific points of the human body to be reconstructed with high accuracy. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction in computational cost with no significant reduction in tracking accuracy.
Sawicka, Monika; Bedini, Rossella; Pecci, Raffaella; Pameijer, Cornelis Hans; Kmiec, Zbigniew
2012-01-01
The purpose of this study was to demonstrate the potential application of micro-computed tomography in the morphometric analysis of root resorption in extracted human first premolars subjected to orthodontic force. In one patient treated in the orthodontic clinic, two mandibular first premolars subjected to orthodontic force for 4 weeks and one control tooth were selected for micro-computed tomographic analysis. The hardware device used in this study was a desktop X-ray microfocus CT scanner (SkyScan 1072). The morphology of the root surfaces was assessed with the TView and Computer Tomography Analyzer (CTAn) software packages (SkyScan, bvba), which allowed analysis of all microscans, identification of root resorption craters, and measurement of their length, width, and volume. The microscans showed in detail the surface morphology of the investigated teeth. The analysis of the microscans allowed the detection of three root resorption cavities in each of the orthodontically moved teeth and only one resorption crater in the control tooth. The volumes of the resorption craters in the orthodontically treated teeth were much larger than in the control tooth. Micro-computed tomography is a reproducible technique for the three-dimensional, non-invasive assessment of root morphology ex vivo. The TView and CTAn software packages are useful for accurate morphometric measurement of root resorption.
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward
2005-06-01
The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
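The number-theoretic post-processing at the heart of Shor's algorithm can be illustrated classically: once the order r of a modulo N is known, the factors follow from greatest common divisors. In the sketch below the order is found by brute force, which is exactly the step a quantum computer would replace; the choice of N values is illustrative.

```python
from math import gcd
import random

def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force here.
    In Shor's algorithm this step is performed by the quantum computer."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order_finding(N, seed=2):
    """Classical emulation of Shor's post-processing for a small composite N."""
    random.seed(seed)
    while True:
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d > 1:                    # lucky guess already shares a factor with N
            return d, N // d
        r = order(a, N)
        if r % 2:                    # need an even order
            continue
        y = pow(a, r // 2, N)
        if y == N - 1:               # a**(r/2) = -1 (mod N) is a failure case
            continue
        p = gcd(y - 1, N)
        if 1 < p < N:
            return p, N // p

if __name__ == "__main__":
    print(factor_via_order_finding(15))   # e.g. (3, 5)
    print(factor_via_order_finding(21))   # e.g. (3, 7)
```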
Bonham, Kevin S; Stefan, Melanie I
2017-10-01
While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.
Morphometric analysis - Cone beam computed tomography to predict bone quality and quantity.
Hohlweg-Majert, B; Metzger, M C; Kummer, T; Schulze, D
2011-07-01
Modified quantitative computed tomography is a method used to predict bone quality and quantify the bone mass of the jaw. The aim of this study was to determine whether bone quantity or quality was detected by cone beam computed tomography (CBCT) combined with image analysis. MATERIALS AND PROCEDURES: Different measurements recorded on two phantoms (Siemens phantom, Comac phantom) were evaluated on images taken with the Somatom VolumeZoom (Siemens Medical Solutions, Erlangen, Germany) and the NewTom 9000 (NIM s.r.l., Verona, Italy) in order to calculate a calibration curve. The spatial relationships of six sample cylinders and the repositioning from four pig skull halves relative to adjacent defined anatomical structures were assessed by means of three-dimensional visualization software. The calibration curves for computed tomography (CT) and CBCT using the Siemens phantom showed a linear correlation in both modalities between the Hounsfield units (HU) and bone morphology. A correction factor for CBCT was calculated. Exact information about the micromorphology of the bone cylinders was only available using micro-computed tomography. Cone beam computed tomography is a suitable choice for analysing bone mass, but it does not give any information about bone quality.
Chu, Adeline; Mastel-Smith, Beth
2010-01-01
Technology has a great impact on nursing practice. With the increasing numbers of older Americans using computers and the Internet in recent years, nurses have the capability to deliver effective and efficient health education to their patients and the community. Based on the theoretical framework of Bandura's self-efficacy theory, the pilot project reported findings from a 5-week computer course on Internet health searches in older adults, 65 years or older, at a senior activity learning center. Twelve participants were recruited and randomized to either the intervention or the control group. Measures of computer anxiety, computer confidence, and computer self-efficacy scores were analyzed at baseline, at the end of the program, and 6 weeks after the completion of the program. Analysis was conducted with repeated-measures analysis of variance. Findings showed participants who attended a structured computer course on Internet health information retrieval reported lowered anxiety and increased confidence and self-efficacy at the end of the 5-week program and 6 weeks after the completion of the program as compared with participants who were not in the program. The study demonstrated that a computer course can help reduce anxiety and increase confidence and self-efficacy in online health searches in older adults.
Laboratory modeling and analysis of aircraft-lightning interactions
NASA Technical Reports Server (NTRS)
Turner, C. D.; Trost, T. F.
1982-01-01
Modeling studies of the interaction of a delta-wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, was subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements were made of the resulting transient electromagnetic fields using time-derivative sensors. The sensor outputs were sampled and digitized by computer. The noise level was reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields included Fourier transformation and the computation of transfer functions for the model. Prony analysis was also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those from in-flight direct-strike data usually show lower damping in the in-flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.
NASA Astrophysics Data System (ADS)
Singh, Ravindra Kumar; Singh, Ashok Kumar
2017-02-01
A new flavanol-2,4-dinitrophenylhydrazone (FDNP) was synthesized and its structure was confirmed by FT-IR, FT-Raman, 1H NMR, mass spectrometry, and elemental analysis. All quantum chemical calculations were carried out at the density functional theory (DFT) level with the B3LYP functional using the 6-311++G(d,p) atomic basis set. UV-Vis absorption spectra for the singlet-singlet transitions, computed for the fully optimized ground-state geometry using time-dependent density functional theory (TD-DFT) with the CAM-B3LYP functional, were found to be consistent with the experimental findings. Analysis of the vibrational (FT-IR and FT-Raman) spectra and their assignments was done by computing the Potential Energy Distribution (PED) using Gar2ped. HOMO-LUMO analysis was performed and reactivity descriptors were calculated. The calculated global electrophilicity index (ω = 7.986 eV) shows the molecule to be a strong electrophile. The 1H NMR chemical shifts calculated with the gauge-including atomic orbital (GIAO) approach show agreement with experimental data. Various intramolecular interactions were analysed by the AIM approach. The DFT-computed total first static hyperpolarizability (β0 = 189.03 × 10^-30 esu) indicates that the title molecule can be used as an attractive future NLO material. Solvent-induced effects on the NLO properties, studied using the self-consistent reaction field (SCRF) method, show that the β0 value increases with increasing solvent polarity. To study the thermal behaviour of the title molecule, thermodynamic properties such as heat capacity, entropy, and enthalpy change at various temperatures have been calculated and reported. Molecular docking results suggest the title molecule to be a potential kinase inhibitor that might be used in the future for the design of new anticancer drugs.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turnaround times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
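The scalability argument can be made concrete with the basic Amdahl's-law estimate; the sketch below is illustrative only, the parallel fractions are not taken from the study, and the paper's analysis uses the extension of Amdahl's law for symmetric multicore chips, which adds per-chip assumptions omitted here.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's-law speedup for a workload whose parallelizable share is
    `parallel_fraction`, run on `n_cores` identical cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

if __name__ == "__main__":
    # Illustrative only: a near 12x speedup on 12 cores implies an almost
    # perfectly parallel workload; a 90%-parallel one saturates much earlier.
    for p in (0.90, 0.95, 0.99):
        print(p, [round(amdahl_speedup(p, n), 1) for n in (12, 48, 192)])
```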
Lari, Nicoletta; Cavallini, Michela; Rindi, Laura; Iona, Elisabetta; Fattorini, Lanfranco; Garzelli, Carlo
1998-01-01
All but 2 of 63 Mycobacterium avium isolates from distinct geographic areas of Italy exhibited markedly polymorphic, multibanded IS1245 restriction fragment length polymorphism (RFLP) patterns; 2 isolates showed the low-number banding pattern typical of bird isolates. By computer analysis, 41 distinct IS1245 patterns and 10 clusters of essentially identical strains were detected; 40% of the 63 isolates showed genetic relatedness, suggesting the existence of a predominant AIDS-associated IS1245 RFLP pattern. PMID:9817900
Numerosity as a topological invariant.
Kluth, Tobias; Zetzsche, Christoph
2016-01-01
The ability to quickly recognize the number of objects in our environment is a fundamental cognitive function. However, it is far from clear which computations and which actual neural processing mechanisms are used to provide us with such a skill. Here we try to provide a detailed and comprehensive analysis of this issue, which comprises both the basic mathematical foundations and the peculiarities imposed by the structure of the visual system and by the neural computations provided by the visual cortex. We suggest that numerosity should be considered as a mathematical invariant. Making use of concepts from mathematical topology--like connectedness, Betti numbers, and the Gauss-Bonnet theorem--we derive the basic computations suited for the computation of this invariant. We show that the computation of numerosity is possible in a neurophysiologically plausible fashion using only computational elements which are known to exist in the visual cortex. We further show that a fundamental feature of numerosity perception, its Weber property, arises naturally, assuming noise in the basic neural operations. The model is tested on an extended data set (made publicly available). It is hoped that our results can provide a general framework for future research on the invariance properties of the numerosity system.
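The topological intuition that the numerosity of well-separated objects equals the number of connected components (the zeroth Betti number) can be sketched with a simple image-labeling operation. The sketch below uses scipy rather than the neurophysiologically plausible elements discussed in the paper, and is meant only to illustrate the invariant being computed.

```python
import numpy as np
from scipy.ndimage import label

def count_objects(binary_image):
    """Numerosity of a binary scene as its number of connected components
    (Betti-0); a stand-in for the neural computation discussed in the paper."""
    _, n_components = label(binary_image)
    return n_components

if __name__ == "__main__":
    scene = np.zeros((12, 12), dtype=int)
    scene[1:3, 1:3] = 1           # three well-separated "objects"
    scene[5:8, 6:9] = 1
    scene[9:11, 2:4] = 1
    print(count_objects(scene))   # -> 3
```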
Cantekin, Kenan; Sekerci, Ahmet Ercan; Buyuk, Suleyman Kutalmis
2013-12-01
Computed tomography (CT) is capable of providing accurate and measurable 3-dimensional images of the third molar. The aims of this study were to analyze the development of the mandibular third molar and its relation to chronological age and to create new reference data for a group of Turkish participants aged 9 to 25 years on the basis of cone-beam CT images. All data were obtained from the records of 752 patients, including medical, social, and dental anamnesis and cone-beam CT images. Linear regression analysis was performed to obtain regression formulas for dental age calculation with chronological age and to determine the coefficient of determination (r2) for each sex. Statistical analysis showed a strong correlation between age and third-molar development for the males (r2 = 0.80) and the females (r2 = 0.78). Computed tomographic images are clinically useful for accurate and reliable estimation of dental ages of children and youth.
Teodoro, George; Kurc, Tahsin; Andrade, Guilherme; Kong, Jun; Ferreira, Renato; Saltz, Joel
2015-01-01
We carry out a comparative performance study of multi-core CPUs, GPUs, and the Intel Xeon Phi (Many Integrated Core, MIC) with a microscopy image analysis application. We experimentally evaluate the performance of the computing devices on core operations of the application. We correlate the observed performance with the characteristics of the computing devices and the data access patterns, computation complexities, and parallelization forms of the operations. The results show a significant variability in the performance of operations with respect to the device used. The performance of operations with regular data access is comparable or sometimes better on a MIC than on a GPU. GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variabilities in operation speedups. Our scheduling strategies significantly improve application performance compared to classic strategies in hybrid configurations. PMID:28239253
Metabolic Flux Analysis in Isotope Labeling Experiments Using the Adjoint Approach.
Mottelet, Stephane; Gaullier, Gil; Sadaka, Georges
2017-01-01
Comprehension of metabolic pathways is considerably enhanced by metabolic flux analysis (MFA-ILE) in isotope labeling experiments. The balance equations are given by hundreds of algebraic (stationary MFA) or ordinary differential equations (nonstationary MFA), and reducing the number of operations is therefore a crucial part of reducing the computation cost. The main bottleneck for deterministic algorithms is the computation of derivatives, particularly for nonstationary MFA. In this article, we explain how the overall identification process may be speeded up by using the adjoint approach to compute the gradient of the residual sum of squares. The proposed approach shows significant improvements in terms of complexity and computation time when it is compared with the usual (direct) approach. Numerical results are obtained for the central metabolic pathways of Escherichia coli and are validated against reference software in the stationary case. The methods and algorithms described in this paper are included in the sysmetab software package distributed under an Open Source license at http://forge.scilab.org/index.php/p/sysmetab/.
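The adjoint idea for a residual sum of squares can be shown on a small linear model A(theta) x = b: one extra linear solve with the transposed system yields the gradient, instead of re-solving the model for every perturbed parameter. The sketch below is a toy illustration checked against a finite difference; it is not the MFA balance equations or the sysmetab implementation, and the model matrix and data are placeholders.

```python
import numpy as np

def forward(theta):
    """Toy stand-in for balance equations A(theta) x = b."""
    A = np.array([[2.0 + theta, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    return A, np.linalg.solve(A, b)

def rss_and_adjoint_gradient(theta, d):
    """Residual sum of squares J = 0.5*||x - d||^2 and dJ/dtheta obtained
    with a single adjoint solve."""
    A, x = forward(theta)
    r = x - d
    J = 0.5 * r @ r
    lam = np.linalg.solve(A.T, r)          # adjoint system: A^T lam = dJ/dx
    dA_dtheta = np.array([[1.0, 0.0],      # only A[0,0] depends on theta here
                          [0.0, 0.0]])
    grad = -lam @ dA_dtheta @ x            # dJ/dtheta = -lam^T (dA/dtheta) x
    return J, grad

if __name__ == "__main__":
    d = np.array([0.3, 0.6])
    theta, h = 0.5, 1e-6
    J, g = rss_and_adjoint_gradient(theta, d)
    J2, _ = rss_and_adjoint_gradient(theta + h, d)
    print("adjoint gradient:", g, " finite difference:", (J2 - J) / h)
```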
NASA Technical Reports Server (NTRS)
Yang, Y. L.; Tan, C. S.; Hawthorne, W. R.
1992-01-01
A computational method, based on a theory for turbomachinery blading design in three-dimensional inviscid flow, is applied to a parametric design study of a radial inflow turbine wheel. As the method requires the specification of swirl distribution, a technique for its smooth generation within the blade region is proposed. Excellent agreements have been obtained between the computed results from this design method and those from direct Euler computations, demonstrating the correspondence and consistency between the two. The computed results indicate the sensitivity of the pressure distribution to a lean in the stacking axis and a minor alteration in the hub/shroud profiles. Analysis based on Navier-Stokes solver shows no breakdown of flow within the designed blade passage and agreement with that from design calculation; thus the flow in the designed turbine rotor closely approximates that of an inviscid one. These calculations illustrate the use of a design method coupled to an analysis tool for establishing guidelines and criteria for designing turbomachinery blading.
[Computer aided diagnosis model for lung tumor based on ensemble convolutional neural network].
Wang, Yuanyuan; Zhou, Tao; Lu, Huiling; Wu, Cuiying; Yang, Pengfei
2017-08-01
Convolutional neural networks (CNNs) can be used for computer-aided diagnosis of lung tumors with positron emission tomography (PET)/computed tomography (CT), providing accurate quantitative analysis to compensate for visual inertia and deficits in gray-scale sensitivity and helping doctors diagnose accurately. Firstly, a parameter migration method is used to build three CNNs (CT-CNN, PET-CNN, and PET/CT-CNN) for lung tumor recognition in CT, PET, and PET/CT images, respectively. Then, CT-CNN is used to obtain appropriate model parameters for CNN training by analyzing the influence of parameters such as epochs, batch size, and image scale on recognition rate and training time. Finally, the three single CNNs are used to construct an ensemble CNN, lung tumor PET/CT recognition is completed through a relative majority vote method, and the performance of the ensemble CNN is compared with that of the single CNNs. The experimental results show that the ensemble CNN is better than a single CNN for computer-aided diagnosis of lung tumors.
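The ensemble step, a relative majority (plurality) vote over the three single-network predictions, can be sketched independently of any deep-learning framework; the class labels, per-model predictions, and tie-breaking rule below are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def relative_majority_vote(predictions):
    """Combine per-model class predictions for one sample by plurality
    (relative majority) vote; ties are broken by the first model's vote."""
    counts = Counter(predictions).most_common()
    best_label, best_count = counts[0]
    tied = [label for label, c in counts if c == best_count]
    return predictions[0] if len(tied) > 1 else best_label

if __name__ == "__main__":
    # hypothetical votes of (CT-CNN, PET-CNN, PET/CT-CNN) for three samples
    samples = [("tumor", "tumor", "normal"),
               ("normal", "tumor", "tumor"),
               ("tumor", "normal", "normal")]
    print([relative_majority_vote(s) for s in samples])
```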
Is carpal tunnel syndrome related to computer exposure at work? A review and meta-analysis.
Mediouni, Zakia; de Roquemaurel, Alexis; Dumontier, Christian; Becour, Bertrand; Garrabe, Hélène; Roquelaure, Yves; Descatha, Alexis
2014-02-01
A meta-analysis of epidemiological studies was undertaken to assess the association between carpal tunnel syndrome (CTS) and computer work. Four databases (PubMed, Embase, Web of Science, and Base de Donnees de Sante Publique) were searched, with cross-references taken from published reviews. We included recent, original epidemiological studies with a control group in which the association was assessed with blind reviewing. Relevant associations were extracted, and a meta-risk was calculated using the generic variance approach (meta-odds ratio [meta-OR]). Six studies met the criteria for inclusion. Results are contradictory because of heterogeneous work exposure. The meta-OR for computer use was 1.67 (95% confidence interval [CI], 0.79 to 3.55). The meta-OR for keyboarding was 1.11 (95% CI, 0.62 to 1.98) and for mouse use 1.94 (95% CI, 0.90 to 4.21). It was not possible to show an association between computer use and CTS, although some particular work circumstances may be associated with CTS.
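The pooling behind a meta-OR can be illustrated with a standard fixed-effect inverse-variance combination of log odds ratios; the study values below are invented, and the paper's "generic variance approach" may differ in detail from this sketch.

```python
import math

def pool_odds_ratios(odds_ratios, ci_lowers, ci_uppers, z=1.96):
    """Fixed-effect inverse-variance pooling of odds ratios reported with
    95% confidence intervals (a standard approach; the paper's exact
    'generic variance' method may differ)."""
    log_ors, weights = [], []
    for or_, lo, hi in zip(odds_ratios, ci_lowers, ci_uppers):
        se = (math.log(hi) - math.log(lo)) / (2 * z)   # SE of log OR from the CI
        log_ors.append(math.log(or_))
        weights.append(1.0 / se ** 2)
    pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

if __name__ == "__main__":
    # entirely hypothetical study-level odds ratios and 95% CIs
    print(pool_odds_ratios([1.2, 2.5, 0.9], [0.6, 1.1, 0.4], [2.4, 5.7, 2.0]))
```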
Heave-pitch-roll analysis and testing of air cushion landing systems
NASA Technical Reports Server (NTRS)
Boghani, A. B.; Captain, K. M.; Wormley, D. N.
1978-01-01
The analytical tools (analysis and computer simulation) needed to explain and predict the dynamic operation of air cushion landing systems (ACLS) are described. The following tasks were performed: development of improved analytical models for the fan and the trunk; formulation of a heave-pitch-roll analysis for the complete ACLS; development of a general-purpose computer simulation to evaluate the landing and taxi performance of an ACLS-equipped aircraft; and verification and refinement of the analysis by comparison with test data obtained through lab testing of a prototype cushion. A demonstration of the simulation capabilities through typical landing and taxi simulations of an ACLS aircraft is given. Initial results show that fan dynamics have a major effect on system performance. Comparison with lab test data (zero forward speed) indicates that the analysis can predict most of the key static and dynamic parameters (pressure, deflection, acceleration, etc.) within a margin of 10 to 25 percent.
Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2008-01-01
A new approach is described for evaluating fracture in composite structures. The approach is independent of classical fracture mechanics parameters such as fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one built-up composite structure. Results show that global fracture of both the composite shells and the built-up composite structure is enhanced when internal pressure is combined with shear loads.
A Computational Approach for Probabilistic Analysis of Water Impact Simulations
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.
2009-01-01
NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many similar challenges to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
Subsonic Analysis of 0.04-Scale F-16XL Models Using an Unstructured Euler Code
NASA Technical Reports Server (NTRS)
Lessard, Wendy B.
1996-01-01
The subsonic flow field about an F-16XL airplane model configuration was investigated with an inviscid unstructured grid technique. The computed surface pressures were compared to wind-tunnel test results at Mach 0.148 for a range of angles of attack from 0 deg to 20 deg. To evaluate the effect of grid dependency on the solution, a grid study was performed in which fine, medium, and coarse grid meshes were generated. The off-surface vortical flow field was locally adapted and showed improved correlation with the wind-tunnel data when compared to the nonadapted flow field. Computational results are also compared to experimental five-hole pressure probe data. A detailed analysis of the off-body computed pressure contours, velocity vectors, and particle traces is presented and discussed.
Layered Architectures for Quantum Computers and Quantum Repeaters
NASA Astrophysics Data System (ADS)
Jones, Nathan C.
This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for the coalescent with large sample size. The obtained results are based on computational methodologies which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluating the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for the analysis of a large human mitochondrial DNA dataset. PMID:28170404
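For the constant-size case the building blocks are standard: while k lineages remain, the waiting time to the next coalescence is exponential with rate k(k-1)/2 in coalescent units, so E[T_k] = 2/(k(k-1)). The sketch below illustrates only this constant-size case with a naive simulation; it does not implement the paper's methodology for time-varying population sizes or large-sample numerics.

```python
import random

def expected_coalescence_times(n):
    """E[T_k] = 2 / (k (k-1)) coalescent units while k lineages remain
    (constant population size; the paper also treats time-varying sizes)."""
    return {k: 2.0 / (k * (k - 1)) for k in range(n, 1, -1)}

def simulate_tmrca(n, seed=0):
    """Draw one time to the most recent common ancestor of an n-sample."""
    random.seed(seed)
    t = 0.0
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2.0
        t += random.expovariate(rate)
    return t

if __name__ == "__main__":
    n = 20
    print(sum(expected_coalescence_times(n).values()))   # E[TMRCA] = 2(1 - 1/n)
    print(sum(simulate_tmrca(n, seed=s) for s in range(2000)) / 2000)
```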
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
Natural three-qubit interactions in one-way quantum computing
NASA Astrophysics Data System (ADS)
Tame, M. S.; Paternostro, M.; Kim, M. S.; Vedral, V.
2006-02-01
We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.
ELM Meets Urban Big Data Analysis: Case Studies
Chen, Huajun; Chen, Jiaoyan
2016-01-01
In recent years, the rapid progress of urban computing has raised big issues that create both opportunities and challenges. The heterogeneity and large volume of the data, together with the big difference between the physical and virtual worlds, have made it difficult to solve practical problems in urban computing quickly. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203
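The core of an extreme learning machine (ELM) is compact: a random, untrained hidden layer followed by output weights fitted in closed form by least squares. The sketch below shows that core on synthetic data; the hidden-layer size, activation, and data are placeholders, and none of the urban datasets or the paper's framework are reproduced here.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random hidden-layer weights plus output
    weights fitted in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(400, 2))           # synthetic regression task
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 400)
    W, b, beta = train_elm(X, y)
    rmse = float(np.sqrt(np.mean((predict_elm(X, W, b, beta) - y) ** 2)))
    print("train RMSE:", rmse)
```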
NASA Technical Reports Server (NTRS)
1973-01-01
A computer program for space shuttle orbit injection propulsion system analysis (SOPSA) is described to show the operational characteristics and the computer system requirements. The program was developed as an analytical tool to aid in the preliminary design of propellant feed systems for the space shuttle orbiter main engines. The primary purpose of the program is to evaluate the propellant tank ullage pressure requirements imposed by the need to accelerate propellants rapidly during the engine start sequence. The SOPSA program will generate parametric feed system pressure histories and weight data for a range of nominal feedline sizes.
Tissue classification for laparoscopic image understanding based on multispectral texture analysis
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena
2016-03-01
Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
Fast linear feature detection using multiple directional non-maximum suppression.
Sun, C; Vallotton, P
2009-05-01
The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
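As an illustrative sketch only (not the authors' implementation, which adds symmetry checking and gap linking), the core idea of multi-directional non-maximum suppression can be expressed as keeping pixels that are local maxima along at least one sampled direction; the function name, number of directions, and neighborhood radius below are assumptions.

```python
import numpy as np

def directional_nms(image, n_directions=4, radius=2):
    """Keep pixels that are local maxima along at least one of several
    straight directions -- a rough sketch of multi-directional
    non-maximum suppression for bright linear features."""
    h, w = image.shape
    keep = np.zeros((h, w), dtype=bool)
    angles = np.pi * np.arange(n_directions) / n_directions  # 0 .. pi
    for theta in angles:
        dy, dx = np.sin(theta), np.cos(theta)
        is_max = np.ones((h, w), dtype=bool)
        for r in range(1, radius + 1):
            for sign in (-1, 1):
                oy = int(round(sign * r * dy))
                ox = int(round(sign * r * dx))
                # neighbor along the sampled direction (wraps at borders)
                shifted = np.roll(np.roll(image, oy, axis=0), ox, axis=1)
                is_max &= image >= shifted
        keep |= is_max
    return keep

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[32, :] = 1.0                                   # a horizontal bright line
    img += 0.05 * np.random.default_rng(0).random(img.shape)
    candidates = directional_nms(img)
    print("candidate pixels on row 32:", candidates[32, 5:60].sum())
```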
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
Large Eddy Simulation of "turbulent-like" flow in intracranial aneurysms
NASA Astrophysics Data System (ADS)
Khan, Muhammad Owais; Chnafa, Christophe; Steinman, David A.; Mendez, Simon; Nicoud, Franck
2016-11-01
Hemodynamic forces are thought to contribute to pathogenesis and rupture of intracranial aneurysms (IA). Recent high-resolution patient-specific computational fluid dynamics (CFD) simulations have highlighted the presence of "turbulent-like" flow features, characterized by transient high-frequency flow instabilities. In-vitro studies have shown that such "turbulent-like" flows can lead to lack of endothelial cell orientation and cell depletion, and thus may also have relevance to IA rupture risk assessment. From a modelling perspective, previous studies have relied on DNS to resolve the small-scale structures in these flows. While accurate, DNS is clinically infeasible due to high computational cost and long simulation times. In this study, we demonstrate the applicability of LES for IAs using an LES/blood-flow-dedicated solver (YALES2BIO) and compare against the corresponding DNS. As a qualitative analysis, we compute time-averaged WSS and OSI maps, as well as novel frequency-based WSS indices. As a quantitative analysis, we show the differences in POD eigenspectra between LES and DNS and a wavelet analysis of intra-saccular velocity traces. Results obtained with two SGS models (Dynamic Smagorinsky and Sigma) are also compared against DNS, and the computational gains of LES are discussed.
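For readers unfamiliar with the hemodynamic indices mentioned above, the sketch below shows one common way to compute the time-averaged WSS magnitude (TAWSS) and the oscillatory shear index (OSI) from a sampled time history of wall shear stress vectors; the array shapes and the rectangle-rule integration are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def tawss_and_osi(wss, dt):
    """wss: array of shape (n_steps, n_points, 3) holding wall shear stress
    vectors sampled over one cardiac cycle at each surface point.
    Returns TAWSS and OSI = 0.5 * (1 - |time-mean vector| / time-mean |vector|)."""
    period = wss.shape[0] * dt
    int_vec = wss.sum(axis=0) * dt                        # integral of the vector
    int_mag = np.linalg.norm(wss, axis=2).sum(axis=0) * dt
    tawss = int_mag / period
    osi = 0.5 * (1.0 - np.linalg.norm(int_vec, axis=1) / np.maximum(int_mag, 1e-12))
    return tawss, osi

# toy check: a purely oscillating shear vector gives OSI ~ 0.5,
# a steady one gives OSI ~ 0
t = np.linspace(0.0, 1.0, 200, endpoint=False)
osc = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
steady = np.tile([1.0, 0.0, 0.0], (200, 1))
wss = np.stack([osc, steady], axis=1)                     # shape (200, 2, 3)
print(tawss_and_osi(wss, dt=t[1] - t[0]))
```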
Teodoro, George; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2014-01-01
We study and characterize the performance of operations in an important class of applications on GPUs and Many Integrated Core (MIC) architectures. Our work is motivated by applications that analyze low-dimensional spatial datasets captured by high resolution sensors, such as image datasets obtained from whole slide tissue specimens using microscopy scanners. Common operations in these applications involve the detection and extraction of objects (object segmentation), the computation of features of each extracted object (feature computation), and characterization of objects based on these features (object classification). In this work, we have identified the data access and computation patterns of operations in the object segmentation and feature computation categories. We systematically implement and evaluate the performance of these operations on modern CPUs, GPUs, and MIC systems for a microscopy image analysis application. Our results show that the performance on a MIC of operations that perform regular data access is comparable or sometimes better than that on a GPU. On the other hand, GPUs are significantly more efficient than MICs for operations that access data irregularly. This is a result of the low performance of MICs when it comes to random data access. We have also examined the coordinated use of MICs and CPUs. Our experiments show that using a performance-aware task strategy for scheduling application operations improves performance by about 1.29× over a first-come-first-served strategy. This allows applications to obtain high performance efficiency on CPU-MIC systems: the example application attained an efficiency of 84% on 192 nodes (3072 CPU cores and 192 MICs). PMID:25419088
Critical Literacy: Does Advertising Show Gender and Cultural Stereotyping?
ERIC Educational Resources Information Center
Russo, Elizabeth
1996-01-01
The critical literacy component of an adult program developed skills in analyzing media advertising; using math for data analysis, graphing, and computation; interpreting data; and becoming aware of advertising's part in reinforcing gender roles. (SK)
Traffic analysis and control using image processing
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.
2017-11-01
This paper presents work to date on traffic analysis and control. It describes an approach that regulates traffic using image processing and MATLAB. The concept compares captured street images with reference images to estimate the traffic level percentage and to set the traffic signal timing accordingly, reducing stoppage at traffic lights. It proposes to address real-life street scenarios by enriching traffic lights with image receivers such as HD cameras and image processors. The input is then imported into MATLAB as the basis for calculating the traffic on the roads, and the results are used to adjust the traffic light timings on a particular street. The approach is also compared with other similar proposals, with the added value of solving a real, large-scale instance.
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational-method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
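To make the cost argument concrete, the sketch below contrasts an adjoint (costate) gradient with a finite-difference gradient on a tiny discrete linear-system analogue; the 2x2 system, the objective, and all symbols are invented for illustration and merely stand in for the paper's continuous variational formulation.

```python
import numpy as np

# Discrete analogue of adjoint-based sensitivity (a sketch, not the paper's
# continuous formulation): state equation A(b) q = f, objective J = c^T q,
# design variable b.
def solve_state(b, f):
    A = np.array([[2.0 + b, -1.0], [-1.0, 2.0]])
    return A, np.linalg.solve(A, f)

f = np.array([1.0, 0.0])
c = np.array([1.0, 1.0])
b0 = 0.3

A, q = solve_state(b0, f)
lam = np.linalg.solve(A.T, c)             # adjoint (costate) solve: A^T lam = c
dA_db = np.array([[1.0, 0.0], [0.0, 0.0]])
dJ_db_adjoint = -lam @ (dA_db @ q)        # dJ/db = -lam^T (dA/db) q

eps = 1e-6                                # finite-difference check (two extra solves)
_, q_p = solve_state(b0 + eps, f)
_, q_m = solve_state(b0 - eps, f)
dJ_db_fd = (c @ q_p - c @ q_m) / (2 * eps)
print(dJ_db_adjoint, dJ_db_fd)            # the two gradients should agree closely
```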
An Integrated Nonlinear Analysis library - (INA) for solar system plasma turbulence
NASA Astrophysics Data System (ADS)
Munteanu, Costel; Kovacs, Peter; Echim, Marius; Koppan, Andras
2014-05-01
We present an integrated software library dedicated to the analysis of time series recorded in space and adapted to investigate turbulence, intermittency and multifractals. The library is written in MATLAB and provides a graphical user interface (GUI) customized for the analysis of space physics data available online, such as: Coordinated Data Analysis Web (CDAWeb), Automated Multi Dataset Analysis system (AMDA), Planetary Science Archive (PSA), World Data Center Kyoto (WDC), Ulysses Final Archive (UFA) and Cluster Active Archive (CAA). Three main modules are already implemented in INA: the Power Spectral Density (PSD) Analysis, the Wavelet and Intermittency Analysis and the Probability Density Functions (PDF) Analysis. The layered structure of the software allows the user to easily switch between different modules/methods while retaining the same time interval for the analysis. The wavelet analysis module includes algorithms to compute and analyse the PSD, the Scalogram, the Local Intermittency Measure (LIM) or the Flatness parameter. The PDF analysis module includes algorithms for computing the PDFs for a range of scales and parameters fully customizable by the user; it also computes the Flatness parameter and enables fast comparison with standard PDF profiles like, for instance, the Gaussian PDF. The library has already been tested on Cluster and Venus Express data and we will show relevant examples. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS UEFISCDI, project number PN-II-ID PCE-2012-4-0418.
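The library itself is written in MATLAB; as a language-neutral illustration of one of the quantities it computes, the flatness parameter can be estimated from signal increments (a common proxy for wavelet coefficients at a given scale), as sketched below. The scales and the use of increments rather than true wavelet coefficients are simplifying assumptions.

```python
import numpy as np

def flatness(signal, scales):
    """Flatness (kurtosis of increments) as a simple intermittency proxy:
    F(s) = <dx_s^4> / <dx_s^2>^2, with dx_s(t) = x(t+s) - x(t).
    F ~ 3 for a Gaussian, self-similar signal; growth of F toward small
    scales indicates intermittency."""
    out = []
    for s in scales:
        dx = signal[s:] - signal[:-s]
        out.append(np.mean(dx ** 4) / np.mean(dx ** 2) ** 2)
    return np.array(out)

rng = np.random.default_rng(1)
random_walk = np.cumsum(rng.normal(size=100_000))   # Gaussian increments
scales = [1, 2, 4, 8, 16, 32]
print(flatness(random_walk, scales))                # ~3 at all scales
```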
BioPig: Developing Cloud Computing Applications for Next-Generation Sequence Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatia, Karan; Wang, Zhong
Next Generation sequencing is producing ever larger data sizes with a growth rate outpacing Moore's Law. The data deluge has made many of the current sequence analysis tools obsolete because they do not scale with data. Here we present BioPig, a collection of cloud computing tools to scale data analysis and management. Pig is a flexible data scripting language that uses Apache's Hadoop data structure and map reduce framework to process very large data files in parallel and combine the results. BioPig extends Pig with sequence analysis capability. We will show the performance of BioPig on a variety of bioinformatics tasks, including screening sequence contaminants, Illumina QA/QC, and gene discovery from metagenome data sets using the Rumen metagenome as an example.
Design and Analysis of a Turbopump for a Conceptual Expander Cycle Upper-Stage Engine
NASA Technical Reports Server (NTRS)
Dorney, Daniel J.; Rothermel, Jeffry; Griffin, Lisa W.; Thornton, Randall J.; Forbes, John C.; Skelly, Stephen E.; Huber, Frank W.
2006-01-01
As part of the development of technologies for rocket engines that will power spacecraft to the Moon and Mars, a program was initiated to develop a conceptual upper stage engine with wide flow range capability. The resulting expander cycle engine design employs a radial turbine to allow higher pump speeds and efficiencies. In this paper, the design and analysis of the pump section of the engine are discussed. One-dimensional meanline analyses and three-dimensional unsteady computational fluid dynamics simulations were performed for the pump stage. Configurations with both vaneless and vaned diffusers were investigated. Both the meanline analysis and computational predictions show that the pump will meet the performance objectives. Additional details describing the development of a water flow facility test are also presented.
NASA Technical Reports Server (NTRS)
Chang, I. C.
1984-01-01
A new computer program is presented for calculating the quasi-steady transonic flow past a helicopter rotor blade in hover as well as in forward flight. The program is based on the full potential equations in a blade attached frame of reference and is capable of treating a very general class of rotor blade geometries. Computed results show good agreement with available experimental data for both straight and swept tip blade geometries.
Quantum attack-resistant certificateless multi-receiver signcryption scheme.
Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong
2013-01-01
The existing certificateless signcryption schemes were designed mainly based on traditional public key cryptography, in which the security relies on hard problems such as integer factorization and the discrete logarithm. However, these problems will be easily solved by quantum computing, so the existing certificateless signcryption schemes are vulnerable to the quantum attack. Multivariate public key cryptography (MPKC), which can resist the quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we proposed a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC, which can withstand the quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We proved its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards.
2001-10-25
Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique 18,5,17,6. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for
WALSH, TIMOTHY F.; JONES, ANDREA; BHARDWAJ, MANOJ; ...
2013-04-01
Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating parametric coordinates within the host infinite element of far-field points, the parallelization of the overall process, linear solver requirements, and system stability considerations.
Mohammadi, Mehrnoosh; RezaeiDehaghani, Abdollah; Mehrabi, Tayebeh; RezaeiDehaghani, Ali
2016-01-01
As adolescents spend much time playing computer games, the mental and social effects of these games should be considered. The present study aimed to investigate the association between playing computer games and mental and social health among male adolescents in Iran in 2014. This is a cross-sectional study conducted on 210 adolescents selected by multi-stage random sampling. Data were collected by the Goldberg and Hillier general health (28 items) and Kiez social health questionnaires. The association was tested by Pearson and Spearman correlation coefficients, one-way analysis of variance (ANOVA), and independent t-test. Computer game-related factors such as the location, type, length, the adopted device, and mode of playing games were investigated. Results showed that 58.9% of the subjects played games on a computer alone for 1 h at home. Results also revealed that the subjects had appropriate mental health and 83.2% had moderate social health. Results showed a weak but significant association between the length of games and social health (r = -0.15, P = 0.03), the type of games and mental health (r = -0.16, P = 0.01), and the device used in playing games and social health (F = 0.95, P = 0.03). The findings showed that adolescents' mental and social health is negatively associated with their playing computer games. Therefore, to promote their health, educating them about the correct way of playing computer games is essential, and their parents and school authorities, including nurses working at schools, should pay attention to relevant factors such as the type, length, and device used in playing such games.
Use of parallel computing for analyzing big data in EEG studies of ambiguous perception
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.
2018-02-01
The problem of interaction between human and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task which requires the analysis of large amounts of neurophysiological EEG data. In the present paper we consider the methods of parallel computing as one of the most powerful tools for processing experimental data in real time with respect to the multichannel structure of EEG. In this context we demonstrate the application of parallel computing for the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
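The paper's implementation uses CUDA C on GPUs; as a rough, CPU-only illustration of the same idea, the sketch below distributes Morlet-wavelet power estimation across EEG channels with Python multiprocessing. The sampling rate, wavelet width, and channel count are assumed values, not parameters from the study.

```python
import numpy as np
from multiprocessing import Pool

FS = 250.0  # sampling rate in Hz (assumed)

def morlet_power(args):
    """Wavelet power of one EEG channel at several frequencies,
    via FFT-based convolution with complex Morlet wavelets."""
    signal, freqs = args
    n = len(signal)
    t = (np.arange(n) - n // 2) / FS
    sig_f = np.fft.fft(signal)
    power = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        sigma = 7.0 / (2 * np.pi * f)                 # ~7 cycles per wavelet
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma ** 2))
        wav /= np.abs(wav).sum()
        conv = np.fft.ifft(sig_f * np.fft.fft(np.fft.ifftshift(wav)))
        power[i] = np.abs(conv) ** 2
    return power

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=(32, 5000))                 # 32 channels, 20 s of data
    freqs = np.arange(4, 31, 2.0)                     # 4-30 Hz
    with Pool() as pool:                              # one task per channel
        spectra = pool.map(morlet_power, [(ch, freqs) for ch in eeg])
    print(np.array(spectra).shape)                    # (32, n_freqs, n_samples)
```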
Ghosh, Payel; Chandler, Adam G; Altinmakas, Emre; Rong, John; Ng, Chaan S
2016-01-01
The aim of this study was to investigate the feasibility of shuttle-mode computed tomography (CT) technology for body perfusion applications by quantitatively assessing and correcting motion artifacts. Noncontrast shuttle-mode CT scans (10 phases, 2 nonoverlapping bed locations) were acquired from 4 patients on a GE 750HD CT scanner. Shuttling effects were quantified using Euclidean distances (between-phase and between-bed locations) of corresponding fiducial points on the shuttle and reference phase scans (prior to shuttle mode). Motion correction with nonrigid registration was evaluated using sum-of-squares differences and distances between centers of segmented volumes of interest on shuttle and reference images. Fiducial point analysis showed average shuttling motions of 0.85 ± 1.05 mm (between-bed) and 1.18 ± 1.46 mm (between-phase), respectively. The volume-of-interest analysis of the nonrigid registration results showed that, averaged over all cases, the sum-of-squares difference improved from 2950 to 597, the between-bed distance from 1.64 to 1.20 mm, and the between-phase distance from 2.64 to 1.33 mm. Shuttling effects introduced during shuttle-mode CT acquisitions can be computationally corrected for body perfusion applications.
3D-Printed Tissue-Mimicking Phantoms for Medical Imaging and Computational Validation Applications
Shahmirzadi, Danial; Li, Ronny X.; Doyle, Barry J.; Konofagou, Elisa E.; McGloughlin, Tim M.
2014-01-01
Abdominal aortic aneurysm (AAA) is a permanent, irreversible dilation of the distal region of the aorta. Recent efforts have focused on improved AAA screening and biomechanics-based failure prediction. Idealized and patient-specific AAA phantoms are often employed to validate numerical models and imaging modalities. To produce such phantoms, the investment casting process is frequently used, reconstructing the 3D vessel geometry from computed tomography patient scans. In this study the alternative use of 3D printing to produce phantoms is investigated. The mechanical properties of flexible 3D-printed materials are benchmarked against proven elastomers. We demonstrate the utility of this process with particular application to the emerging imaging modality of ultrasound-based pulse wave imaging, a noninvasive diagnostic methodology being developed to obtain regional vascular wall stiffness properties, differentiating normal and pathologic tissue in vivo. Phantom wall displacements under pulsatile loading conditions were observed, showing good correlation to fluid–structure interaction simulations and regions of peak wall stress predicted by finite element analysis. 3D-printed phantoms show a strong potential to improve medical imaging and computational analysis, potentially helping bridge the gap between experimental and clinical diagnostic tools. PMID:28804733
Logic as Marr's Computational Level: Four Case Studies.
Baggio, Giosuè; van Lambalgen, Michiel; Hagoort, Peter
2015-04-01
We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition. Copyright © 2014 Cognitive Science Society, Inc.
Multiscale Multifunctional Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Minnetyan, L.
2012-01-01
A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element, updating scheme, local fracture, global fracture, stress based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that the global fracture of the composite shells is enhanced when internal pressure is combined with shear loads. The old reference denotes that nothing has been added to this comprehensive report since then.
Analysis of rotor vibratory loads using higher harmonic pitch control
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.
1992-01-01
Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. The modification of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high-frequency pitch inputs is summarized. The original RotorCRAFT code was developed for use in the computation of such loading, and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near wake unsteady effects; and preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.
Analysis on laser plasma emission for characterization of colloids by video-based computer program
NASA Astrophysics Data System (ADS)
Putri, Kirana Yuniati; Lumbantoruan, Hendra Damos; Isnaeni
2016-02-01
Laser-induced breakdown detection (LIBD) is a sensitive technique for the characterization of colloids of small size and low concentration. There are two types of detection, optical and acoustic. Optical LIBD employs a CCD camera to capture the plasma emission and uses the information to quantify the colloids. This technique requires sophisticated technology which is often pricey. In order to build a simple, home-made LIBD system, a dedicated computer program based on MATLAB™ for analyzing laser plasma emission was developed. The analysis was conducted by counting the number of plasma emissions (breakdowns) during a certain period of time. The breakdown probability provided information on colloid size and concentration. A validation experiment showed that the computer program performed well in analyzing the plasma emissions. A graphical user interface (GUI) was also developed to make the program more user-friendly.
Product or waste? Importation and end-of-life processing of computers in Peru.
Kahhat, Ramzy; Williams, Eric
2009-08-01
This paper considers the importation of used personal computers (PCs) in Peru and domestic practices in their production, reuse, and end-of-life processing. The empirical pillars of this study are analysis of government data describing trade in used and new computers and surveys and interviews of computer sellers, refurbishers, and recyclers. The United States is the primary source of used PCs imported to Peru. Analysis of shipment value (as measured by trade statistics) shows that 87-88% of imported used computers had a price higher than the ideal recycle value of constituent materials. The official trade in end-of-life computers is thus driven by reuse as opposed to recycling. The domestic reverse supply chain of PCs is well developed with extensive collection, reuse, and recycling. Environmental problems identified include open burning of copper-bearing wires to remove insulation and landfilling of CRT glass. Distinct from informal recycling in China and India, printed circuit boards are usually not recycled domestically but exported to Europe for advanced recycling or to China for (presumably) informal recycling. It is notable that purely economic considerations lead to circuit boards being exported to Europe where environmental standards are stringent, presumably due to higher recovery of precious metals.
Computer image analysis of etched tracks from ionizing radiation
NASA Technical Reports Server (NTRS)
Blanford, George E.
1994-01-01
I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Distributed collaborative response surface method for mechanical dynamic assembly reliability design
NASA Astrophysics Data System (ADS)
Bai, Guangchen; Fei, Chengwei
2013-11-01
Because of the randomness of many impact factors influencing the dynamic assembly relationship of complex machinery, the reliability analysis of the dynamic assembly relationship needs to be accomplished from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, the mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on the quadratic response surface function, and verified by the assembly relationship reliability analysis of aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Through the comparison of the DCRSM, the traditional response surface method (RSM) and the Monte Carlo method (MCM), the results show not only that the DCRSM can accomplish the computational task, which is impossible for the other methods when the number of simulations exceeds 100 000, but also that the computational precision of the DCRSM is basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, the computational efficiency of the DCRSM is up to about 188 times that of the MCM and 55 times that of the RSM for 10 000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.
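As a generic illustration of the response-surface ingredient of such methods (not the DCRSM itself), the sketch below fits a quadratic response surface by least squares and then uses it as a cheap surrogate for Monte Carlo reliability estimation; the toy response function, variable count, and threshold are invented.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Least-squares fit of a quadratic response surface
    y ~ a + sum_i b_i x_i + sum_{i<=j} c_ij x_i x_j
    (a sketch of the surrogate used in response-surface reliability methods)."""
    n, d = X.shape

    def design(Xq):
        cols = [np.ones(Xq.shape[0])]
        cols += [Xq[:, i] for i in range(d)]
        cols += [Xq[:, i] * Xq[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return lambda Xq: design(Xq) @ coef

rng = np.random.default_rng(0)
true = lambda x: 1.0 + 0.5 * x[:, 0] - 0.2 * x[:, 1] + 0.3 * x[:, 0] * x[:, 1]
X_train = rng.normal(size=(50, 2))
surrogate = fit_quadratic_rs(X_train, true(X_train))

# cheap Monte Carlo on the surrogate, e.g. probability that the response
# exceeds a clearance-like threshold
X_mc = rng.normal(size=(100_000, 2))
print(np.mean(surrogate(X_mc) > 1.5))
```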
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as the gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of sensitivity equations. The computation of partial derivatives of complex equations either by the analytic method or by symbolic manipulation is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with an adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to solve the solution and dynamic sensitivities of time-delay systems described by DDEs. To save the human effort and avoid the human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. Compared in theory with direct-coupled methods, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time-delays.
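As a minimal illustration of direct-method dynamic sensitivities (for an ODE rather than a DDE, and with a hand-derived Jacobian rather than automatic differentiation), the sketch below augments a one-parameter decay model with its sensitivity equation and checks the result against the analytic derivative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct-method dynamic sensitivity for dx/dt = -k*x (a sketch only; the
# paper's algorithm handles DDEs and evaluates the Jacobian by automatic
# differentiation instead of by hand).
def rhs(t, y, k):
    x, s = y                      # s = dx/dk, the sensitivity state
    dx = -k * x
    ds = -x - k * s               # d/dt (dx/dk) = df/dk + (df/dx) * s
    return [dx, ds]

k, x0 = 0.7, 2.0
sol = solve_ivp(rhs, (0.0, 5.0), [x0, 0.0], args=(k,), rtol=1e-8, atol=1e-10)
t_end = sol.t[-1]
print("numerical dx/dk:", sol.y[1, -1])
print("analytic  dx/dk:", -x0 * t_end * np.exp(-k * t_end))
```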
NASA Astrophysics Data System (ADS)
Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae
2012-09-01
This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
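As an illustrative sketch of the kind of first-order texture features listed above (the paper's exact definitions, computed on wavelet-transformed ROAs, may differ), the code below computes the six statistics from the gray-level histogram of a 50 × 50 region; the bin count and normalization convention are assumptions.

```python
import numpy as np

def first_order_texture_features(roa, bins=256):
    """First-order statistical texture features of a region of analysis (ROA),
    in the spirit of the six parameters listed above."""
    p, _ = np.histogram(roa, bins=bins, range=(0, bins), density=True)
    levels = np.arange(bins)
    mean = np.sum(levels * p)                        # average gray level
    var = np.sum((levels - mean) ** 2 * p)
    contrast = np.sqrt(var)                          # average contrast (std. dev.)
    smoothness = 1.0 - 1.0 / (1.0 + var)             # relative smoothness (one common convention)
    skewness = np.sum((levels - mean) ** 3 * p) / (var ** 1.5 + 1e-12)
    uniformity = np.sum(p ** 2)                      # energy
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return dict(mean=mean, contrast=contrast, smoothness=smoothness,
                skewness=skewness, uniformity=uniformity, entropy=entropy)

rng = np.random.default_rng(0)
roa = rng.integers(40, 90, size=(50, 50))            # a toy 50 x 50 ROA
print(first_order_texture_features(roa))
```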
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector Supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 Supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
Interlake production established using quantitative hydrocarbon well-log analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lancaster, J.; Atkinson, A.
1988-07-01
Production was established in a new pay zone of the basal Interlake Formation adjacent to production in Midway field in Williams County, North Dakota. Hydrocarbon saturation, which was computed using hydrocarbon well-log (mud-log) data, and computed permeability encouraged the operator to run casing and test this zone. By use of drilling rig parameters, drilling mud properties, hydrocarbon-show data from the mud log, drilled rock and porosity descriptions, and wireline log porosity, this new technique computes oil saturation (percent of porosity) and permeability to the invading filtrate, using the Darcy equation. The Leonardo Fee well was drilled to test the Devonian Duperow, the Silurian upper Interlake, and the Ordovician Red River. The upper two objectives were penetrated downdip from Midway production and there were no hydrocarbon shows. It was determined that the Red River was tight, based on sample examination by well site personnel. The basal Interlake, however, liberated hydrocarbon shows that were analyzed by this new technology. The results of this evaluation accurately predicted this well would be a commercial success when placed in production. Where geophysical log analysis might be questionable, this new evaluation technique may provide answers to anticipated oil saturation and producibility. The encouraging results of hydrocarbon saturation and permeability, produced by this technique, may be largely responsible for the well being in production today.
A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris
2008-01-01
NASA and its contractors are working on structural concepts for absorbing impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material property uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation between single-cell load/deflection data and LS-DYNA predictions showed problems which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using a genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
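As a schematic illustration of the reconciliation step (with a toy algebraic model standing in for an LS-DYNA run, and scipy's differential evolution standing in for the genetic optimization), the sketch below searches bounded parameters that minimize the squared error between mean test responses and model predictions; all names, bounds, and data are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a crush simulation: force rises linearly then plateaus.
def model_response(params, displacements):
    stiffness, yield_load = params
    return np.minimum(stiffness * displacements, yield_load)

displacements = np.linspace(0.0, 10.0, 20)
test_mean = np.minimum(3.1 * displacements, 18.0)        # pretend mean test data

def squared_error(params):
    return np.sum((model_response(params, displacements) - test_mean) ** 2)

result = differential_evolution(                          # genetic-style optimizer
    squared_error, bounds=[(1.0, 6.0), (5.0, 30.0)], seed=0)
print(result.x, result.fun)                               # recovered parameters, residual
```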
Global spectral graph wavelet signature for surface analysis of carpal bones
NASA Astrophysics Data System (ADS)
Masoumi, Majid; Rezaei, Mahsa; Ben Hamza, A.
2018-02-01
Quantitative shape comparison is a fundamental problem in computer vision, geometry processing and medical imaging. In this paper, we present a spectral graph wavelet approach for shape analysis of carpal bones of the human wrist. We employ spectral graph wavelets to represent the cortical surface of a carpal bone via the spectral geometric analysis of the Laplace-Beltrami operator in the discrete domain. We propose global spectral graph wavelet (GSGW) descriptor that is isometric invariant, efficient to compute, and combines the advantages of both low-pass and band-pass filters. We perform experiments on shapes of the carpal bones of ten women and ten men from a publicly-available database of wrist bones. Using one-way multivariate analysis of variance (MANOVA) and permutation testing, we show through extensive experiments that the proposed GSGW framework gives a much better performance compared to the global point signature embedding approach for comparing shapes of the carpal bones across populations.
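As a small illustration of the underlying machinery (not the GSGW descriptor itself), the sketch below builds a combinatorial graph Laplacian, takes its eigendecomposition, and evaluates a band-pass spectral kernel at a few scales to obtain a per-vertex wavelet signature; the toy ring graph and kernel choice are assumptions.

```python
import numpy as np

def spectral_graph_wavelet_signature(W, scales):
    """Per-vertex spectral graph wavelet signature on a graph with symmetric
    adjacency matrix W: sig_t(v) = sum_l g(t * lam_l) * phi_l(v)^2, using a
    Mexican-hat-like band-pass kernel g(x) = x * exp(-x)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                          # combinatorial graph Laplacian
    lam, phi = np.linalg.eigh(L)                # phi[:, l] is the l-th eigenvector
    sig = np.empty((W.shape[0], len(scales)))
    for j, t in enumerate(scales):
        g = (t * lam) * np.exp(-t * lam)        # band-pass spectral kernel
        sig[:, j] = (phi ** 2) @ g
    return sig

# toy graph: a ring of 20 vertices
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
print(spectral_graph_wavelet_signature(W, scales=[0.5, 2.0, 8.0]).shape)
```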
Mazumdar, Maitreyi; Pandharipande, Pari; Poduri, Annapurna
2007-02-01
A recent trial suggested that albendazole reduces seizures in adults with neurocysticercosis. There is still no consensus regarding optimal management of neurocysticercosis in children. The authors conducted a systematic review and meta-analysis to assess the efficacy of albendazole in children with neurocysticercosis, by searching the Cochrane Databases, MEDLINE, EMBASE, and LILACS. Three reviewers extracted data using an intent-to-treat analysis. Random effects models were used to estimate relative risks. Four randomized trials were selected for meta-analysis, and 10 observational studies were selected for qualitative review. The relative risk of seizure remission in treatment versus control was 1.26 (1.09, 1.46). The relative risk of improvement in computed tomography in these trials was 1.15 (0.97, 1.36). Review of observational studies showed conflicting results, likely owing to preferential administration of albendazole to sicker children.
Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.
2011-01-01
Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation's effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating included more accurate curvature values for simulated data, and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolations all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029
de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549
Privacy-preserving GWAS analysis on federated genomic datasets.
Constable, Scott D; Tang, Yuzhe; Wang, Shuang; Jiang, Xiaoqian; Chapin, Steve
2015-01-01
The biomedical community benefits from the increasing availability of genomic data to support meaningful scientific research, e.g., Genome-Wide Association Studies (GWAS). However, high quality GWAS usually requires a large number of samples, which can grow beyond the capability of a single institution. Federated genomic data analysis holds the promise of enabling cross-institution collaboration for effective GWAS, but it raises concerns about patient privacy and medical information confidentiality (as data are being exchanged across institutional boundaries), which becomes an inhibiting factor for practical use. We present a privacy-preserving GWAS framework on federated genomic datasets. Our method is to layer the GWAS computations on top of secure multi-party computation (MPC) systems. This approach allows two parties in a distributed system to mutually perform secure GWAS computations, but without exposing their private data outside. We demonstrate our technique by implementing a framework for minor allele frequency counting and χ2 statistics calculation, one of the typical computations used in GWAS. For efficient prototyping, we use a state-of-the-art MPC framework, i.e., Portable Circuit Format (PCF) 1. Our experimental results show promise in realizing both efficient and secure cross-institution GWAS computations.
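As a plaintext illustration of the statistic being computed (the paper's contribution is performing this arithmetic under secure multi-party computation so that raw counts never leave an institution), the sketch below evaluates the allele-count chi-square test and minor allele frequency for a single variant; the counts are invented.

```python
import numpy as np
from scipy.stats import chi2

def allele_chi2(case_counts, control_counts):
    """Chi-square test on a 2x2 allele-count table (cases vs. controls,
    minor vs. major allele) plus the case minor allele frequency.
    Purely illustrative; the paper evaluates this inside an MPC circuit."""
    table = np.array([case_counts, control_counts], dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()
    stat = ((table - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=1)
    maf_cases = case_counts[0] / sum(case_counts)     # minor allele frequency
    return stat, p_value, maf_cases

print(allele_chi2((120, 880), (80, 920)))
```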
Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
NASA Astrophysics Data System (ADS)
Sathya, K.; Dhamodharan, P.; Dhandapani, M.
2017-06-01
Single crystals of 1H-benzo[d]imidazol-3-ium-3,5-dinitrobenzoate (BDNB) were grown by reacting 3,5-dinitrobenzoic acid and benzimidazole using the slow evaporation method. UV-Vis-NIR spectral studies of BDNB show that the crystal is highly transparent in the entire visible region. Chemically and magnetically equivalent protons in BDNB were identified by the 1H NMR technique. The carbon framework of the molecule was established by 13C NMR spectroscopy. The proton transfer mechanism was confirmed by the presence of the N+H group in BDNB using the FT-IR spectroscopic technique. TG/DTA analyses confirmed that the crystal is stable up to 172 °C. Single crystal XRD analysis was carried out to ascertain the molecular structure; the crystal belongs to the monoclinic system with space group P21/c. Computational studies that include optimization of the molecular geometry, natural bond analysis, Mulliken population analysis and HOMO-LUMO analysis were performed using the B3LYP method at the 6-31G level. The low HOMO-LUMO energy gap of BDNB confirms the high reactivity of BDNB. Hirshfeld analysis reveals that O⋯H/H⋯O interactions are the prominent interactions. Theoretical calculations indicate that the first-order hyperpolarizability is 16 times greater than that of urea. The results show that BDNB may be used for opto-electronic applications. The antimicrobial and antioxidant analyses show that as the concentration of the compound increases, the inhibition activity also increases.
Al-Ruqaie, I.; Al-Khalifah, N.S.; Shanavaskhan, A.E.
2015-01-01
Varietal identification of olives is an intricate and empirical exercise owing to the large number of synonyms and homonyms, intensive exchange of genotypes, presence of varietal clones and lack of proper certification in nurseries. A comparative study of morphological characters of eight olive cultivars grown in Saudi Arabia was carried out and analyzed using the NTSYSpc (Numerical Taxonomy System for personal computer) system, which segregated smaller fruits into one clade and the rest into two clades. Koroneiki, a Greek cultivar with small fruit, shared an arm with the Spanish variety Arbosana. Morphologic analysis using NTSYSpc revealed that the biometrics of leaves, fruits and seeds are reliable morphologic characters to distinguish between varieties, except for a few morphologically very similar olive cultivars. The proximate analysis showed significant variations in the protein, fiber, crude fat, ash and moisture content of different cultivars. The study also showed that neither the size of fruit nor the fruit pulp thickness is a limiting factor determining the crude fat content of olives. PMID:26858547
Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy
NASA Astrophysics Data System (ADS)
Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.
2017-10-01
Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
Existing real-time monitoring systems for multiple physiological parameters suffer from insufficient server capacity for physiological data storage and analysis, so that data consistency cannot be guaranteed, from poor real-time performance, and from other issues caused by the growing scale of data. We therefore proposed a new solution based on cloud computing, in which multiple physiological parameters are stored and processed on a clustered back-end. Through our studies, a batch-processing approach for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer of the cloud platform, the construction of a real-time computing platform at the PaaS layer, the reception and analysis of data streams at the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage and analysis of large volumes of physiological data. The simulation test results showed that the remote multi-parameter physiological monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solves the problems of long turnaround time, poor real-time analysis performance and lack of extensibility that exist in traditional remote medical services, and provides technical support for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving toward home health monitoring of multiple physiological parameters.
Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John
2018-01-19
A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit by this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We show DNA circuits to compute functions sqrt(x), ln(x) and exp(x) for x in tunable ranges with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
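The concentration-to-time encoding can be illustrated with a toy calculation: a species produced autocatalytically grows roughly exponentially, so the time at which it crosses a fixed threshold is proportional to the logarithm of its initial concentration. The sketch below is not the paper's chemical reaction network; the rate constant, threshold, and Euler integration are arbitrary choices used only to show the log-in-time behaviour.

```python
# Toy illustration: an autocatalytic species growing as dx/dt = k*x reaches a
# fixed threshold at t* = ln(threshold/x0)/k, so the threshold-crossing time
# encodes log(x0) -- the "slide rule" idea of converting concentration to time.
# Rate k and threshold are arbitrary placeholders.
import numpy as np

k, threshold = 1.0, 1.0

def crossing_time(x0, dt=1e-4):
    x, t = x0, 0.0
    while x < threshold:
        x += k * x * dt          # forward Euler for dx/dt = k*x
        t += dt
    return t

for x0 in [0.01, 0.1, 0.5]:
    t_star = crossing_time(x0)
    print(f"x0={x0:5.2f}  t*={t_star:.3f}  ln(threshold/x0)/k={np.log(threshold/x0)/k:.3f}")
```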
2017-01-01
While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance. PMID:29023441
Impedance computations and beam-based measurements: A problem of discrepancy
Smaluk, Victor
2018-04-21
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing
Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang
2017-01-01
With the rapid development of big data and Internet of things (IOT), the number of networking devices and data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for it. Finally, analysis and simulation results show that our scheme is both secure and highly efficient. PMID:28737733
Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud.
Cianfrocco, Michael A; Leschziner, Andres E
2015-05-08
The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM.
Agent-based model to rural urban migration analysis
NASA Astrophysics Data System (ADS)
Silveira, Jaylson J.; Espíndola, Aquino L.; Penna, T. J. P.
2006-05-01
In this paper, we analyze the rural-urban migration phenomenon as it is usually observed in economies which are in the early stages of industrialization. The analysis is conducted by means of a statistical mechanics approach which builds a computational agent-based model. Agents are placed on a lattice and the connections among them are described via an Ising-like model. Simulations on this computational model show some emergent properties that are common in developing economies, such as a transitional dynamics characterized by continuous growth of urban population, followed by the equalization of expected wages between rural and urban sectors (Harris-Todaro equilibrium condition), urban concentration and increasing of per capita income.
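A schematic version of such an Ising-like migration model is sketched below; the coupling, "temperature", wage values, and employment-probability rule are illustrative assumptions, not the authors' exact specification.

```python
# Schematic Ising-like agent model of rural-urban migration (not the authors'
# exact model; parameters are arbitrary). Each lattice site s_i = +1 (urban)
# or -1 (rural); agents update their choice based on neighbors and on the
# expected urban-rural wage gap, which falls as the urban sector fills up.
import numpy as np

rng = np.random.default_rng(1)
L, steps = 32, 20000
J, beta = 1.0, 0.5                    # neighbor coupling, inverse "temperature"
spins = -np.ones((L, L))              # start fully rural

def expected_wage_gap(urban_fraction):
    # Urban wage discounted by employment probability (Harris-Todaro flavour).
    urban_wage, rural_wage = 2.0, 1.0
    employment_prob = max(1e-3, 1.0 - urban_fraction)
    return urban_wage * employment_prob - rural_wage

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    neighbors = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    field = J * neighbors + expected_wage_gap((spins > 0).mean())
    # Glauber-like update: probability of choosing the urban sector.
    p_urban = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    spins[i, j] = 1 if rng.random() < p_urban else -1

print("final urban population share:", (spins > 0).mean())
```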
Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Hixon, Ray
2015-01-01
The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.
Fast focus estimation using frequency analysis in digital holography.
Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung
2014-11-17
A novel fast frequency-based method to estimate the focus distance of digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed-rays. The smoothed-rays are determined by the directions of energy flow which are computed from local spatial frequency spectrum based on the windowed Fourier transform. So our method uses only the intrinsic frequency information of the optical field on the hologram and therefore does not require any sequential numerical reconstructions and focus detection techniques of conventional photography, both of which are the essential parts in previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.
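The windowed-Fourier step can be illustrated on a one-dimensional synthetic fringe signal: sliding a window along the signal and picking the dominant FFT bin recovers a local spatial frequency that varies with position. This sketch covers only that step, not the smoothed-ray intersection analysis of the paper.

```python
# Sketch of the local-spatial-frequency idea: slide a window over a fringe-like
# signal, take the FFT in each window, and record the dominant frequency.
# The chirp test signal is synthetic.
import numpy as np

n = 1024
x = np.arange(n)
signal = np.cos(2 * np.pi * (0.01 * x + 0.00005 * x**2))  # frequency grows with x

win = 128
local_freq = []
for start in range(0, n - win, win // 2):
    seg = signal[start:start + win] * np.hanning(win)      # windowed segment
    spectrum = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(win, d=1.0)
    local_freq.append(freqs[np.argmax(spectrum[1:]) + 1])   # skip the DC bin

print("local frequency rises along the signal:", np.round(local_freq, 4))
```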
An Expert Assistant for Computer Aided Parallelization
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
The prototype implementation of an expert system was developed to assist the user in the computer aided parallelization process. The system interfaces to tools for automatic parallelization and performance analysis. By fusing static program structure information and dynamic performance analysis data the expert system can help the user to filter, correlate, and interpret the data gathered by the existing tools. Sections of the code that show poor performance and require further attention are rapidly identified and suggestions for improvements are presented to the user. In this paper we describe the components of the expert system and discuss its interface to the existing tools. We present a case study to demonstrate the successful use in full scale scientific applications.
Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.
2001-01-01
A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.
DOT National Transportation Integrated Search
2013-08-01
This research aimed to evaluate the data requirements for computer-assisted construction planning and staging methods that can be implemented in pavement rehabilitation projects in the state of Georgia. Results showed that two main issues for the...
A Computer-Aided Distinction Method of Borderline Grades of Oral Cancer
NASA Astrophysics Data System (ADS)
Sami, Mustafa M.; Saito, Masahisa; Muramatsu, Shogo; Kikuchi, Hisakazu; Saku, Takashi
We have developed a new computer-aided diagnostic system for differentiating oral borderline malignancies in hematoxylin-eosin stained microscopic images. Epithelial dysplasia and carcinoma in-situ (CIS) of oral mucosa are two different borderline grades similar to each other, and it is difficult to distinguish between them. A new image processing and analysis method has been applied to a variety of histopathological features and shows the possibility for differentiating the oral cancer borderline grades automatically. The method is based on comparing the drop-shape similarity level in a particular manually selected pair of neighboring rete ridges. It was found that the considered similarity level in dysplasia was higher than those in epithelial CIS, of which pathological diagnoses were conventionally made by pathologists. The developed image processing method showed a good promise for the computer-aided pathological assessment of oral borderline malignancy differentiation in clinical practice.
Children's strategies to solving additive inverse problems: a preliminary analysis
NASA Astrophysics Data System (ADS)
Ding, Meixia; Auxter, Abbey E.
2017-03-01
Prior studies show that elementary school children generally "lack" formal understanding of inverse relations. This study goes beyond lack to explore what children might "have" in their existing conception. A total of 281 students, kindergarten to third grade, were recruited to respond to a questionnaire that involved both contextual and non-contextual tasks on inverse relations, requiring both computational and explanatory skills. Results showed that children demonstrated better performance in computation than explanation. However, many students' explanations indicated that they did not necessarily utilize inverse relations for computation. Rather, they appeared to possess partial understanding, as evidenced by their use of part-whole structure, which is a key to understanding inverse relations. A close inspection of children's solution strategies further revealed that the sophistication of children's conception of part-whole structure varied in representation use and unknown quantity recognition, which suggests rich opportunities to develop students' understanding of inverse relations in lower elementary classrooms.
NASA Astrophysics Data System (ADS)
Nataraj, A.; Balachandran, V.; Karthick, T.
2012-08-01
The Fourier transform infrared (FT-IR) and FT-Raman spectra of 3-nitro-p-toluic acid (NTA) have been recorded and analyzed. The equilibrium geometry, bonding features and harmonic vibrational frequencies have been investigated with the help of ab initio and density functional theory (DFT) methods. The assignments of the vibrational spectra have been carried out with the help of normal coordinate analysis (NCA) following the scaled quantum mechanical force field (SQMFF) methodology. The optimized bond lengths and bond angles obtained by computation show good agreement with experimental data of the related compound. The computed dimer parameters also show good agreement with experimental data. The first hyperpolarizability (β0) of this novel molecular system and related properties (β, α0, and Δα) of NTA are calculated using the B3LYP/6-311++G(d,p) method within the finite-field approach. Stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The results show that the change in electron density (ED) in the σ* and π* antibonding orbitals and the second-order delocalization energies E(2) confirm the occurrence of intramolecular charge transfer (ICT) within the molecule. The calculated HOMO and LUMO energies also show that charge transfer occurs within the molecule. Finally, the calculated results were used to simulate infrared and Raman spectra of the title compound, which show good agreement with the observed spectra.
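For reference, the quantities α0 and β0 quoted in such DFT hyperpolarizability studies are usually defined from the Cartesian tensor components as follows; the paper's exact convention is assumed, not verified, to match this standard form.

```latex
\alpha_{0} \;=\; \tfrac{1}{3}\left(\alpha_{xx} + \alpha_{yy} + \alpha_{zz}\right),
\qquad
\beta_{0} \;=\; \left[
(\beta_{xxx}+\beta_{xyy}+\beta_{xzz})^{2}
+ (\beta_{yyy}+\beta_{yxx}+\beta_{yzz})^{2}
+ (\beta_{zzz}+\beta_{zxx}+\beta_{zyy})^{2}
\right]^{1/2}
```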
Castelló Castañeda, Coral; Ríos Santos, Jose Vicente; Bullón, Pedro
2008-01-01
Dentists are currently required to make multiple diagnoses and treatment decisions every day, and the information necessary to do this satisfactorily doubles in volume every five years. Knowledge therefore rapidly becomes out of date, so that it is often impossible to remember established information and assimilate new concepts. This may result in a significant lack of knowledge in the future, which would jeopardize the success of treatments. To remedy and prevent this situation, we now have access to modern computing systems, with extensive databases, which help us retain the information necessary for daily practice and access it instantaneously. The objectives of this study were therefore to determine how widespread the use of computing is in this environment and to determine the opinion of students and qualified dentists regarding its use in Dentistry. Ninety people were chosen to take part in the study, divided into three groups: students, newly qualified dentists, and experts. A high percentage (93.30%) were shown to use a computer, but their level of computing knowledge was predominantly moderate. The place where a computer is used most is the home, which suggests that the majority own a computer. Analysis of the results obtained for the evaluation of computers in teaching showed that the participants thought that computing saved a great deal of time, had great potential for providing an image (in terms of marketing), and was a very innovative and stimulating tool.
Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.
Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos
2016-05-05
Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, but they may require different ML algorithms for segmentation.
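A minimal sketch of this three-classifier comparison is given below, using scikit-learn stand-ins for the kNN, Naive Bayes, and SVM steps on synthetic feature vectors; the real system works on segmented image features, which are not reproduced here.

```python
# Minimal sketch of the kNN / Naive Bayes / SVM comparison described above.
# Features are synthetic placeholders for per-pixel or per-region descriptors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "NaiveBayes": GaussianNB(),
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0),
}

for name, model in models.items():
    # The normalisation step mirrors the data-normalisation mentioned in the abstract.
    pipeline = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```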
Computational and Experimental Analysis of the Secretome of Methylococcus capsulatus (Bath)
Indrelid, Stine; Mathiesen, Geir; Jacobsen, Morten; Lea, Tor; Kleiveland, Charlotte R.
2014-01-01
The Gram-negative methanotroph Methylococcus capsulatus (Bath) was recently demonstrated to abrogate inflammation in a murine model of inflammatory bowel disease, suggesting interactions with cells involved in maintaining mucosal homeostasis and emphasizing the importance of understanding the many properties of M. capsulatus. Secreted proteins determine how bacteria may interact with their environment, and a comprehensive knowledge of such proteins is therefore vital to understand bacterial physiology and behavior. The aim of this study was to systematically analyze protein secretion in M. capsulatus (Bath) by identifying the secretion systems present and the respective secreted substrates. Computational analysis revealed that in addition to previously recognized type II secretion systems and a type VII secretion system, a type Vb (two-partner) secretion system and putative type I secretion systems are present in M. capsulatus (Bath). In silico analysis suggests that the diverse secretion systems in M. capsulatus transport proteins likely to be involved in adhesion, colonization, nutrient acquisition and homeostasis maintenance. The results of the computational analysis were verified and extended by an experimental approach showing that, in addition, an uncharacterized protein and putative moonlighting proteins are released to the medium during exponential growth of M. capsulatus (Bath). PMID:25479164
Computational Fatigue Life Analysis of Carbon Fiber Laminate
NASA Astrophysics Data System (ADS)
Shastry, Shrimukhi G.; Chandrashekara, C. V., Dr.
2018-02-01
Many traditional materials are being replaced by composite materials for their light weight and high strength. Industries such as the automotive and aerospace industries use composite materials for many of their components. Replacing components subjected to static or impact loads is less challenging than replacing components subjected to dynamic loading, and replacement with composite materials demands several stages of parametric study. One such parametric study is the fatigue analysis of the composite material. This paper focuses on the fatigue life analysis of a composite material using computational techniques. A composite plate with a hole at its center is considered for the study. The analysis is carried out on a (0°/90°/90°/90°/90°)s laminate sequence and a (45°/-45°)2s laminate sequence using a computer script, and the life cycles for the two lay-up sequences are compared. It is observed that, for the same material and geometry of the component, cross-ply laminates show better fatigue life than angle-ply laminates.
From macro-scale to micro-scale computational anatomy: a perspective on the next 20 years.
Mori, Kensaku
2016-10-01
This paper gives our perspective on the next two decades of computational anatomy, which has made great strides in the recognition and understanding of human anatomy from conventional clinical images. The results from this field are now used in a variety of medical applications, including quantitative analysis of organ shapes, interventional assistance, surgical navigation, and population analysis. Several anatomical models have also been used in computational anatomy, and these mainly target millimeter-scale shapes. For example, liver-shape models are almost completely modeled at the millimeter scale, and shape variations are described at such scales. Most clinical 3D scanning devices have had just under 1 or 0.5 mm per voxel resolution for over 25 years, and this resolution has not changed drastically in that time. Although Z-axis (head-to-tail direction) resolution has been drastically improved by the introduction of multi-detector CT scanning devices, in-plane resolutions have not changed very much either. When we look at human anatomy, we can see different anatomical structures at different scales. For example, pulmonary blood vessels and lung lobes can be observed in millimeter-scale images. If we take 10-µm-scale images of a lung specimen, the alveoli and bronchiole regions can be located in them. Most work in millimeter-scale computational anatomy has been done by the medical-image analysis community. In the next two decades, we encourage our community to focus on micro-scale computational anatomy. In this perspective paper, we briefly review the achievements of computational anatomy and its impacts on clinical applications; furthermore, we show several possibilities from the viewpoint of microscopic computational anatomy by discussing experimental results from our recent research activities. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Casartelli, E.; Mangani, L.; Ryan, O.; Schmid, A.
2016-11-01
CFD has been part of the product development process for hydraulic machines for more than three decades. Besides the actual design process, in which the most appropriate geometry for a certain task is iteratively sought, several steady-state simulations and related analyses are performed with the help of CFD. Basic transient CFD analysis is becoming routine for rotor-stator interaction assessment, but in general unsteady CFD is still not standard due to the large computational effort. Especially for FSI simulations, where mesh motion is involved, a considerable amount of computational time is needed for mesh handling and deformation as well as for resolving the related unsteady flow field. Therefore this kind of CFD computation is still unusual and mostly performed during trouble-shooting analysis rather than in the standard development process, i.e. in order to understand what went wrong instead of preventing failure or, even better, increasing the available knowledge. In this paper the application of an efficient and particularly robust algorithm for fast computations with moving mesh is presented for the analysis of transient effects encountered during highly dynamic procedures in the operation of a pump-turbine, such as runaway at fixed GV position and load rejection with GV motion imposed as one-way FSI. In both cases the computations extend through the S-shape of the machine into the turbine-brake and reverse-pump domains, showing that such exotic computations can be performed on a more regular basis, even if they remain quite time consuming. Besides the presentation of the procedure and global results, some highlights of the encountered flow physics are also given.
Chai, Zhenhua; Zhao, T S
2014-07-01
In this paper, we propose a local nonequilibrium scheme for computing the flux of the convection-diffusion equation with a source term in the framework of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). Both the Chapman-Enskog analysis and the numerical results show that, at the diffusive scaling, the present nonequilibrium scheme has a second-order convergence rate in space. A comparison between the nonequilibrium scheme and the conventional second-order central-difference scheme indicates that, although both schemes have a second-order convergence rate in space, the present nonequilibrium scheme is more accurate than the central-difference scheme. In addition, the flux computation rendered by the present scheme also preserves the parallel computation feature of the LBM, making the scheme more efficient than conventional finite-difference schemes in the study of large-scale problems. Finally, a comparison between the single-relaxation-time model and the MRT model is also conducted, and the results show that the MRT model is more accurate than the single-relaxation-time model, both in solving the convection-diffusion equation and in computing the flux.
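The second-order spatial convergence claim is the kind of statement typically verified by computing errors on successively refined grids and estimating the observed order; a generic sketch of that check is shown below, with placeholder error values rather than results from the paper.

```python
# Generic check of spatial convergence order (not tied to the LBM scheme itself):
# given errors on two grids, the observed order is
#   p = log(E_coarse / E_fine) / log(h_coarse / h_fine).
# The error values below are illustrative placeholders.
import math

h_coarse, h_fine = 1.0 / 32, 1.0 / 64
err_coarse, err_fine = 2.1e-4, 5.3e-5   # e.g. L2 errors of the computed flux

order = math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)
print(f"observed convergence order ~ {order:.2f}")   # close to 2 for a second-order scheme
```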
Digital model analysis of the principal artesian aquifer, Savannah, Georgia area
Counts, H.B.; Krause, R.E.
1977-01-01
A digital model of the principal artesian aquifer has been developed for the Savannah, Georgia, area. The model simulates the response of the aquifer system to various hydrologic stresses. Modeled water levels and water-level changes are shown on maps. Computations may be extended in time, so that changes in pumpage can be applied to the system and the probable results calculated. Drawdown or water-level differences were computed to compare different water-management alternatives. (Woodard-USGS)
Measuring Technological and Content Knowledge of Undergraduate Primary Teachers in Mathematics
NASA Astrophysics Data System (ADS)
Doukakis, Spyros; Chionidou-Moskofoglou, Maria; Mangina-Phelan, Eleni; Roussos, Petros
Twenty-five final-year undergraduate students of primary education who were attending a course on mathematics education participated in a research project during the 2009 spring semester. A repeated measures experimental design was used. Quantitative data on students' computer attitudes, self-efficacy in ICT, attitudes toward educational software, and self-efficacy in maths were collected. Data analysis showed a statistically non-significant improvement on participants' computer attitudes and self-efficacy in ICT and ES, but a significant improvement of self-efficacy in mathematics.
Model studies of laser absorption computed tomography for remote air pollution measurement
NASA Technical Reports Server (NTRS)
Wolfe, D. C., Jr.; Byer, R. L.
1982-01-01
Model studies of the potential of laser absorption-computed tomography are presented which demonstrate the possibility of sensitive remote atmospheric pollutant measurements, over kilometer-sized areas, with two-dimensional resolution, at modest laser source powers. An analysis of this tomographic reconstruction process as a function of measurement SNR, laser power, range, and system geometry, shows that the system is able to yield two-dimensional maps of pollutant concentrations at ranges and resolutions superior to those attainable with existing, direct-detection laser radars.
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
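The static dependence analysis for the triangular solve amounts to level scheduling: each row of L depends only on earlier rows where it has nonzeros, and all rows assigned to the same level can be processed in parallel. A minimal sketch of that analysis follows, using a toy sparsity pattern rather than the original Sequent Balance implementation.

```python
# Sketch of static dependence analysis ("level scheduling") for a sparse
# lower-triangular solve: row i depends on every row j < i with L[i][j] != 0,
# and rows within one level can be solved concurrently. Illustration only.
def level_schedule(rows):
    """rows[i] lists the column indices j < i where L[i][j] is nonzero."""
    levels = []
    level_of = [0] * len(rows)
    for i, deps in enumerate(rows):
        lvl = 1 + max((level_of[j] for j in deps), default=-1)
        level_of[i] = lvl
        while len(levels) <= lvl:
            levels.append([])
        levels[lvl].append(i)
    return levels

# Toy lower-triangular sparsity pattern (diagonal entries implicit):
pattern = {0: [], 1: [0], 2: [], 3: [1, 2], 4: [0], 5: [3]}
rows = [pattern[i] for i in range(len(pattern))]
for lvl, rows_in_level in enumerate(level_schedule(rows)):
    print(f"level {lvl}: rows {rows_in_level} can be solved concurrently")
```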
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
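For context, the classic (unmodified) Uzawa iteration for the saddle-point systems that arise from incompressible elasticity alternates a primal solve with a Lagrange-multiplier update. The sketch below uses a small synthetic dense system and a step size chosen from the Schur complement; it is not any of the specific variants compared in the paper.

```python
# Classic Uzawa iteration for the saddle-point system
#   [A  B^T][u]   [f]
#   [B  0  ][p] = [g],
# as arises in incompressible elasticity. The small dense system is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
A = np.diag(rng.uniform(1.0, 3.0, n))          # SPD "stiffness" block
B = rng.standard_normal((m, n))
f = rng.standard_normal(n)
g = rng.standard_normal(m)

S = B @ np.linalg.solve(A, B.T)                # Schur complement (small here)
rho = 1.0 / np.linalg.eigvalsh(S).max()        # safe step size (< 2 / lambda_max)

u, p = np.zeros(n), np.zeros(m)
for _ in range(500):
    u = np.linalg.solve(A, f - B.T @ p)        # displacement update
    p = p + rho * (B @ u - g)                  # pressure (multiplier) update

residual = np.linalg.norm(np.concatenate([A @ u + B.T @ p - f, B @ u - g]))
print("saddle-point residual:", residual)
```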
Computational modeling and analysis of thermoelectric properties of nanoporous silicon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Yu, Y.; Li, G., E-mail: gli@clemson.edu
2014-03-28
In this paper, thermoelectric properties of nanoporous silicon are modeled and studied by using a computational approach. The computational approach combines a quantum non-equilibrium Green's function (NEGF) coupled with the Poisson equation for electrical transport analysis, a phonon Boltzmann transport equation (BTE) for phonon thermal transport analysis, and the Wiedemann-Franz law for calculating the electronic thermal conductivity. By solving the NEGF/Poisson equations self-consistently using a finite difference method, the electrical conductivity σ and Seebeck coefficient S of the material are numerically computed. The BTE is solved by using a finite volume method to obtain the phonon thermal conductivity k_p, and the Wiedemann-Franz law is used to obtain the electronic thermal conductivity k_e. The figure of merit of nanoporous silicon is calculated by ZT = S^2 σT/(k_p + k_e). The effects of doping density, porosity, temperature, and nanopore size on thermoelectric properties of nanoporous silicon are investigated. It is confirmed that nanoporous silicon has significantly higher thermoelectric energy conversion efficiency than its nonporous counterpart. Specifically, this study shows that, with an n-type doping density of 10^20 cm^-3, a porosity of 36% and a nanopore size of 3 nm × 3 nm, the figure of merit ZT can reach 0.32 at 600 K. The results also show that the degradation of the electrical conductivity of nanoporous Si due to the inclusion of nanopores is compensated by the large reduction in the phonon thermal conductivity and the increase in the absolute value of the Seebeck coefficient, resulting in a significantly improved ZT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Gavin Matthew; Bettencourt, Matthew Tyler; Bova, Steven W.
2015-09-01
This report provides in-depth information and analysis to help create a technical road map for developing next-generation programming models and runtime systems that support Advanced Simulation and Computing (ASC) workload requirements. The focus herein is on asynchronous many-task (AMT) models and runtime systems, which are of great interest in the context of exascale computing, as they hold the promise to address key issues associated with future extreme-scale computer architectures. This report includes a thorough qualitative and quantitative examination of three best-of-class AMT runtime systems (Charm++, Legion, and Uintah), all of which are in use as part of the Predictive Science Academic Alliance Program II (PSAAP-II) Centers. The studies focus on each of the runtimes' programmability, performance, and mutability. Through the experiments and analysis presented, several overarching findings emerge. From a performance perspective, AMT runtimes show tremendous potential for addressing extreme-scale challenges. Empirical studies show an AMT runtime can mitigate performance heterogeneity inherent to the machine itself and that Message Passing Interface (MPI) and AMT runtimes perform comparably under balanced conditions. From a programmability and mutability perspective, however, none of the runtimes in this study are currently ready for use in developing production-ready Sandia ASC applications. The report concludes by recommending a co-design path forward, wherein application, programming model, and runtime system developers work together to define requirements and solutions. Such a requirements-driven co-design approach benefits the community as a whole, with widespread community engagement mitigating risk for both application developers and high-performance computing runtime system developers.
Lee, Ki-Sun; Shin, Sang-Wan; Lee, Sang-Pyo; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Jeong-Yol
The purpose of this pilot study was to evaluate and compare polyetherketoneketone (PEKK) with different framework materials for implant-supported prostheses by means of a three-dimensional finite element analysis (3D-FEA) based on cone beam computed tomography (CBCT) and computer-aided design (CAD) data. A geometric model that consisted of four maxillary implants supporting a prosthesis framework was constructed from CBCT and CAD data of a treated patient. Three different materials (zirconia, titanium, and PEKK) were selected, and their material properties were simulated using FEA software in the generated geometric model. In the PEKK framework (ie, low elastic modulus) group, the stress transferred to the implant and simulated adjacent tissue was reduced when compressive stress was dominant, but increased when tensile stress was dominant. This study suggests that the shock-absorbing effects of a resilient implant-supported framework are limited in some areas and that rigid framework material shows a favorable stress distribution and safety of overall components of the prosthesis.
Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2011-01-01
This slide presentation reviews the use of "soft computing", which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle ambiguous real-life situations and to achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are: real-time (RT) anomaly detection, real-time (RT) moving debris detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.
Interaction entropy for protein-protein binding
NASA Astrophysics Data System (ADS)
Sun, Zhaoxi; Yan, Yu N.; Yang, Maoyou; Zhang, John Z. H.
2017-03-01
Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machine in biology. The highly specific protein-protein binding is quantitatively characterized by the binding free energy whose accurate calculation from the first principle is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. Explicit theoretical derivation of the interaction entropy approach for protein-protein interaction system is given in detail from the basic definition. Extensive computational studies for a dozen realistic protein-protein interaction systems are carried out using the present approach and comparisons of the results for these protein-protein systems with those from the standard normal mode method are presented. Analysis of the present method for application in protein-protein binding as well as the limitation of the method in numerical computation is discussed. Our study and analysis of the results provided useful information for extracting correct entropic contribution in protein-protein binding from molecular dynamics simulations.
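The central relation of the interaction entropy approach, as it is usually written, expresses the entropic contribution directly as an ensemble average of interaction-energy fluctuations over the MD trajectory; the standard published form is reproduced below and is assumed here to match the authors' usage.

```latex
-T\Delta S \;=\; k_{B}T \,\ln \left\langle e^{\,\beta\,\Delta E_{\mathrm{int}}}\right\rangle,
\qquad
\Delta E_{\mathrm{int}} = E_{\mathrm{int}} - \langle E_{\mathrm{int}}\rangle,
\qquad
\beta = \frac{1}{k_{B}T},
```

where the angle brackets denote an average over the molecular dynamics trajectory of the complex.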
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation-solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and to exploit modern computer architecture, such as memory and parallel computing. The most time-consuming part of the formulation is identified, and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate the formulation on ORIGIN 2000 processors and on a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments and that it is significantly faster and consumes less memory than a formulation based on one of the best available commercial parallel sparse solvers.
Computer-aided design of the human aortic root.
Ovcharenko, E A; Klyshnikov, K U; Vlad, A R; Sizova, I N; Kokov, A N; Nushtaev, D V; Yuzhalin, A E; Zhuravleva, I U
2014-11-01
The development of computer-based 3D models of the aortic root is one of the most important problems in constructing the prostheses for transcatheter aortic valve implantation. In the current study, we analyzed data from 117 patients with and without aortic valve disease and computed tomography data from 20 patients without aortic valvular diseases in order to estimate the average values of the diameter of the aortic annulus and other aortic root parameters. Based on these data, we developed a 3D model of human aortic root with unique geometry. Furthermore, in this study we show that by applying different material properties to the aortic annulus zone in our model, we can significantly improve the quality of the results of finite element analysis. To summarize, here we present four 3D models of human aortic root with unique geometry based on computational analysis of ECHO and CT data. We suggest that our models can be utilized for the development of better prostheses for transcatheter aortic valve implantation. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.
2017-01-01
Within the framework of research into the possibility and prospects of power density equalization in boiling water reactors (as exemplified by the WB-50), work was undertaken to improve the prior computational model of the WB-50 reactor implemented in the MCU-RR software. Analysis of prior work showed that critical-state calculations have a deviation of calculated reactivity exceeding ±0.3 % (ΔKef/Kef) for minimum concentrations of boric acid in the reactor water, reaching 2 % for maximum concentration values. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor; thus, the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7 % (ΔKef/Kef) and a dispersion of design values of ±1 % (ΔK/K) are deemed acceptable. Further lowering of these parameters apparently requires root cause analysis of such large values and paying more attention to experimental measurement techniques.
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
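For context, the single-trait mixed model underlying such association tests is usually written in the following standard form, which is assumed (not verified) to match the model used in omicABEL:

```latex
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{g} + \boldsymbol{\varepsilon},
\qquad
\mathbf{g} \sim \mathcal{N}\!\left(\mathbf{0},\, \sigma_{g}^{2}\mathbf{K}\right),
\qquad
\boldsymbol{\varepsilon} \sim \mathcal{N}\!\left(\mathbf{0},\, \sigma_{e}^{2}\mathbf{I}\right),
```

where X contains the tested variant and covariates and K is the kinship (genomic relationship) matrix that accounts for population structure.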
Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure
NASA Astrophysics Data System (ADS)
Szafran, J.; Juszczyk, K.; Kamiński, M.
2017-12-01
The main goal of this paper is to present a coupled Computational Fluid Dynamics and structural analysis for the precise determination of wind impact on internal forces and deformations of structural elements of a long-span roof structure. The Finite Volume Method (FVM) is used to solve the fluid flow problem and model the air flow around the structure; its results are applied in turn as boundary tractions in the Finite Element Method (FEM) structural solution for linear elastostatics with small deformations. The first part is carried out with the use of the ANSYS 15.0 computer system, whereas the FEM system Robot supports the stress analysis in particular roof members. A comparison of the wind pressure distribution over the roof surface shows some differences with respect to that given in engineering design codes such as Eurocode, which deserves separate further numerical study. Coupling these two separate numerical techniques appears promising in view of future computational models of a stochastic nature in large-scale structural systems using the stochastic perturbation method.
Roles for Agent Assistants in Field Science: Understanding Personal Projects and Collaboration
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of the people interacting with each other. The primary concern is not how people can interact with computers, but how shall we design work systems (facilities, tools, roles, and procedures) to help people pursue their personal projects, as they work independently and collaboratively? Two case studies provide empirical requirements. First, an analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse. Second, an analysis of biologists and a geologist working at Haughton Crater in the High Canadian Arctic reveals how work interactions between people involve independent personal projects, sensitively coordinated for mutual benefit. In both cases, an agent or robotic system's role would be to assist people, rather than collaborating, because today's computer systems lack the identity and purpose that consciousness provides.
A Computational Approach for Probabilistic Analysis of LS-DYNA Water Impact Simulations
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.
2010-01-01
NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. Because of the computational cost, these tools are often used to evaluate specific conditions and are rarely used for statistical analysis. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to interpolate solutions at a fraction of the computational time. For this problem, response surface models are used to predict the system time responses to a water landing as a function of capsule speed, direction, attitude, water speed, and water direction. Furthermore, these models can also be used to ascertain the adequacy of the design in terms of probability measures. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, the equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
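The response-surface idea can be sketched generically: fit a low-order polynomial surrogate to a handful of expensive simulation runs and then interpolate it cheaply at new conditions. The design points, coefficients, and "peak acceleration" response below are synthetic placeholders, not LS-DYNA results or the paper's actual surrogate.

```python
# Sketch of a quadratic response surface fitted to a few expensive runs,
# then evaluated cheaply at a new condition. All values are synthetic.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
# design points: (capsule speed, pitch attitude) -- two of the five inputs
# mentioned above, kept to two dimensions for brevity (normalized to [-1, 1])
X = rng.uniform([-1, -1], [1, 1], size=(20, 2))
peak_accel = (5 + 2 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 0] * X[:, 1]
              + 0.1 * rng.standard_normal(20))

def quad_features(X):
    """Constant, linear, and quadratic terms of a 2nd-order response surface."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_features(X), peak_accel, rcond=None)

# Cheap interpolation at a new condition, in place of another full simulation.
x_new = np.array([[0.3, -0.2]])
print("surrogate prediction:", quad_features(x_new) @ coef)
```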
Re-Entry Aeroheating Analysis of Tile-Repair Augers for the Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Mazaheri, Ali R.; Wood, William A.
2007-01-01
Computational re-entry aerothermodynamic analysis of the Space Shuttle Orbiter's tile overlay repair (TOR) sub-assembly is presented. Entry aeroheating analyses are conducted to characterize the aerothermodynamic environment of the TOR and to provide necessary inputs for future TOR thermal and structural analyses. The TOR sub-assembly consists of a thin plate and several augers and spacers that serve as the TOR fasteners. For the computational analysis, the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) is used. A 5-species non-equilibrium chemistry model with a finite-rate catalytic recombination model and a radiation equilibrium wall condition are used. It is assumed that the wall properties are the same as reaction-cured glass (RCG) properties, with a surface emissivity of ε = 0.89. Surface heat transfer rates for the TOR and tile repair augers (TRA) are computed at an STS-107 trajectory point corresponding to Mach 18 free-stream conditions. Computational results show that the average heating bump factor (BF), the ratio of the local heat transfer rate to that at a design reference point located at the damage site, is about 1.9 for the auger head alone and about 2.0 for the combined auger and washer heads.
NASA Astrophysics Data System (ADS)
Thomas, Siby; Ajith, K. M.; Valsakumar, M. C.
2017-06-01
The major objective of this work is to present the results of a classical molecular dynamics study investigating the effect of changing the cut-off distance in the empirical potential on the stress-strain relation and on the temperature-dependent Young's modulus of pristine and defective hexagonal boron nitride (h-BN). As the temperature increases, the computed Young's modulus shows a significant decrease along both the armchair and zigzag directions, with a trend in keeping with the structural anisotropy of h-BN. The variation of Young's modulus with system size is elucidated. The observed mechanical strength of h-BN is significantly affected by vacancy and Stone-Wales type defects. The computed room-temperature Young's modulus of pristine h-BN is 755 GPa and 769 GPa along the armchair and zigzag directions, respectively. The decrease of Young's modulus with increasing temperature has been analyzed, and the results show that the system with a zigzag edge has a higher Young's modulus than that with an armchair edge. As the temperature increases, the computed stiffness decreases, and the system with a zigzag edge possesses a higher stiffness than its armchair counterpart, consistent with the variation of Young's modulus. The defect analysis shows that, over the studied range of defect concentrations, vacancy-type defects lead to a higher Young's modulus than Stone-Wales defects. Variations in the peak positions of the computed radial distribution function reveal the changes in the structural features of systems with zigzag and armchair edges under applied stress.
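As a side note on methodology, a Young's modulus from such simulations is typically extracted by fitting the slope of the small-strain portion of the computed stress-strain curve. The sketch below uses synthetic data anchored only to the 755 GPa value quoted above; it does not reproduce the MD results of the paper.

```python
# Sketch of extracting a Young's modulus as the slope of the small-strain
# region of a stress-strain curve. The stress values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.10, 21)
stress = 755.0 * strain + rng.normal(0.0, 0.5, strain.size)   # GPa, synthetic "MD" noise

linear = strain <= 0.03                       # restrict the fit to the elastic region
E, intercept = np.polyfit(strain[linear], stress[linear], 1)
print(f"estimated Young's modulus ~ {E:.0f} GPa")
```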
Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches
NASA Astrophysics Data System (ADS)
Duchaineau, Mark
2001-06-01
Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
Verhoef, Bram-Ernst; Bohon, Kaitlin S.
2015-01-01
Binocular disparity is a powerful depth cue for object perception. The computations for object vision culminate in inferior temporal cortex (IT), but the functional organization for disparity in IT is unknown. Here we addressed this question by measuring fMRI responses in alert monkeys to stimuli that appeared in front of (near), behind (far), or at the fixation plane. We discovered three regions that showed preferential responses for near and far stimuli, relative to zero-disparity stimuli at the fixation plane. These “near/far” disparity-biased regions were located within dorsal IT, as predicted by microelectrode studies, and on the posterior inferotemporal gyrus. In a second analysis, we instead compared responses to near stimuli with responses to far stimuli and discovered a separate network of “near” disparity-biased regions that extended along the crest of the superior temporal sulcus. We also measured in the same animals fMRI responses to faces, scenes, color, and checkerboard annuli at different visual field eccentricities. Disparity-biased regions defined in either analysis did not show a color bias, suggesting that disparity and color contribute to different computations within IT. Scene-biased regions responded preferentially to near and far stimuli (compared with stimuli without disparity) and had a peripheral visual field bias, whereas face patches had a marked near bias and a central visual field bias. These results support the idea that IT is organized by a coarse eccentricity map, and show that disparity likely contributes to computations associated with both central (face processing) and peripheral (scene processing) visual field biases, but likely does not contribute much to computations within IT that are implicated in processing color. PMID:25926470
Gopal, Sandeep; Pocock, Roger
2018-04-19
The Caenorhabditis elegans (C. elegans) germline is used to study several biologically important processes including stem cell development, apoptosis, and chromosome dynamics. While the germline is an excellent model, the analysis is often two dimensional due to the time and labor required for three-dimensional analysis. Major readouts in such studies are the number/position of nuclei and protein distribution within the germline. Here, we present a method to perform automated analysis of the germline using confocal microscopy and computational approaches to determine the number and position of nuclei in each region of the germline. Our method also analyzes germline protein distribution that enables the three-dimensional examination of protein expression in different genetic backgrounds. Further, our study shows variations in cytoskeletal architecture in distinct regions of the germline that may accommodate specific spatial developmental requirements. Finally, our method enables automated counting of the sperm in the spermatheca of each germline. Taken together, our method enables rapid and reproducible phenotypic analysis of the C. elegans germline.
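A rough sketch of the nucleus-counting step, assuming a thresholded synthetic 3D stack; the authors' actual pipeline, thresholds, and region definitions are not reproduced here.

```python
import numpy as np
from scipy import ndimage

stack = np.random.rand(30, 128, 128)          # placeholder for a confocal z-stack
binary = stack > 0.995                        # crude intensity threshold
labels, n_components = ndimage.label(binary)  # 3D connected-component labelling

# Discard tiny speckles, then report nucleus count and centroids.
sizes = ndimage.sum(binary, labels, index=range(1, n_components + 1))
keep = np.flatnonzero(sizes >= 5) + 1
centroids = ndimage.center_of_mass(binary, labels, index=keep)
print("nuclei detected:", len(keep))
```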
Finite-data-size study on practical universal blind quantum computation
NASA Astrophysics Data System (ADS)
Zhao, Qiang; Li, Qiong
2018-07-01
The universal blind quantum computation with weak coherent pulses protocol is a practical scheme that allows a client to delegate a computation to a remote server while keeping the computation hidden. However, in the practical protocol, a finite data size will influence the preparation efficiency in the remote blind qubit state preparation (RBSP). In this paper, a modified RBSP protocol with two decoy states is studied in the finite-data-size regime. The issue of its statistical fluctuations is analyzed thoroughly. The theoretical analysis and simulation results show that the two-decoy-state case with statistical fluctuation is closer to the asymptotic case than the one-decoy-state case with statistical fluctuation. In particular, the two-decoy-state protocol can achieve a longer communication distance than the one-decoy-state case in this statistical fluctuation situation.
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
Traditional methods for detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In light of this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, and improves the detection efficiency of ship targets in remote sensing images.
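A minimal bottom-up saliency sketch in the spirit of the approach described above: centre-surround differences of Gaussian-blurred intensity, thresholded to propose candidate regions. The image, filter scales, and threshold are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(512, 512)                      # placeholder intensity image

center = ndimage.gaussian_filter(image, sigma=2)
surround = ndimage.gaussian_filter(image, sigma=16)
saliency = np.abs(center - surround)
saliency /= saliency.max()                            # normalise to [0, 1]

candidates = saliency > 0.6                           # attend only to salient pixels
labels, n_regions = ndimage.label(candidates)
print("candidate target regions:", n_regions)
```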
Analysis of OpenMP performance on AMD and Intel architectures for breaking wave simulation using MPS
NASA Astrophysics Data System (ADS)
Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.
2018-03-01
A simulation of breaking waves using the Navier-Stokes equations via the moving particle semi-implicit (MPS) method over a closed domain is presented. The results show that parallel computing on a multicore architecture using the OpenMP platform can reduce the computational time to almost half of the serial time. A comparison between two computer architectures (AMD and Intel) is performed. The Intel architecture shows a better CPU time than the AMD architecture; however, the AMD architecture gives a slightly higher parallel efficiency than the Intel. For the simulation with 1512 particles, the CPU times on Intel and AMD are 12662.47 and 28282.30, respectively. Moreover, for a similar number of particles, the efficiency on AMD reaches 50.09% and on Intel 49.42%.
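The speedup and efficiency figures above follow the usual definitions (speedup = serial time / parallel time, efficiency = speedup / thread count). The sketch below reproduces that bookkeeping; the serial times and the assumption of four OpenMP threads are placeholders, since the abstract quotes only the parallel CPU times and efficiencies.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_threads):
    s = t_serial / t_parallel
    return s, s / n_threads

# Parallel CPU times for 1512 particles are quoted above; the serial times below
# and the use of 4 threads are assumptions for illustration only.
for arch, t_parallel, t_serial in [("Intel", 12662.47, 25000.0), ("AMD", 28282.30, 56700.0)]:
    s, e = speedup_and_efficiency(t_serial, t_parallel, n_threads=4)
    print(f"{arch}: speedup {s:.2f}, efficiency {100 * e:.1f}%")
```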
Capturing and analyzing wheelchair maneuvering patterns with mobile cloud computing.
Fu, Jicheng; Hao, Wei; White, Travis; Yan, Yuqing; Jones, Maria; Jan, Yih-Kuen
2013-01-01
Power wheelchairs have been widely used to provide independent mobility to people with disabilities. Despite great advancements in power wheelchair technology, research shows that wheelchair-related accidents occur frequently. To ensure safe maneuverability, capturing wheelchair maneuvering patterns is fundamental to enable other research, such as safe robotic assistance for wheelchair users. In this study, we propose to record, store, and analyze wheelchair maneuvering data by means of mobile cloud computing. Specifically, the accelerometer and gyroscope sensors in smart phones are used to record wheelchair maneuvering data in real time. Then, the recorded data are periodically transmitted to the cloud for storage and analysis. The analyzed results are then made available to various types of users, such as mobile phone users, traditional desktop users, etc. The combination of mobile computing and cloud computing leverages the advantages of both techniques and extends the smart phone's capabilities of computing and data storage via the Internet. We performed a case study to implement the mobile cloud computing framework using Android smart phones and Google App Engine, a popular cloud computing platform. Experimental results demonstrated the feasibility of the proposed mobile cloud computing framework.
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems
Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.
2014-01-01
The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
Fuchs, Lynn S.; Fuchs, Douglas; Hamlett, Carol L.; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M.
2009-01-01
The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed. PMID:20057912
Advances in computer-aided well-test interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, R.N.
1994-07-01
Despite the feeling expressed several times over the past 40 years that well-test analysis had reached its peak development, an examination of recent advances shows continuous expansion in capability, with future improvement likely. The expansion in interpretation capability over the past decade arose mainly from the development of computer-aided techniques, which, although introduced 20 years ago, have come into use only recently. The broad application of computer-aided interpretation originated with the improvement of the methodologies and continued with the expansion in computer access and capability that accompanied the explosive development of the microcomputer industry. This paper focuses on the different pieces of the methodology that combine to constitute a computer-aided interpretation and attempts to compare some of the approaches currently used. Future directions of the approach are also discussed. The separate areas discussed are deconvolution, pressure derivatives, model recognition, nonlinear regression, and confidence intervals.
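One of the ingredients listed above, the pressure derivative, is simply the derivative of the pressure change with respect to the logarithm of time, evaluated numerically along the test. The sketch below computes it for a synthetic drawdown record; it is a generic illustration, not tied to any particular interpretation package.

```python
import numpy as np

t = np.logspace(-2, 2, 200)                    # elapsed time, hours
delta_p = 10.0 * np.log(t + 1.0) + 5.0         # synthetic pressure change, psi

derivative = np.gradient(delta_p, np.log(t))   # pressure derivative with respect to ln(t)
print("late-time derivative ~", round(derivative[-5:].mean(), 2), "psi")
```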
Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.
Eddy, Sean R
2014-01-01
Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.
Is problematic internet use an indicator of eating disorders among Turkish university students?
Çelik, Çiğdem Berber; Odacı, Hatice; Bayraktar, Nihal
2015-06-01
The aim of this study was to investigate the relationship between problematic internet use and eating attitudes in a group of university students. The study sample consisted of 314 students attending programs at the faculties of education, medicine and communications at the Karadeniz Technical University in Turkey. One hundred forty-seven (46.8 %) were male and 167 (53.2 %) female. The Problematic Internet Use Scale was used to measure problematic internet use levels among university students and the Eating Attitudes Test to determine anorexia nervosa symptoms. Additionally, a Personal Data Form was used to determine age, gender, faculty attended and computer ownership. Data were analyzed on SPSS 15.00. Pearson's product-moment correlation coefficient, multiple linear regression analysis, the independent t test and one-way ANOVA were used for data analysis. The research findings showed that 46.8 % of the students were male and 53.2 % female. Mean age was 20.65 (SD 1.42). Analysis showed a significant positive correlation between problematic internet use and eating attitudes (r = 0.77, p < 0.01). Problematic internet use was found to be a significant predictor of eating attitudes. The results also showed a significant difference in problematic internet use with regard to program variables [F (2,311) = 102.79]. There were no significant differences in problematic internet use in terms of gender or computer ownership. The results of this study indicate that problematic internet use is significantly correlated with eating disorders, that problematic internet use does not vary on the basis of gender or computer ownership and that variations arise in problematic internet use depending on the faculty attended.
Logic circuits from zero forcing.
Burgarth, Daniel; Giovannetti, Vittorio; Hogben, Leslie; Severini, Simone; Young, Michael
We design logic circuits based on the notion of zero forcing on graphs; each gate of the circuits is a gadget in which zero forcing is performed. We show that such circuits can evaluate every monotone Boolean function. By using two vertices to encode each logical bit, we obtain universal computation. We also highlight a phenomenon of "back forcing" as a property of each function. Such a phenomenon occurs in a circuit when the input of gates which have been already used at a given time step is further modified by a computation actually performed at a later stage. Finally, we show that zero forcing can be also used to implement reversible computation. The model introduced here provides a potentially new tool in the analysis of Boolean functions, with particular attention to monotonicity. Moreover, in the light of applications of zero forcing in quantum mechanics, the link with Boolean functions may suggest new directions in quantum control theory and in the study of engineered quantum spin systems. It is an open technical problem to verify whether there is a link between zero forcing and computation with contact circuits.
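For readers unfamiliar with the forcing rule the gates are built on, the sketch below simulates ordinary zero forcing on a toy graph: a filled vertex with exactly one unfilled neighbour forces that neighbour. The graph and starting set are illustrative, not the gadgets from the paper.

```python
def zero_forcing(adjacency, filled):
    """Apply the zero forcing rule repeatedly and return the final filled set."""
    filled = set(filled)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            unfilled = [u for u in adjacency[v] if u not in filled]
            if len(unfilled) == 1:            # the forcing rule
                filled.add(unfilled[0])
                changed = True
    return filled

# Path graph 0-1-2-3: a single end vertex is a zero forcing set.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(zero_forcing(path, {0}) == set(path))   # True: the whole graph gets forced
```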
A comparison of the postures assumed when using laptop computers and desktop computers.
Straker, L; Jones, K J; Miller, J
1997-08-01
This study evaluated the postural implications of using a laptop computer. Laptop computer screens and keyboards are joined, and are therefore unable to be adjusted separately in terms of screen height and distance, and keyboard height and distance. The posture required for their use is likely to be constrained, as little adjustment can be made for the anthropometric differences of users. In addition to the postural constraints, the study looked at discomfort levels and performance when using laptops as compared with desktops. Statistical analysis showed significantly greater neck flexion and head tilt with laptop use. The other body angles measured (trunk, shoulder, elbow, wrist, and scapula and neck protraction/retraction) showed no statistical differences. The average discomfort experienced after using the laptop for 20 min, although appearing greater than the discomfort experienced after using the desktop, was not significantly greater. When using the laptop, subjects tended to perform better than when using the desktop, though not significantly so. Possible reasons for the results are discussed and implications of the findings outlined.
Mohammadi, Mehrnoosh; RezaeiDehaghani, Abdollah; Mehrabi, Tayebeh; RezaeiDehaghani, Ali
2016-01-01
Background: As adolescents spend much time on playing computer games, their mental and social effects should be considered. The present study aimed to investigate the association between playing computer games and the mental and social health among male adolescents in Iran in 2014. Materials and Methods: This is a cross-sectional study conducted on 210 adolescents selected by multi-stage random sampling. Data were collected by Goldberg and Hillier general health (28 items) and Kiez social health questionnaires. The association was tested by Pearson and Spearman correlation coefficients, one-way analysis of variance (ANOVA), and independent t-test. Computer games related factors such as the location, type, length, the adopted device, and mode of playing games were investigated. Results: Results showed that 58.9% of the subjects played games on a computer alone for 1 h at home. Results also revealed that the subjects had appropriate mental health and 83.2% had moderate social health. Results showed a poor significant association between the length of games and social health (r = −0.15, P = 0.03), the type of games and mental health (r = −0.16, P = 0.01), and the device used in playing games and social health (F = 0.95, P = 0.03). Conclusions: The findings showed that adolescents’ mental and social health is negatively associated with their playing computer games. Therefore, to promote their health, educating them about the correct way of playing computer games is essential and their parents and school authorities, including nurses working at schools, should determine its relevant factors such as the type, length, and device used in playing such games. PMID:27095988
Influence of computer work under time pressure on cardiac activity.
Shi, Ping; Hu, Sijung; Yu, Hongliu
2015-03-01
Computer users are often under stress when required to complete computer work within a required time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time domain and frequency domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
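The Poincaré plot parameters referred to above are usually summarized by SD1 and SD2, the spreads of successive R-R interval pairs across and along the identity line. The sketch below computes them from a synthetic R-R series; it is a generic illustration, not the study's processing chain.

```python
import numpy as np

rr = 0.80 + 0.05 * np.random.randn(300)         # synthetic R-R intervals, seconds

x, y = rr[:-1], rr[1:]                           # successive-pair scatter (Poincaré plot)
sd1 = np.std((x - y) / np.sqrt(2), ddof=1)       # short-term spread, across the identity line
sd2 = np.std((x + y) / np.sqrt(2), ddof=1)       # long-term spread, along the identity line
print(f"SD1 = {sd1 * 1000:.1f} ms, SD2 = {sd2 * 1000:.1f} ms, SD1/SD2 = {sd1 / sd2:.2f}")
```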
Electronic Circuit Analysis Language (ECAL)
NASA Astrophysics Data System (ADS)
Chenghang, C.
1983-03-01
The computer-aided design technique is an important development in computer applications and a key component of computer science. The special language for electronic circuit analysis is the foundation of computer-aided design or computer-aided circuit analysis (abbreviated as CACD and CACA) of simulated circuits. Electronic circuit analysis language (ECAL) is a comparatively simple and easy-to-use special language for circuit analysis, implemented in FORTRAN and executed interpretively. It is capable of conducting dc analysis, ac analysis, and transient analysis of a circuit. Furthermore, the results of the dc analysis can be used directly as the initial conditions for the ac and transient analyses.
Foster, Scott D; Feutry, Pierre; Grewe, Peter M; Berry, Oliver; Hui, Francis K C; Davies, Campbell R
2018-06-26
Delineating naturally occurring and self-sustaining sub-populations (stocks) of a species is an important task, especially for species harvested from the wild. Despite its central importance to natural resource management, analytical methods used to delineate stocks are often, and increasingly, borrowed from superficially similar analytical tasks in human genetics, even though models specifically for stock identification have been previously developed. Unfortunately, the analytical tasks in resource management and human genetics are not identical: questions about humans are typically aimed at inferring ancestry (often referred to as 'admixture') rather than breeding stocks. In this article, we argue, and show through simulation experiments and an analysis of yellowfin tuna data, that ancestral analysis methods are not always appropriate for stock delineation. In this work, we advocate a variant of a previously introduced and simpler model that identifies stocks directly. We also highlight that the computational aspects of the analysis, irrespective of the model, are difficult. We introduce some alternative computational methods and quantitatively compare these methods to each other and to established methods. We also present a method for quantifying uncertainty in model parameters and in assignment probabilities. In doing so, we demonstrate that point estimates can be misleading. One of the computational strategies presented here, based on an expectation-maximisation algorithm with judiciously chosen starting values, is robust and has a modest computational cost. This article is protected by copyright. All rights reserved.
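As a generic illustration of the expectation-maximisation strategy mentioned above, the sketch below fits a two-component 1-D Gaussian mixture by EM from several starting values and keeps the best run. It stands in for the idea only; the authors' stock-assignment model and data are not reproduced.

```python
import numpy as np

def em_two_gaussians(x, mu_init, n_iter=200):
    mu = np.array(mu_init, float)
    sigma, pi = np.array([x.std(), x.std()]), np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log(dens.sum(axis=1)).sum(), mu

x = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(3, 1, 200)])
fits = [em_two_gaussians(x, start) for start in ([-1, 1], [0, 3], [2, 2.5])]
print("best fit (log-likelihood, means):", max(fits, key=lambda f: f[0]))
```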
Solid–Liquid Phase Change Driven by Internal Heat Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crepeau, John; Siahpush, Ali S.
2012-07-01
This article presents results of solid-liquid phase change, the Stefan problem, where melting is driven by internal heat generation, in a cylindrical geometry. The comparison between a quasi-static analytical solution for Stefan numbers less than one and numerical solutions shows good agreement. The computational results of phase change with internal heat generation show how convection cells form in the liquid region. A scale analysis of the same problem shows four distinct regions of the melting process.
Quantum Attack-Resistant Certificateless Multi-Receiver Signcryption Scheme
Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong
2013-01-01
The existing certificateless signcryption schemes were designed mainly based on the traditional public key cryptography, in which the security relies on the hard problems, such as factor decomposition and discrete logarithm. However, these problems will be easily solved by the quantum computing. So the existing certificateless signcryption schemes are vulnerable to the quantum attack. Multivariate public key cryptography (MPKC), which can resist the quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we proposed a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC, which can withstand the quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We proved its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards. PMID:23967037
Structural Acoustic Physics Based Modeling of Curved Composite Shells
2017-09-19
Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this...products.
Koperwhats, Martha A; Chang, Wei-Chih; Xiao, Jianguo
2002-01-01
Digital imaging technology promises efficient, economical, and fast service for patient care, but the challenges are great in the transition from film to a filmless (digital) environment. This change has a significant impact on the film library's personnel (film librarians), who play a leading role in storage, classification, and retrieval of images. The objectives of this project were to study film library errors and the usability of a physical computerized system that could not be changed, while developing an intervention to reduce errors and testing the usability of the intervention. Cognitive and human factors analyses were used to evaluate human-computer interaction. A workflow analysis was performed to understand the film and digital imaging processes. User and task analyses were applied to account for all behaviors involved in interaction with the system. A heuristic evaluation was used to probe the usability issues in the picture archiving and communication systems (PACS) modules. Simplified paper-based instructions were designed to familiarize the film librarians with the digital system. A usability survey evaluated the effectiveness of the instruction. The user and task analyses indicated that different users faced challenges based on their computer literacy, education, roles, and frequency of use of diagnostic imaging. The workflow analysis showed that the approaches to using the digital library differ among the various departments. The heuristic evaluation of the PACS modules showed the human-computer interface to have usability issues that prevented easy operation. Simplified instructions were designed for operation of the modules. Usability surveys conducted before and after revision of the instructions showed that performance improved. Cognitive and human factors analysis can help film librarians and other users adapt to the filmless system. Use of cognitive science tools will aid in successful transition of the film library from a film environment to a digital environment.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
NASA Astrophysics Data System (ADS)
Wang, Lusheng; Yang, Yong; Lin, Guohui
Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes and the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pair-wise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances with each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
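A minimal sketch of the pruning idea behind such indexes, assuming Euclidean distance as a stand-in for an expensive metric (such as whole-chromosome edit distance): distances from database objects to a few pivots are precomputed off-line, and the triangle inequality is used on-line to skip most exact distance computations. This illustrates the general pivot-based principle, not the paper's two randomized indexing schemes.

```python
import numpy as np

def build_index(db, pivots, dist):
    # Off-line: precompute distances from every object to every pivot.
    return np.array([[dist(x, p) for p in pivots] for x in db])

def nearest(query, db, pivots, pivot_table, dist):
    q_to_p = np.array([dist(query, p) for p in pivots])    # a few on-line distances
    best, best_d, computed = None, np.inf, len(pivots)
    for i, x in enumerate(db):
        # Triangle inequality: d(q, x) >= |d(q, p) - d(x, p)| for every pivot p.
        if np.max(np.abs(q_to_p - pivot_table[i])) >= best_d:
            continue                                       # pruned, no distance computed
        d = dist(query, x)
        computed += 1
        if d < best_d:
            best, best_d = i, d
    return best, best_d, computed

dist = lambda a, b: float(np.linalg.norm(a - b))
db = np.random.rand(1000, 8)
pivots = db[np.random.choice(len(db), 8, replace=False)]
table = build_index(db, pivots, dist)
idx, d, computed = nearest(np.random.rand(8), db, pivots, table, dist)
print(f"closest object {idx} at distance {d:.3f}; distances computed on-line: {computed}")
```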
A computational workflow for designing silicon donor qubits
Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...
2016-09-19
Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.
Aeroelastic Modeling of a Nozzle Startup Transient
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2014-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a tightly coupled aeroelastic modeling algorithm by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed under the framework of modal analysis. Transient aeroelastic nozzle startup analyses at sea level were performed, and the computed transient nozzle fluid-structure interaction physics are presented.
Internet messenger based smart virtual class learning using ubiquitous computing
NASA Astrophysics Data System (ADS)
Umam, K.; Mardi, S. N. S.; Hariadi, M.
2017-06-01
Internet messenger (IM) has become an important educational technology component in college education; IM makes it possible for students to engage in learning and collaboration in a smart virtual class learning (SVCL) environment using ubiquitous computing. However, models of IM-based smart virtual class learning using ubiquitous computing, and empirical evidence that would favor a broad application to improve engagement and behavior, are still limited. In addition, the expectation that IM-based SVCL using ubiquitous computing could improve engagement and behavior in a smart class cannot be confirmed because the majority of the reviewed studies followed instructional paradigms. This article aims to present a model of IM-based SVCL using ubiquitous computing and to show learners' experiences of improved engagement and behavior in learner-learner and learner-lecturer interactions. The method applied in this paper includes a design process and quantitative analysis techniques, with the purpose of identifying scenarios of ubiquitous computing and capturing the impressions of learners and lecturers about engagement and behavior and their contribution to learning.
De Santis, Daniele; Canton, Luciano Claudio; Cucchi, Alessandro; Zanotti, Guglielmo; Pistoia, Enrico; Nocini, Pier Francesco
2010-01-01
Computer-assisted surgery is based on computerized tomography (CT) scan technology to plan the placement of dental implants and computer-aided design/computer-aided manufacturing (CAD-CAM) technology to create a custom surgical template. It provides guidance for implant insertion after analysis of the existing alveolar bone and planning of the implant position; the implants can be immediately loaded, thereby achieving esthetic and functional results in a single surgical stage. The absence of guidelines to treat dentulous areas is often due to a lack of computer-assisted surgery. The authors have attempted to use this surgical methodology to replace residual teeth with an immediate implantoprosthetic restoration. The aim of this case report is to show the possibility of treating a dentulous patient by applying a computer-assisted surgical protocol associated with the use of a double surgical template: one before extraction and a second one after extraction of selected teeth.
Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.
Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang
2017-03-01
Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.
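A plain software sketch of the semivariance defined above (half the mean squared difference of pixel pairs at lag h), using only horizontal and vertical pairs of a synthetic region of interest; the FPGA architectures themselves are not reproduced.

```python
import numpy as np

def semivariogram(img, max_lag):
    gamma = []
    for h in range(1, max_lag + 1):
        dx = img[:, h:] - img[:, :-h]          # horizontally separated pairs at lag h
        dy = img[h:, :] - img[:-h, :]          # vertically separated pairs at lag h
        sq = np.concatenate([dx.ravel() ** 2, dy.ravel() ** 2])
        gamma.append(0.5 * sq.mean())          # γ(h): half the mean squared difference
    return np.array(gamma)

roi = np.random.rand(64, 64)                   # placeholder for a bone image window
print(semivariogram(roi, max_lag=5))
```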
Toward Interactive Scenario Analysis and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gayle, Thomas R.; Summers, Kenneth Lee; Jungels, John
2015-01-01
As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need to do real world testing. However, current and future changes across several factors including capabilities, policy, and funding are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software coupled with advances in "cloud" computing and "big-data" methodologies can overcome many of these challenges. In particular, we propose a simple, horizontally-scalable distributed computing environment that could provide the foundation (i.e. "cloud") for next-generation M&S-based applications based on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of single scenario runs as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
Scidac-Data: Enabling Data Driven Modeling of Exascale Computing
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; ...
2017-11-23
Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
Shadish, William R; Hedges, Larry V; Pustejovsky, James E
2014-04-01
This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Visual Analysis of Cloud Computing Performance Using Behavioral Lines.
Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu
2016-02-29
Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.
How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling
Veale, Richard; Hafed, Ziad M.
2017-01-01
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044023
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
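The sketch below illustrates the replicated-reconstruction-object pattern described above in plain Python: each worker accumulates updates from its share of projection angles into a private array, and the copies are summed at the end. The "update" is a crude placeholder smear, not Trace's iterative reconstruction kernels, and the grid size, angle count, and worker count are arbitrary assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

GRID = 128

def partial_update(angle_chunk):
    """Accumulate placeholder back-projection updates for a chunk of angles."""
    local = np.zeros((GRID, GRID))             # the worker's replicated reconstruction object
    y, x = np.mgrid[-1:1:GRID * 1j, -1:1:GRID * 1j]
    for theta in angle_chunk:
        t = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the ray direction
        local += np.exp(-20.0 * t ** 2)             # placeholder smeared projection
    return local

if __name__ == "__main__":
    angles = np.linspace(0, np.pi, 360, endpoint=False)
    chunks = np.array_split(angles, 8)          # one chunk of projection angles per worker
    with ProcessPoolExecutor(max_workers=8) as pool:
        partials = list(pool.map(partial_update, chunks))
    reconstruction = np.sum(partials, axis=0)   # reduction over the replicated objects
    print(reconstruction.shape, reconstruction.max())
```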
Computer analysis of three-dimensional morphological characteristics of the bile duct
NASA Astrophysics Data System (ADS)
Ma, Jinyuan; Chen, Houjin; Peng, Yahui; Shang, Hua
2017-01-01
In this paper, a computer image-processing algorithm for analyzing the morphological characteristics of bile ducts in Magnetic Resonance Cholangiopancreatography (MRCP) images was proposed. The algorithm consisted of mathematical morphology methods, including erosion, closing and skeletonization, and a spline curve fitting method to obtain the length and curvature of the centerline of the bile duct. In the 10 cases analyzed, the average length of the bile duct was 14.56 cm, and the maximum curvature was in the range of 0.111 to 2.339. These experimental results show that using the computer image-processing algorithm to assess the morphological characteristics of the bile duct is feasible; further research is needed to evaluate its potential clinical value.
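A minimal Python sketch of the centerline measurement step described above (spline fit, arc length, curvature); the preprocessing that produces ordered centerline points and all parameter values are assumptions, not the authors' implementation:

```python
# Minimal sketch, assuming ordered centerline pixel coordinates are already available.
import numpy as np
from scipy import interpolate

def centerline_length_and_curvature(points, smooth=1.0):
    """points: (N, 2) ordered centerline coordinates (e.g. from a skeletonized mask)."""
    tck, u = interpolate.splprep([points[:, 0], points[:, 1]], s=smooth)
    uu = np.linspace(0, 1, 500)
    dx, dy = interpolate.splev(uu, tck, der=1)
    ddx, ddy = interpolate.splev(uu, tck, der=2)
    # arc length of the fitted spline
    length = np.trapz(np.hypot(dx, dy), uu)
    # curvature of a planar parametric curve
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return length, curvature.max()

# toy example: a quarter circle of radius 50 pixels (curvature should be ~1/50)
t = np.linspace(0, np.pi / 2, 200)
pts = np.column_stack([50 * np.cos(t), 50 * np.sin(t)])
length_px, max_curv = centerline_length_and_curvature(pts)
print(f"length ≈ {length_px:.1f} px, max curvature ≈ {max_curv:.4f} 1/px")
```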
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2011-05-01
Optimization of fixture layout (locator and clamp locations) is critical to reducing geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO-based approach is developed to optimize fixture layout by integrating ANSYS parametric design language (APDL) finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. The computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
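The PSO update the abstract refers to can be sketched as follows; the FEA objective computed via APDL is replaced here by a stand-in analytic function, and all bounds and coefficients are illustrative assumptions:

```python
# Illustrative PSO sketch with a stand-in objective in place of the ANSYS/APDL FEA call.
import numpy as np

def objective(layout):
    # stand-in for "maximum workpiece deformation for this fixture layout"
    return np.sum((layout - 0.3) ** 2) + 0.1 * np.sin(5 * layout).sum()

def pso(obj, dim=6, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, dim))           # normalized locator/clamp coordinates
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_layout, best_val = pso(objective)
print("best layout:", best_layout.round(3), "objective:", round(best_val, 4))
```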
Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne
2003-01-01
A vortex lattice code, CAMRAD II, and a Reynolds-Averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal axis wind turbine. All computations were compared with experimental data collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.
Infrared Ship Classification Using A New Moment Pattern Recognition Concept
NASA Astrophysics Data System (ADS)
Casasent, David; Pauly, John; Fetterly, Donald
1982-03-01
An analysis of the statistics of the moments and the conventional invariant moments shows that the variance of the latter becomes quite large as the order of the moments and the degree of invariance increase. Moreover, the need to whiten the error volume increases with the order and degree, but so does the computational load associated with computing the whitening operator. We thus advance a new estimation approach to the use of moments in pattern recognition that overcomes these problems. This work is supported by experimental verification and demonstration on an infrared ship pattern recognition problem. The computational load associated with our new algorithm is also shown to be very low.
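For reference, standard (non-whitened) moment invariants of the kind analyzed above can be computed as in the short sketch below; it shows textbook Hu-invariant algebra, not the paper's new estimation approach:

```python
# Sketch of raw/central/normalized image moments and the first two Hu invariants.
import numpy as np

def hu_moments(img):
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):          # central moment
        return ((x - xbar) ** p * (y - ybar) ** q * img).sum()
    def eta(p, q):         # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)                               # Hu invariant 1
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2   # Hu invariant 2
    return phi1, phi2

img = np.zeros((64, 64))
img[20:40, 10:50] = 1.0          # a bright rectangle as a toy silhouette
print(hu_moments(img))
```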
Research in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
NASA Technical Reports Server (NTRS)
Vorosmarty, C.; Grace, A.; Moore, B.; Choudhury, B.; Willmott, C. J.
1990-01-01
A strategy is presented for integrating scanning multichannel microwave radiometer data from the Nimbus-7 satellite with meteorological station records and computer simulations of land surface hydrology, terrestrial nutrient cycling, and trace gas emission. Analysis of the observations together with radiative transfer analysis shows that in the tropics the temporal and spatial variations of the polarization difference are determined primarily by the structure and phenology of vegetation and seasonal inundations of major rivers and wetlands. It is concluded that the proposed surface hydrology model, along with climatological records, and, potentially, 37-GHz data for phenology, will provide inputs to a terrestrial ecosystem model that predicts regional net primary production and CO2 gas exchange.
Computational fluid dynamics analysis of a maglev centrifugal left ventricular assist device.
Burgreen, Greg W; Loree, Howard M; Bourque, Kevin; Dague, Charles; Poirier, Victor L; Farrar, David; Hampton, Edward; Wu, Z Jon; Gempp, Thomas M; Schöb, Reto
2004-10-01
The fluid dynamics of the Thoratec HeartMate III (Thoratec Corp., Pleasanton, CA, U.S.A.) left ventricular assist device are analyzed over a range of physiological operating conditions. The HeartMate III is a centrifugal flow pump with a magnetically suspended rotor. The complete pump was analyzed using computational fluid dynamics (CFD) analysis and experimental particle imaging flow visualization (PIFV). A comparison of CFD predictions to experimental imaging shows good agreement. Both CFD and experimental PIFV confirmed well-behaved flow fields in the main components of the HeartMate III pump: inlet, volute, and outlet. The HeartMate III is shown to exhibit clean flow features and good surface washing across its entire operating range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
López C, Diana C.; Wozny, Günter; Flores-Tlacuahuac, Antonio
2016-03-23
The lack of informative experimental data and the complexity of first-principles battery models make the recovery of kinetic, transport, and thermodynamic parameters complicated. We present a computational framework that combines sensitivity, singular value, and Monte Carlo analysis to explore how different sources of experimental data affect parameter structural ill conditioning and identifiability. Our study is conducted on a modified version of the Doyle-Fuller-Newman model. We demonstrate that the use of voltage discharge curves only enables the identification of a small parameter subset, regardless of the number of experiments considered. Furthermore, we show that the inclusion of a single electrolyte concentration measurement significantly aids identifiability and mitigates ill-conditioning.
NASA Astrophysics Data System (ADS)
de Oliveira, José Martins, Jr.; Mangini, F. Salvador; Carvalho Vila, Marta Maria Duarte; Chaud, Marco Vinícius
2013-05-01
This work presents an alternative, non-conventional technique for evaluating physico-chemical properties of pharmaceutical dosage forms: computed tomography (CT) is used as a nondestructive technique to visualize internal structures of pharmaceutical dosage forms and to conduct static and dynamic studies. The studies covered both static and dynamic situations through the use of tomographic images generated by the scanner at the University of Sorocaba - Uniso. We have shown that tomographic images make it possible to study porosity and density, to analyze morphological parameters, and to perform dissolution studies. Our results are in agreement with the literature, showing that CT is a powerful tool for use in the pharmaceutical sciences.
Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.
Kiran Kumar, G R; Reddy, M Ramasubba
2018-06-08
Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require the estimation of the background electroencephalogram (EEG) noise components. Even though this leads to improved performance in low signal-to-noise ratio (SNR) conditions, it makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA) due to the additional computational cost. In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing the temporal information, and it does not generalize SSVEP based on rigid templates. Data from ten test subjects were used to evaluate the proposed method, and the results demonstrate that periodic component analysis acts as a reliable spatial filter for SSVEP extraction. Statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy compared to standard CCA and MEC in low SNR conditions. The results also demonstrate that πCA provides better detection accuracy than CCA and accuracy on par with that of MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
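The baseline CCA detector that πCA is compared against can be sketched as follows; the sampling rate, channel count, harmonics, and synthetic data are assumptions for illustration only:

```python
# Standard CCA-based SSVEP detection sketch (the comparison method, not πCA itself).
import numpy as np
from sklearn.cross_decomposition import CCA

fs, n_ch, n_samp = 250, 8, 500            # 2 s of 8-channel EEG at 250 Hz (assumed)
t = np.arange(n_samp) / fs

def reference(freq, harmonics=2):
    refs = []
    for h in range(1, harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def detect(eeg, candidate_freqs):
    """eeg: (n_samp, n_ch). Returns the candidate frequency with the highest canonical correlation."""
    scores = []
    for f in candidate_freqs:
        u, v = CCA(n_components=1).fit_transform(eeg, reference(f))
        scores.append(np.corrcoef(u.ravel(), v.ravel())[0, 1])
    return candidate_freqs[int(np.argmax(scores))], scores

rng = np.random.default_rng(0)
eeg = 0.5 * rng.standard_normal((n_samp, n_ch))
eeg += np.outer(np.sin(2 * np.pi * 12.0 * t), np.ones(n_ch))   # embed a 12 Hz SSVEP
print(detect(eeg, [10.0, 12.0, 15.0]))
```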
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
1976-02-18
[Fragmentary excerpt with figure residue: the analysis uses three different body-fixed Cartesian coordinate systems, and a different situation exists when the base pressure is greater than the ambient value. Recoverable figure caption: "Figure 26. Computational model used in Section II.D" (fin-body flow schematic).]
The application of digital techniques to the analysis of metallurgical experiments
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1977-01-01
The application of a specific digital computer system (known as the Image Data Processing System) to the analysis of three NASA-sponsored metallurgical experiments is discussed in some detail. The basic hardware and software components of the Image Data Processing System are presented. Many figures are presented in the discussion of each experimental analysis in an attempt to show the accuracy and speed that the Image Data Processing System affords in analyzing photographic images dealing with metallurgy, and in particular with material processing.
Vision-sensing image analysis for GTAW process control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, D.D.
1994-11-01
Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.
Non-Newtonian Liquid Flow through Small Diameter Piping Components: CFD Analysis
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Tarun Kanti; Das, Sudip Kumar
2016-10-01
Computational Fluid Dynamics (CFD) analyses have been carried out to evaluate the frictional pressure drop across a horizontal pipeline and different piping components, such as elbows, orifices, and gate and globe valves, for non-Newtonian liquid flow through a 0.0127 m pipeline. The mesh generation is done using GAMBIT 6.3, and FLUENT 6.3 is used for the CFD analysis. The CFD results are verified against our earlier published experimental data and show very good agreement with the experimental values.
CMG-biotools, a free workbench for basic comparative microbial genomics.
Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David
2013-01-01
Today, there are more than a hundred times as many sequenced prokaryotic genomes as were present in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics, which can reveal strain-specific characteristics, diversity within species and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source code for all programs is provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes spanning 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST and graphical analyses of DNA structures. This paper shows the strength and diverse use of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources, using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training.
Thermal response of Space Shuttle wing during reentry heating
NASA Technical Reports Server (NTRS)
Gong, L.; Ko, W. L.; Quinn, R. D.
1984-01-01
A structural performance and resizing (SPAR) finite element thermal analysis computer program was used in the heat transfer analysis of the Space Shuttle orbiter subjected to reentry aerodynamic heating. One segment of the right wing (WS 240) and the whole left wing were selected for the thermal analysis. Results showed that the predicted thermal protection system (TPS) temperatures were in good agreement with the Space Transportation System trajectory 5 (STS-5) flight-measured temperatures. In addition, calculated aluminum structural temperatures were in fairly good agreement with the flight data up to the point of touchdown. Results also showed that internal free convection had a considerable effect on the change of structural temperatures after touchdown.
Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling
NASA Astrophysics Data System (ADS)
Ickes, Jacob C.
Helicopters and other Vertical Take-Off and Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive the rotor performance. The combination of factors involved makes simulating the rotor a challenging and multidisciplinary effort, and one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes, which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both the fluid and structural predictions, using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs the primary functions of the loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code. Improvements came in the form of better peak/trough magnitude prediction, better phase prediction of these locations, and a predicted signal with a frequency content more like the flight test data than that of the CSD code acting alone. Additionally, a tight coupling analysis was performed as a demonstration of the capability and unique aspects of such an analysis. This work shows that away from the center of the flight envelope, the aerodynamic modeling of the CSD code can be replaced with a more accurate set of predictions from a CFD code with an improvement in the aerodynamic results. The better predictions come at substantially increased computational costs of between 1,000 and 10,000 processor-hours.
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to the independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not cover this case for the dynamic log gains. Results We present an algorithm with adaptive step size control that can be used to compute the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computation of the time profile and the dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of the algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm with those from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we demonstrate with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
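A minimal forward-sensitivity example (not the authors' adaptive decoupled algorithm) showing how a state and its parameter sensitivity can be integrated together, here for the toy model dx/dt = -kx:

```python
# Minimal sketch: state x and sensitivity s = dx/dk integrated as one augmented ODE.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5

def rhs(t, y):
    x, s = y                     # s = dx/dk
    dxdt = -k * x
    dsdt = -x - k * s            # differentiate the model equation with respect to k
    return [dxdt, dsdt]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-8, atol=1e-10, dense_output=True)
t = np.linspace(0, 10, 5)
x, s = sol.sol(t)
# analytic check: x = exp(-k t), dx/dk = -t exp(-k t)
print(np.allclose(s, -t * np.exp(-k * t)))
```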
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
Characterization of microgravity effects on bone structure and strength using fractal analysis
NASA Technical Reports Server (NTRS)
Acharya, Raj S.; Shackelford, Linda
1995-01-01
The effect of micro-gravity on the musculoskeletal system has been well studied. Significant changes in bone and muscle have been shown after long-term space flight. Similar changes have been demonstrated due to bed rest. Bone demineralization is particularly profound in weight-bearing bones. Most current techniques for monitoring bone condition use bone mass measurements. However, bone mass measurements are not reliable for distinguishing osteoporotic from normal subjects. It has been shown that overlap between normals and osteoporosis is found for all of the bone mass measurement technologies: single and dual photon absorptiometry, quantitative computed tomography, and direct measurement of bone area/volume on biopsy as well as radiogrammetry. A similar discordance is noted in the fact that it has not been regularly possible to find the expected correlation between severity of osteoporosis and degree of bone loss. Structural parameters such as trabecular connectivity have been proposed as features for assessing bone condition. In this report, we use fractal analysis to characterize bone structure. We show that the fractal dimensions computed from MRI images and X-ray images of the patella are the same. Preliminary experimental results show that the fractal dimension computed from MRI images of vertebrae of human subjects before bed rest is higher than during bed rest.
Computational analysis for biodegradation of exogenously depolymerizable polymer
NASA Astrophysics Data System (ADS)
Watanabe, M.; Kawai, F.
2018-03-01
This study shows that microbial growth and decay in a biodegradation process of an exogenously depolymerizable polymer are controlled by consumption of monomer units. Experimental outcomes for residual polymer were incorporated into an inverse analysis for the degradation rate. The Gauss-Newton method was applied to an inverse problem for two parameter values associated with the microbial population. A biodegradation process of polyethylene glycol was analyzed numerically, and numerical outcomes were obtained.
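A hedged sketch of a two-parameter Gauss-Newton fit of the kind described, using a made-up exponential residual-polymer model and illustrative data rather than the study's model or measurements:

```python
# Gauss-Newton fit of two parameters to illustrative residual-polymer-style data.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 10.0])            # time (days, illustrative)
p_obs = np.array([1.00, 0.78, 0.62, 0.40, 0.22, 0.13])   # residual polymer fraction

def model(theta, t):
    a, lam = theta                     # amplitude-like and rate-like parameters
    return a * np.exp(-lam * t)

def jacobian(theta, t):
    a, lam = theta
    return np.column_stack([np.exp(-lam * t), -a * t * np.exp(-lam * t)])

theta = np.array([0.8, 0.1])           # initial guess
for _ in range(20):                     # Gauss-Newton iterations
    r = p_obs - model(theta, t)        # residuals
    J = jacobian(theta, t)
    delta = np.linalg.solve(J.T @ J, J.T @ r)
    theta = theta + delta
    if np.linalg.norm(delta) < 1e-10:
        break
print("estimated parameters:", theta.round(4))
```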
Ali, Syed Mashhood; Shamim, Shazia
2015-07-01
Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. 1H-NMR chemical shift change data for β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether one or both rings were included. Molecular mechanics and molecular dynamics calculations showed the entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for the accuracy of their atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom-accuracy, and that quantitative ROESY analysis is a promising method for this. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.
MicroCT parameters for multimaterial elements assessment
NASA Astrophysics Data System (ADS)
de Araújo, Olga M. O.; Silva Bastos, Jaqueline; Machado, Alessandra S.; dos Santos, Thaís M. P.; Ferreira, Cintia G.; Rosifini Alves Claro, Ana Paula; Lopes, Ricardo T.
2018-03-01
Microtomography is a non-destructive testing technique for quantitative and qualitative analysis. The investigation of multimaterial elements with a large density difference can result in artifacts that degrade image quality, depending on the combination of additional filters. The aim of this study is the selection of the parameters most appropriate for the analysis of bone tissue with a metallic implant. The results show MCNPX-code simulations of the energy distribution without an additional filter and with aluminum, copper and brass filters, together with the corresponding reconstructed images, demonstrating the importance of these parameter choices in the computed microtomography image acquisition process.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Boolean and brain-inspired computing using spin-transfer torque devices
NASA Astrophysics Data System (ADS)
Fan, Deliang
Several completely new approaches (such as spintronics, carbon nanotubes, graphene, TFETs, etc.) to information processing and data storage technologies are emerging to address the time frame beyond the current Complementary Metal-Oxide-Semiconductor (CMOS) roadmap. The high-speed magnetization switching of a nano-magnet due to current-induced spin-transfer torque (STT) has been demonstrated in recent experiments. Such STT devices can be explored for compact, low power memory and logic design. In order to truly leverage STT-device-based computing, researchers require a re-think of circuit, architecture, and computing model, since STT devices are unlikely to be drop-in replacements for CMOS. The potential of STT-device-based computing will be best realized by considering new computing models that are inherently suited to the characteristics of STT devices, and new applications that are enabled by their unique capabilities, thereby attaining performance that CMOS cannot achieve. The goal of this research is to conduct synergistic exploration at the architecture, circuit and device levels for Boolean and brain-inspired computing using nanoscale STT devices. Specifically, we first show that non-volatile STT devices can be used in designing configurable Boolean logic blocks. We propose a spin-memristor threshold logic (SMTL) gate design, where a memristive cross-bar array is used to perform current-mode summation of binary inputs and a low power current-mode spintronic threshold device carries out the energy-efficient threshold operation. Next, for brain-inspired computing, we have exploited different spin-transfer torque device structures that can implement the hard-limiting and soft-limiting artificial neuron transfer functions, respectively. We apply such STT-based neurons (or 'spin-neurons') in various neural network architectures, such as hierarchical temporal memory and feed-forward neural networks, for performing "human-like" cognitive computing, which show more than two orders of magnitude lower energy consumption compared to state-of-the-art CMOS implementations. Finally, we show that the dynamics of an injection-locked Spin Hall Effect Spin-Torque Oscillator (SHE-STO) cluster can be exploited as a robust multi-dimensional distance metric for associative computing, image/video analysis, etc. Our simulation results show that the proposed system architecture with injection-locked SHE-STOs and the associated CMOS interface circuits can be suitable for robust and energy efficient associative computing and pattern matching.
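The current-mode threshold operation underlying the SMTL gate family can be illustrated abstractly as a weighted sum of binary inputs followed by a hard threshold; the weights and threshold below are illustrative values, not device parameters from this work:

```python
# Toy sketch of threshold-logic behavior: weighted sum of binary inputs, hard threshold.
import numpy as np
from itertools import product

def threshold_gate(inputs, weights, threshold):
    return int(np.dot(inputs, weights) >= threshold)

# a 3-input majority gate: fires when at least two inputs are 1
weights, threshold = np.array([1, 1, 1]), 2
for bits in product([0, 1], repeat=3):
    print(bits, "->", threshold_gate(np.array(bits), weights, threshold))
```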
Humphries, Stephen M; Yagihashi, Kunihiro; Huckleberry, Jason; Rho, Byung-Hak; Schroeder, Joyce D; Strand, Matthew; Schwarz, Marvin I; Flaherty, Kevin R; Kazerooni, Ella A; van Beek, Edwin J R; Lynch, David A
2017-10-01
Purpose To evaluate associations between pulmonary function and both quantitative analysis and visual assessment of thin-section computed tomography (CT) images at baseline and at 15-month follow-up in subjects with idiopathic pulmonary fibrosis (IPF). Materials and Methods This retrospective analysis of preexisting anonymized data, collected prospectively between 2007 and 2013 in a HIPAA-compliant study, was exempt from additional institutional review board approval. The extent of lung fibrosis at baseline inspiratory chest CT in 280 subjects enrolled in the IPF Network was evaluated. Visual analysis was performed by using a semiquantitative scoring system. Computer-based quantitative analysis included CT histogram-based measurements and a data-driven textural analysis (DTA). Follow-up CT images in 72 of these subjects were also analyzed. Univariate comparisons were performed by using Spearman rank correlation. Multivariate and longitudinal analyses were performed by using a linear mixed model approach, in which models were compared by using asymptotic χ² tests. Results At baseline, all CT-derived measures showed moderate significant correlation (P < .001) with pulmonary function. At follow-up CT, changes in DTA scores showed significant correlation with changes in both forced vital capacity percentage predicted (ρ = -0.41, P < .001) and diffusing capacity for carbon monoxide percentage predicted (ρ = -0.40, P < .001). Asymptotic χ² tests showed that inclusion of DTA score significantly improved fit of both baseline and longitudinal linear mixed models in the prediction of pulmonary function (P < .001 for both). Conclusion When compared with semiquantitative visual assessment and CT histogram-based measurements, DTA score provides additional information that can be used to predict diminished function. Automatic quantification of lung fibrosis at CT yields an index of severity that correlates with visual assessment and functional change in subjects with IPF. © RSNA, 2017.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shujia; Duffy, Daniel; Clune, Thomas
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25 percent of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
A DNA network as an information processing system.
Santini, Cristina Costa; Bath, Jonathan; Turberfield, Andrew J; Tyrrell, Andy M
2012-01-01
Biomolecular systems that can process information are sought for computational applications, because of their potential for parallelism and miniaturization and because their biocompatibility also makes them suitable for future biomedical applications. DNA has been used to design machines, motors, finite automata, logic gates, reaction networks and logic programs, amongst many other structures and dynamic behaviours. Here we design and program a synthetic DNA network to implement computational paradigms abstracted from cellular regulatory networks. These show information processing properties that are desirable in artificial, engineered molecular systems, including robustness of the output in relation to different sources of variation. We show the results of numerical simulations of the dynamic behaviour of the network and preliminary experimental analysis of its main components.
Regional ionospheric model for improvement of navigation position with EGNOS
NASA Astrophysics Data System (ADS)
Swiatek, Anna; Tomasik, Lukasz; Jaworski, Leszek
The problem of insufficient accuracy of the EGNOS correction for the territory of Poland, located at the edge of the EGNOS range, is well known. The EEI PECS project (EGNOS EUPOS Integration) aimed at improving the EGNOS correction by using GPS observations from Polish ASG-EUPOS stations. An ionospheric delay parameter is part of the EGNOS correction. A comparative analysis of TEC values obtained from EGNOS and from regional permanent GNSS stations showed a systematic shift: the TEC from the EGNOS correction is underestimated relative to the computed regional TEC value. New, 'improved' corrections computed from the regional model were substituted for the EGNOS corrections in the corresponding message. Dynamic measurements carried out using the Mobile GPS Laboratory (MGL) showed an improvement of the navigation position with the regional TEC model.
Readability and writing style analysis of selected allied health professional journals.
Hedl, J J; Glazer-Waldman, H R; Parker, H J; Hopkins, K M
1991-01-01
Using US Department of Defense text sampling procedures, nine allied health journals were analyzed for readability and selected writing style indices via Right Writer, a commercial software program. Two indices of readability were computed for each journal, as were several indices of writing style. The computed readability ranged from 13.0 to 15.4, depending upon the journal in question. Two journals showed the highest readability scores (15.4) compared to the other seven journals. The writing style analyses indicated generally normal ranges for the descriptive and jargon indices, but seven journals showed below-recommended strength indices. Sentence structure analyses indicated a need to reduce sentence complexity. Implications for journal editors and authors are discussed.
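One widely used readability index of the grade-level type reported above is the Flesch-Kincaid grade; Right Writer's exact formulas are not specified here, and the syllable counter in this sketch is only a rough heuristic:

```python
# Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
import re

def count_syllables(word):
    # crude heuristic: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

sample = ("The writing style analyses indicated generally normal ranges for the "
          "descriptive and jargon indices, but several journals showed "
          "below-recommended strength indices.")
print(round(fk_grade(sample), 1))
```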
Computers as an Instrument for Data Analysis. Technical Report No. 11.
ERIC Educational Resources Information Center
Muller, Mervin E.
A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…
A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS
A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...
Content Structure as a Design Strategy Variable in Concept Acquisition.
ERIC Educational Resources Information Center
Tennyson, Robert D.; Tennyson, Carol L.
Three methods of sequencing coordinate concepts (simultaneous, collective, and successive) were investigated with a Bayesian, computer-based, adaptive control system. The data analysis showed that when coordinate concepts are taught simultaneously (contextually similar concepts presented at the same time), student performance is superior to either…
Factors that Influence the Success of Male and Female Computer Programming Students in College
NASA Astrophysics Data System (ADS)
Clinkenbeard, Drew A.
As the demand for a technologically skilled work force grows, experience and skill in computer science have become increasingly valuable for college students. However, the number of students graduating with computer science degrees is not growing proportional to this need. Traditionally several groups are underrepresented in this field, notably women and students of color. This study investigated elements of computer science education that influence academic achievement in beginning computer programming courses. The goal of the study was to identify elements that increase success in computer programming courses. A 38-item questionnaire was developed and administered during the Spring 2016 semester at California State University Fullerton (CSUF). CSUF is an urban public university comprised of about 40,000 students. Data were collected from three beginning programming classes offered at CSUF. In total 411 questionnaires were collected resulting in a response rate of 58.63%. Data for the study were grouped into three broad categories of variables. These included academic and background variables; affective variables; and peer, mentor, and role-model variables. A conceptual model was developed to investigate how these variables might predict final course grade. Data were analyzed using statistical techniques such as linear regression, factor analysis, and path analysis. Ultimately this study found that peer interactions, comfort with computers, computer self-efficacy, self-concept, and perception of achievement were the best predictors of final course grade. In addition, the analyses showed that male students exhibited higher levels of computer self-efficacy and self-concept compared to female students, even when they achieved comparable course grades. Implications and explanations of these findings are explored, and potential policy changes are offered.
Chen, Mei-Yen; Liou, Yiing-Mei; Wu, Jen-Yee
2008-03-01
Television and computers provide significant benefits for learning about the world. Some studies have linked excessive television (TV) watching or computer game playing to poorer health status or unhealthy behaviors among adolescents. However, studies of the relationships between watching TV or playing computer games and adolescents' adoption of health-promoting behavior have been limited. This study aimed to discover the relationship between time spent watching TV and on leisure use of computers and adolescents' health-promoting behavior, and associated factors. This paper used secondary data analysis from part of a health promotion project in Taoyuan County, Taiwan. A cross-sectional design was used, and purposive sampling was conducted among adolescents in the original project. A total of 660 participants answered the questions appropriately for this work between January and June 2004. Findings showed that the mean age of the respondents was 15.0 +/- 1.7 years. The mean numbers of TV watching hours were 2.28 and 4.07 on weekdays and weekends, respectively. The mean hours of leisure (non-academic) computer use were 1.64 and 3.38 on weekdays and weekends, respectively. Results indicated that adolescents spent significant time watching TV and using the computer, which was negatively associated with adopting health-promoting behaviors such as life appreciation, health responsibility, social support and exercise behavior. Moreover, being a boy, being overweight, living in a rural area, and being a middle-school student were significantly associated with spending long periods watching TV and using the computer. Therefore, primary health care providers should record the TV and non-academic computer time of youths when conducting health promotion programs, and educate parents on how to become good and healthy electronic media users.
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud
Cianfrocco, Michael A; Leschziner, Andres E
2015-01-01
The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available ‘off-the-shelf’ computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16–480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM. DOI: http://dx.doi.org/10.7554/eLife.06664.001 PMID:25955969
NASA Astrophysics Data System (ADS)
Galmed, A. H.; du Plessis, A.; le Roux, S. G.; Hartnick, E.; Von Bergmann, H.; Maaza, M.
2018-01-01
Laboratory X-ray computed tomography is an emerging technology for the 3D characterization and dimensional analysis of many types of materials. In this work we demonstrate the usefulness of this characterization method for the full three dimensional analysis of laser ablation craters, in the context of a laser induced breakdown spectroscopy setup. Laser induced breakdown spectroscopy relies on laser ablation for sampling the material of interest. We demonstrate here qualitatively (in images) and quantitatively (in terms of crater cone angles, depths, diameters and volume) laser ablation crater analysis in 3D for metal (aluminum) and rock (false gold ore). We show the effect of a Gaussian beam profile on the resulting crater geometry, as well as the first visual evidence of undercutting in the rock sample, most likely due to ejection of relatively large grains. The method holds promise for optimization of laser ablation setups especially for laser induced breakdown spectroscopy.
NASA Astrophysics Data System (ADS)
Donini, A.; Martin, S. M.; Bastiaans, R. J. M.; van Oijen, J. A.; de Goey, L. P. H.
2013-10-01
In the present paper a computational analysis of high-pressure confined premixed turbulent methane/air jet flames is presented. For this purpose, chemistry is reduced by use of the Flamelet Generated Manifold method [1], and the fluid flow is modeled in both LES and RANS contexts. The reaction evolution is described by the reaction progress variable, the heat loss is described by the enthalpy, and the turbulence effect on the reaction is represented by the progress variable variance. The interaction between chemistry and turbulence is considered through a presumed probability density function (PDF) approach. The use of FGM as a combustion model shows that combustion features at gas turbine conditions can be satisfactorily reproduced with reasonable computational effort. Furthermore, the present analysis indicates that the physical and chemical processes controlling carbon monoxide (CO) emissions can be captured only by means of unsteady simulations.
A stochastic multicriteria model for evidence-based decision making in drug benefit-risk analysis.
Tervonen, Tommi; van Valkenhoef, Gert; Buskens, Erik; Hillege, Hans L; Postmus, Douwe
2011-05-30
Drug benefit-risk (BR) analysis is based on firm clinical evidence regarding various safety and efficacy outcomes. In this paper, we propose a new and more formal approach for constructing a supporting multi-criteria model that fully takes into account the evidence on efficacy and adverse drug reactions. Our approach is based on the stochastic multi-criteria acceptability analysis methodology, which allows us to compute the typical value judgments that support a decision, to quantify decision uncertainty, and to compute a comprehensive BR profile. We construct a multi-criteria model for the therapeutic group of second-generation antidepressants. We assess fluoxetine and venlafaxine together with placebo according to incidence of treatment response and three common adverse drug reactions by using data from a published study. Our model shows that there are clear trade-offs among the treatment alternatives. Copyright © 2011 John Wiley & Sons, Ltd.
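The rank-acceptability computation at the core of stochastic multicriteria acceptability analysis can be sketched as below; the alternatives, criterion values, and weight distribution are made-up illustrations, not the paper's clinical data:

```python
# SMAA-style sketch: sample criterion weights uniformly from the simplex and count
# how often each alternative ranks first under an additive value model.
import numpy as np

rng = np.random.default_rng(0)
alternatives = ["placebo", "fluoxetine", "venlafaxine"]
# rows: alternatives; columns: criteria, already rescaled so that higher is better
values = np.array([
    [0.20, 0.95, 0.90, 0.92],   # e.g. low response rate, few adverse reactions
    [0.55, 0.70, 0.65, 0.75],
    [0.65, 0.55, 0.60, 0.55],
])

n_samples, n_crit = 10_000, values.shape[1]
first_rank = np.zeros(len(alternatives))
for _ in range(n_samples):
    w = rng.dirichlet(np.ones(n_crit))   # uniform weights on the simplex
    scores = values @ w                   # additive value model
    first_rank[scores.argmax()] += 1

for name, acc in zip(alternatives, first_rank / n_samples):
    print(f"{name}: first-rank acceptability ≈ {acc:.2f}")
```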
Comparative Investigation of Normal Modes and Molecular Dynamics of Hepatitis C NS5B Protein
NASA Astrophysics Data System (ADS)
Asafi, M. S.; Yildirim, A.; Tekpinar, M.
2016-04-01
Understanding the dynamics of proteins has many practical implications for finding cures for protein-related diseases. Normal mode analysis and molecular dynamics are widely used physics-based computational methods for investigating protein dynamics. In this work, we studied the dynamics of the Hepatitis C NS5B protein with molecular dynamics and normal mode analysis. Principal components obtained from a 100-nanosecond molecular dynamics simulation show good overlap with normal modes calculated with a coarse-grained elastic network model. Coarse-grained normal mode analysis takes at least an order of magnitude less time. Encouraged by this good overlap and the short computation times, we further analyzed the low-frequency normal modes of Hepatitis C NS5B. Motion directions and average spatial fluctuations have been analyzed in detail. Finally, the biological implications of these motions for drug design efforts against Hepatitis C infections have been elaborated.
Zhu, Xinjie; Zhang, Qiang; Ho, Eric Dun; Yu, Ken Hung-On; Liu, Chris; Huang, Tim H; Cheng, Alfred Sze-Lok; Kao, Ben; Lo, Eric; Yip, Kevin Y
2017-09-22
A genomic signal track is a set of genomic intervals associated with values of various types, such as measurements from high-throughput experiments. Analysis of signal tracks requires complex computational methods, which often make the analysts focus too much on the detailed computational steps rather than on their biological questions. Here we propose Signal Track Query Language (STQL) for simple analysis of signal tracks. It is a Structured Query Language (SQL)-like declarative language, which means one only specifies what computations need to be done but not how these computations are to be carried out. STQL provides a rich set of constructs for manipulating genomic intervals and their values. To run STQL queries, we have developed the Signal Track Analytical Research Tool (START, http://yiplab.cse.cuhk.edu.hk/start/ ), a system that includes a Web-based user interface and a back-end execution system. The user interface helps users select data from our database of around 10,000 commonly-used public signal tracks, manage their own tracks, and construct, store and share STQL queries. The back-end system automatically translates STQL queries into optimized low-level programs and runs them on a computer cluster in parallel. We use STQL to perform 14 representative analytical tasks. By repeating these analyses using bedtools, Galaxy and custom Python scripts, we show that the STQL solution is usually the simplest, and the parallel execution achieves significant speed-up with large data files. Finally, we describe how a biologist with minimal formal training in computer programming self-learned STQL to analyze DNA methylation data we produced from 60 pairs of hepatocellular carcinoma (HCC) samples. Overall, STQL and START provide a generic way for analyzing a large number of genomic signal tracks in parallel easily.
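STQL syntax is not reproduced here; the following plain-Python sketch only illustrates the kind of signal-track operation such a declarative query expresses (averaging a signal over intervals that overlap a set of regions), with made-up track contents:

```python
# Plain-Python analogue of a signal-track query: mean signal over overlapping intervals.
intervals = [  # (chrom, start, end, value), e.g. a methylation or coverage track
    ("chr1", 100, 200, 0.8),
    ("chr1", 250, 400, 0.2),
    ("chr1", 500, 650, 0.9),
]
regions = [("chr1", 150, 300), ("chr1", 600, 700)]  # e.g. regions of interest

def overlaps(chrom, start, end, region):
    r_chrom, r_start, r_end = region
    return chrom == r_chrom and start < r_end and r_start < end

for region in regions:
    hits = [v for c, s, e, v in intervals if overlaps(c, s, e, region)]
    mean = sum(hits) / len(hits) if hits else float("nan")
    print(region, "mean overlapping signal:", mean)
```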
[Cost analysis for navigation in knee endoprosthetics].
Cerha, O; Kirschner, S; Günther, K-P; Lützner, J
2009-12-01
Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark
2007-12-01
To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
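One of the better-performing combinations reported (spatial filtering, PSD features, SVM classification) can be outlined as a processing pipeline. The sketch below substitutes PCA for ICA for brevity and uses random numbers in place of real EEG; the channel count, sampling rate, band limits, and window length are assumptions.

```python
# Sketch of a spatial-filter -> PSD-feature -> SVM pipeline for single-trial EEG.
# Random data stand in for real recordings; all parameters are illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, fs = 120, 32, 512, 256
X = rng.standard_normal((n_trials, n_channels, n_samples))   # trials x channels x time
y = rng.integers(0, 2, n_trials)                             # left vs. right hand label

# Spatial filtering: project each trial onto a few principal components.
pca = PCA(n_components=8)
pca.fit(X.transpose(0, 2, 1).reshape(-1, n_channels))        # samples x channels

features = []
for trial in X:
    comps = pca.transform(trial.T)                           # time x components
    f, psd = welch(comps, fs=fs, nperseg=128, axis=0)
    band = (f >= 8) & (f <= 30)                              # mu/beta band power
    features.append(np.log(psd[band].mean(axis=0)))
features = np.array(features)

print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), features, y, cv=5).mean())
```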
ERIC Educational Resources Information Center
Kerins, John; Ramsay, Allan
2012-01-01
This paper reports on the development of a prototype tool which shows how learners can be helped to reflect upon the accuracy of their writing. Analysis of samples of freely written texts by intermediate and advanced learners of English as a foreign language (EFL) showed evidence of weakness in the use of tense and aspect. Computational discourse…
Aeromechanics Analysis of a Boundary Layer Ingesting Fan
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Reddy, T. S. R.; Herrick, Gregory P.; Shabbir, Aamir; Florea, Razvan V.
2013-01-01
Boundary layer ingesting propulsion systems have the potential to significantly reduce fuel burn, but these systems must overcome the challenges related to aeromechanics: fan flutter stability and forced response dynamic stresses. High-fidelity computational analysis of the fan aeromechanics is integral to the ongoing effort to design a boundary layer ingesting inlet and fan for fabrication and wind-tunnel test. A three-dimensional, time-accurate, Reynolds-averaged Navier-Stokes computational fluid dynamics code is used to study aerothermodynamic and aeromechanical behavior of the fan in response to both clean and distorted inflows. The computational aeromechanics analyses performed in this study show an intermediate design iteration of the fan to be flutter-free at the design conditions analyzed with both clean and distorted inflows. Dynamic stresses from forced response have been calculated for the design rotational speed. Additional work is ongoing to expand the analyses to off-design conditions and to on-resonance conditions.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
NASA Astrophysics Data System (ADS)
Herzberg, C.; Asimow, P. D.
2015-02-01
An upgrade of the PRIMELT algorithm for calculating primary magma composition is given together with its implementation in PRIMELT3 MEGA.xlsm software. It supersedes PRIMELT2.xls in correcting minor mistakes in melt fraction and computed Ni content of olivine, it identifies residuum mineralogy, and it provides a thorough analysis of uncertainties in mantle potential temperature and olivine liquidus temperature. The uncertainty analysis was made tractable by the computation of olivine liquidus temperatures as functions of pressure and partial melt MgO content between the liquidus and solidus. We present a computed anhydrous peridotite solidus in T-P space using relations amongst MgO, T and P along the solidus; it compares well with experiments on the solidus. Results of the application of PRIMELT3 to a wide range of basalts show that the mantle sources of ocean islands and large igneous provinces were hotter than oceanic spreading centers, consistent with earlier studies and expectations of the mantle plume model.
Nonlinear dimensionality reduction of electroencephalogram (EEG) for Brain Computer interfaces.
Teli, Mohammad Nayeem; Anderson, Charles
2009-01-01
Patterns in electroencephalogram (EEG) signals are analyzed for a Brain Computer Interface (BCI). An important aspect of this analysis is the work on transformations of high dimensional EEG data to low dimensional spaces in which we can classify the data according to mental tasks being performed. In this research we investigate how a Neural Network (NN) in an auto-encoder with bottleneck configuration can find such a transformation. We implemented two approximate second-order methods to optimize the weights of these networks, because the more common first-order methods are very slow to converge for networks like these with more than three layers of computational units. The resulting non-linear projections of time embedded EEG signals show interesting separations that are related to tasks. The bottleneck networks do indeed discover nonlinear transformations to low-dimensional spaces that capture much of the information present in EEG signals. However, the resulting low-dimensional representations do not improve classification rates beyond what is possible using Quadratic Discriminant Analysis (QDA) on the original time-lagged EEG.
NASA Astrophysics Data System (ADS)
Fei, Cheng-Wei; Bai, Guang-Chen
2014-12-01
To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distribution collaborative probabilistic design method based on support vector machine regression (SR), called DCSRM, is proposed by integrating the distribution collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea behind it is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is carried out to verify the proposed DCSRM. The results reveal that an optimal static blade-tip clearance of the HPT is obtained for the BTRRC design, improving the performance and reliability of the aeroengine. A comparison of methods shows that DCSRM has high computational accuracy and efficiency in BTRRC probabilistic analysis. The present research offers an effective way to perform reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
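The core ingredient, a support vector regression model used as a response surface for an expensive analysis, can be illustrated generically. The sketch below fits an SVR surrogate to a cheap analytic stand-in for the clearance computation and then reuses it for Monte Carlo evaluation; it is not the DCSRM formulation itself, and the toy function and distributions are assumptions.

```python
# Generic SVR response-surface sketch: fit a surrogate to an "expensive" analysis,
# then evaluate it many times for probabilistic design. The toy function stands in
# for the real blade-tip clearance computation.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

def expensive_analysis(x):
    # Placeholder for a thermal/structural analysis of tip clearance.
    return 0.5 + 0.1 * x[:, 0] - 0.05 * x[:, 1] ** 2 + 0.02 * np.sin(3 * x[:, 2])

X_train = rng.uniform(-1, 1, size=(200, 3))        # sampled design/random variables
y_train = expensive_analysis(X_train)

surrogate = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1e-3))
surrogate.fit(X_train, y_train)

# Monte Carlo with the cheap surrogate instead of the expensive analysis.
X_mc = rng.normal(0.0, 0.3, size=(100_000, 3))
clearance = surrogate.predict(X_mc)
print("P(clearance < 0.35) ~", np.mean(clearance < 0.35))
```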
Application of Dynamic Analysis in Semi-Analytical Finite Element Method.
Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus
2017-08-30
Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in the SAFEM and commercial finite element software ABAQUS to verify the accuracy and efficiency of the SAFEM. The verification shows that the computational accuracy of SAFEM is high enough and its computational time is much shorter than ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, the SAFEM is feasible to reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administration in assessing the pavement's state.
Computer-aided design analysis of 57-mm, angular-contact, cryogenic turbopump bearings
NASA Technical Reports Server (NTRS)
Armstrong, Elizabeth S.; Coe, Harold H.
1988-01-01
The Space Shuttle main engine high-pressure oxygen turbopumps have not experienced the service life required of them. This insufficiency has been due in part to the shortened life of the bearings. To improve the life of the existing turbopump bearings, an effort is under way to investigate bearing modifications that could be retrofitted into the present bearing cavity. Several bearing parameters were optimized using the computer program SHABERTH, which performs a thermomechanical simulation of a load support system. The computer analysis showed that improved bearing performance is feasible if low friction coefficients can be attained. Bearing geometries were optimized considering heat generation, equilibrium temperatures, and relative life. Thermal gradients through the bearings were found to be lower with liquid lubrication than with solid film lubrication, and a liquid oxygen coolant flowrate of approximately 4.0 kg/s was found to be optimal. This paper describes the analytical modeling used to determine these feasible modifications to improve bearing performance.
A ground track control algorithm for the Topographic Mapping Laser Altimeter (TMLA)
NASA Technical Reports Server (NTRS)
Blaes, V.; Mcintosh, R.; Roszman, L.; Cooley, J.
1993-01-01
This paper presents the results of an analysis of an algorithm that provides autonomous onboard orbit control using orbits determined with Global Positioning System (GPS) data. The algorithm uses the GPS data to (1) compute the ground track error relative to a fixed longitude grid, and (2) determine the altitude adjustment required to correct the longitude error. A program was written on a personal computer (PC) to test the concept for numerous altitudes and values of solar flux using a simplified orbit model including only the J2 zonal harmonic and simple orbit decay computations. The algorithm was then implemented in a precision orbit propagation program having a full range of perturbations. The analysis showed that, even with all perturbations (including actual time histories of solar flux variation), the algorithm could effectively control the spacecraft ground track and yield more than 99 percent Earth coverage in the time required to complete one coverage cycle on the fixed grid (220 to 230 days depending on altitude and overlap allowance).
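The relationship such a controller exploits is first order: the orbital period is T = 2π√(a³/μ), so a small semi-major-axis offset δa changes the period by δT = 3π√(a/μ)·δa, and the Earth rotates through an extra ω_E·δT of longitude each revolution. A hedged sketch of the resulting altitude-adjustment computation follows; it uses two-body relations only, ignores drag and other perturbations, and its sign conventions and numbers are illustrative rather than those of the TMLA algorithm.

```python
# First-order sketch of a ground-track maintenance computation: given an
# accumulated longitude error, estimate the semi-major-axis (altitude)
# adjustment that removes it over a chosen number of revolutions.
# Simplified two-body relations only; values are illustrative.
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
OMEGA_E = 7.2921159e-5     # Earth rotation rate, rad/s
R_E = 6378.137e3           # Earth equatorial radius, m

def altitude_adjustment(longitude_error_deg, altitude_m, revs_to_correct):
    a = R_E + altitude_m
    dT_per_da = 3.0 * math.pi * math.sqrt(a / MU)      # d(period)/d(semi-major axis)
    d_lambda = math.radians(longitude_error_deg)
    # Period offset needed so that Earth rotation absorbs the error over N revolutions.
    dT = -d_lambda / (OMEGA_E * revs_to_correct)
    return dT / dT_per_da                               # required change in a, meters

# Example: 0.05 deg eastward error, 700 km altitude, corrected over 30 revolutions.
print("delta-a [m]:", round(altitude_adjustment(0.05, 700e3, 30), 1))
```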
Computational modeling of latent-heat-storage in PCM modified interior plaster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fořt, Jan; Maděra, Jiří; Trník, Anton
2016-06-08
Latent heat storage systems represent a promising way to decrease the energy consumption of buildings in line with the sustainable development principles of the building industry. The present paper focuses on evaluating the effect of PCM incorporation on the thermal performance of cement-lime plasters. For basic characterization of the developed materials, matrix density, bulk density, and total open porosity are measured. Thermal conductivity is assessed by a transient impulse method. DSC analysis is used to identify the phase change temperature during the heating and cooling process. Using the DSC data, the temperature-dependent specific heat capacity is calculated. On the basis of the experiments performed, the expected improvement in the energy efficiency of a characteristic building envelope system in which the designed plasters are likely to be used is evaluated by computational analysis. The experimental and computational results show the potential of PCM-modified plasters to improve the thermal stability of buildings and moderate the interior climate.
Computational Analysis of the G-III Laminar Flow Glove
NASA Technical Reports Server (NTRS)
Malik, Mujeeb R.; Liao, Wei; Lee-Rausch, Elizabeth M.; Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan
2011-01-01
Under NASA's Environmentally Responsible Aviation Project, flight experiments are planned with the primary objective of demonstrating the Discrete Roughness Elements (DRE) technology for passive laminar flow control at chord Reynolds numbers relevant to transport aircraft. In this paper, we present a preliminary computational assessment of the Gulfstream-III (G-III) aircraft wing-glove designed to attain natural laminar flow for the leading-edge sweep angle of 34.6deg. Analysis for a flight Mach number of 0.75 shows that it should be possible to achieve natural laminar flow for twice the transition Reynolds number ever achieved at this sweep angle. However, the wing-glove needs to be redesigned to effectively demonstrate passive laminar flow control using DREs. As a by-product of the computational assessment, effect of surface curvature on stationary crossflow disturbances is found to be strongly stabilizing for the current design, and it is suggested that convex surface curvature could be used as a control parameter for natural laminar flow design, provided transition occurs via stationary crossflow disturbances.
Assessment of traffic noise levels in urban areas using different soft computing techniques.
Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D
2016-10-01
Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques in traffic noise prediction. Two mathematical models are proposed and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.
NASA Technical Reports Server (NTRS)
Ha Minh, H.; Viegas, J. R.; Rubesin, M. W.; Spalart, P.; Vandromme, D. D.
1989-01-01
The turbulent boundary layer under a freestream whose velocity varies sinusoidally in time around a zero mean is computed using two second-order turbulence closure models. The time- or phase-dependent behavior of the Reynolds stresses is analyzed and the results are compared to those of a previous Spalart-Baldwin direct simulation. Comparisons show that the second-order modeling is quite satisfactory for almost all phase angles, except in the relaminarization period, where the computations lead to a relatively high wall shear stress.
Reliability analysis of redundant systems. [a method to compute transition probabilities
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1974-01-01
A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation in the horizontal angle of the load; however, this sensitivity becomes insignificant as the degree of redundancy increases.
Distributed sensor networks: a cellular nonlinear network perspective.
Haenggi, Martin
2003-12-01
Large-scale networks of integrated wireless sensors are becoming increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry and wireless communications. Networking, self-organization, and distributed operation are crucial ingredients for harnessing the sensing, computing, and communication capabilities of the nodes into a complete system. This article shows that such networks can be considered as cellular nonlinear networks (CNNs), and that their analysis and design may greatly benefit from the rich theoretical results available for CNNs.
Analysis of high-order SNP barcodes in mitochondrial D-loop for chronic dialysis susceptibility.
Yang, Cheng-Hong; Lin, Yu-Da; Chuang, Li-Yeh; Chang, Hsueh-Wei
2016-10-01
Positively identifying disease-associated single nucleotide polymorphism (SNP) markers in genome-wide studies entails the complex association analysis of a huge number of SNPs. Such large numbers of SNP barcodes (SNP/genotype combinations) continue to pose serious computational challenges, especially for high-dimensional data. We propose a novel method for exploiting SNP barcodes based on differential evolution, termed IDE (improved differential evolution). IDE uses a "top combination strategy" to improve the ability of differential evolution to explore high-order SNP barcodes in high-dimensional data. We simulate disease data and use real chronic dialysis data to test four global optimization algorithms. In 48 simulated disease models, we show that IDE outperforms existing global optimization algorithms in terms of exploring ability and power to detect the specific SNP/genotype combinations with a maximum difference between cases and controls. In real data, we show that IDE can be used to evaluate the relative effects of each individual SNP on disease susceptibility. IDE generated significant SNP barcodes with less computational complexity than the other algorithms, making IDE ideally suited for the analysis of high-order SNP barcodes.
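The quantity that such a search maximizes, the difference between cases and controls for a specific SNP/genotype combination, can be written compactly; the differential evolution search over candidate barcodes is the part not shown. The sketch below scores one hypothetical third-order barcode on simulated genotype data and is not the IDE algorithm itself.

```python
# Scoring sketch for a high-order SNP barcode: how often a specific genotype
# combination occurs in cases versus controls. Data are simulated; the
# differential-evolution search over barcodes is not shown.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_snps = 400, 50
genotypes = rng.integers(0, 3, size=(n_samples, n_snps))   # 0/1/2 per SNP
is_case = rng.integers(0, 2, size=n_samples).astype(bool)

def barcode_score(snp_indices, genotype_combo):
    """Difference in occurrence of a genotype combination between cases and controls."""
    match = np.all(genotypes[:, snp_indices] == genotype_combo, axis=1)
    case_rate = match[is_case].mean()
    control_rate = match[~is_case].mean()
    return abs(case_rate - control_rate)

# Example third-order barcode: SNPs 4, 17, 23 with genotypes (2, 0, 1).
print("score:", barcode_score([4, 17, 23], np.array([2, 0, 1])))
```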
Simulation of Transcritical CO2 Refrigeration System with Booster Hot Gas Bypass in Tropical Climate
NASA Astrophysics Data System (ADS)
Santosa, I. D. M. C.; Sudirman; Waisnawa, IGNS; Sunu, PW; Temaja, IW
2018-01-01
Computer simulation has become important for performance analysis, since building an experimental rig involves high cost and long lead time, especially for a CO2 refrigeration system; modifying the rig also requires additional cost and time. One simulation program that is well suited to refrigeration systems is the Engineering Equation Solver (EES). For CO2 refrigeration systems, environmental issues have become a priority in system development, since carbon dioxide (CO2) is a natural and clean refrigerant. This study aims to analyze the effectiveness of an EES simulation of a transcritical CO2 refrigeration system with booster hot gas bypass at high outdoor temperatures. The research was carried out by theoretical study and numerical analysis of the refrigeration system using the EES program. Data input and simulation validation were obtained from experimental and secondary data. The results showed that the coefficient of performance (COP) decreased gradually as the outdoor temperature increased. The program can calculate the performance of the refrigeration system quickly and accurately, so it provides a useful preliminary reference for improving CO2 refrigeration system design for hot climates.
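The performance figure tracked by such a simulation, the coefficient of performance, is the ratio of refrigeration effect to compressor work evaluated from cycle state-point enthalpies. The sketch below shows that bookkeeping with placeholder enthalpies; it is not the EES model from this study.

```python
# COP bookkeeping sketch for one stage of a transcritical CO2 booster cycle.
# Enthalpy values are illustrative placeholders, not EES results.
def cop(h_evap_out, h_evap_in, h_comp_out, h_comp_in, m_dot=1.0):
    """Coefficient of performance from state-point enthalpies (kJ/kg)."""
    q_evap = m_dot * (h_evap_out - h_evap_in)     # refrigeration effect
    w_comp = m_dot * (h_comp_out - h_comp_in)     # compressor work
    return q_evap / w_comp

# Placeholder state points (kJ/kg) for a warm-climate operating condition.
print("COP ~", round(cop(h_evap_out=435.0, h_evap_in=255.0,
                         h_comp_out=480.0, h_comp_in=435.0), 2))
```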
Neptune Aerocapture Systems Analysis
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae
2004-01-01
A Neptune Aerocapture Systems Analysis is completed to determine the feasibility, benefit and risk of an aeroshell aerocapture system for Neptune and to identify technology gaps and technology performance goals. The high fidelity systems analysis is completed by a five center NASA team and includes the following disciplines and analyses: science; mission design; aeroshell configuration screening and definition; interplanetary navigation analyses; atmosphere modeling; computational fluid dynamics for aerodynamic performance and database definition; initial stability analyses; guidance development; atmospheric flight simulation; computational fluid dynamics and radiation analyses for aeroheating environment definition; thermal protection system design, concepts and sizing; mass properties; structures; spacecraft design and packaging; and mass sensitivities. Results show that aerocapture can deliver 1.4 times more mass to Neptune orbit than an all-propulsive system for the same launch vehicle. In addition aerocapture results in a 3-4 year reduction in trip time compared to all-propulsive systems. Aerocapture is feasible and performance is adequate for the Neptune aerocapture mission. Monte Carlo simulation results show 100% successful capture for all cases including conservative assumptions on atmosphere and navigation. Enabling technologies for this mission include TPS manufacturing; and aerothermodynamic methods and validation for determining coupled 3-D convection, radiation and ablation aeroheating rates and loads, and the effects on surface recession.
Petkovic, Sonja; Badelt, Stefan; Flamm, Christoph; Delcea, Mihaela
2015-01-01
Reversible chemistry allowing for assembly and disassembly of molecular entities is important for biological self-organization. Thus, ribozymes that support both cleavage and formation of phosphodiester bonds may have contributed to the emergence of functional diversity and increasing complexity of regulatory RNAs in early life. We have previously engineered a variant of the hairpin ribozyme that shows how ribozymes may have circularized or extended their own length by forming concatemers. Using the Vienna RNA package, we now optimized this hairpin ribozyme variant and selected four different RNA sequences that were expected to circularize more efficiently or form longer concatemers upon transcription. (Two-dimensional) PAGE analysis confirms that (i) all four selected ribozymes are catalytically active and (ii) high yields of cyclic species are obtained. AFM imaging in combination with RNA structure prediction enabled us to calculate the distributions of monomers and self-concatenated dimers and trimers. Our results show that computationally optimized molecules do form reasonable amounts of trimers, which has not been observed for the original system so far, and we demonstrate that the combination of theoretical prediction, biochemical and physical analysis is a promising approach toward accurate prediction of ribozyme behavior and design of ribozymes with predefined functions. PMID:25999318
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Computationally expensive engineering simulations can often hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
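The linear variant described, principal component analysis to compress a high-dimensional output field combined with radial basis function interpolation over the design parameters, can be sketched with standard libraries. The snapshot data below are synthetic, and the component count and kernel are assumptions.

```python
# Linear reduced-order surrogate sketch: PCA compresses a high-dimensional
# output field, and an RBF interpolant maps design parameters to PCA
# coefficients. Synthetic data stand in for expensive simulation snapshots.
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
n_samples, n_params, n_field = 60, 2, 5000

params = rng.uniform(-1, 1, size=(n_samples, n_params))          # design variables
grid = np.linspace(0, 1, n_field)
snapshots = np.sin(5 * np.outer(params[:, 0], grid)) \
          + 0.3 * np.outer(params[:, 1], grid ** 2)               # fake output fields

pca = PCA(n_components=5)
coeffs = pca.fit_transform(snapshots)                             # reduced coordinates
rbf = RBFInterpolator(params, coeffs, kernel="thin_plate_spline")

new_design = np.array([[0.2, -0.4]])
predicted_field = pca.inverse_transform(rbf(new_design))
print("predicted field shape:", predicted_field.shape)
```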
Affective assessment of computer users based on processing the pupil diameter signal.
Ren, Peng; Barreto, Armando; Gao, Ying; Adjouadi, Malek
2011-01-01
Detecting affective changes of computer users is a current challenge in human-computer interaction which is being addressed with the help of biomedical engineering concepts. This article presents a new approach to recognize the affective state ("relaxation" vs. "stress") of a computer user from analysis of his/her pupil diameter variations caused by sympathetic activation. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw Pupil Diameter (PD) signal. Then three features are extracted from the preprocessed PD signal for the affective state classification. Finally, a random tree classifier is implemented, achieving an accuracy of 86.78%. In these experiments the Eye Blink Frequency (EBF), is also recorded and used for affective state classification, but the results show that the PD is a more promising physiological signal for affective assessment.
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method that tries to find the optimal α-cut set approximating the fuzzy set. Stability of the Steiner point of a fuzzy set is also analyzed. Experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image; both strategies show their own advantages.
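For a crisp convex set K in the plane, the Steiner point can be written as s(K) = (1/π) ∫ h_K(u) u du over the unit circle, where h_K is the support function; the first strategy above then combines the Steiner points of a series of α-cuts. The sketch below approximates that integral for point-set α-cuts and combines them with uniform weights, which is an assumption; the membership function is also made up for illustration.

```python
# Sketch: Steiner point of a finite point set via the support-function integral,
# then a uniform-weight combination over alpha-cuts of a fuzzy point set.
# The membership values and the uniform weighting are illustrative assumptions.
import numpy as np

def steiner_point(points, n_dirs=720):
    """Approximate s(K) = (1/pi) * integral over S^1 of h_K(u) * u du."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # unit directions
    support = (points @ dirs.T).max(axis=0)                      # h_K(u) for each u
    return (2.0 / n_dirs) * (support[:, None] * dirs).sum(axis=0)

rng = np.random.default_rng(4)
pts = rng.uniform(-1, 1, size=(500, 2))
membership = np.clip(1.0 - np.linalg.norm(pts, axis=1), 0.0, 1.0)  # fuzzy membership

alphas = [0.2, 0.4, 0.6]
cut_points = [steiner_point(pts[membership >= a]) for a in alphas]
fuzzy_steiner = np.mean(cut_points, axis=0)                       # uniform weights
print("Steiner point of fuzzy set ~", fuzzy_steiner)
```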
NASA Astrophysics Data System (ADS)
La Barbera, Selina; Vincent, Adrien F.; Vuillaume, Dominique; Querlioz, Damien; Alibart, Fabien
2016-12-01
Bio-inspired computing today represents a major challenge at different levels, ranging from materials science for the design of innovative devices and circuits to computer science for understanding the key features required for processing natural data. In this paper, we propose a detailed analysis of resistive switching dynamics in electrochemical metallization cells for the implementation of synaptic plasticity. We show how filament stability associated with the Joule effect during switching can be used to emulate key synaptic features such as the short-term to long-term plasticity transition and spike-timing-dependent plasticity. Furthermore, an interplay between these different synaptic features is demonstrated for object motion detection in a spike-based neuromorphic circuit. System-level simulations show robust learning and promising synaptic operation, paving the way to complex bio-inspired computing systems composed of innovative memory devices.
Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python
Laura, Jason R.; Rey, Sergio J.
2017-01-01
Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
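The central computation, singular values of the return difference matrix across frequency together with their gradients with respect to a controller parameter, can be sketched directly with finite differences. The plant and gain below are illustrative and are not the X-29 control laws; the scaling step of the actual technique is omitted.

```python
# Sketch: minimum singular value of the return difference matrix I + G(jw)K
# across frequency, and its finite-difference gradient with respect to a gain.
# The state-space model and gain are illustrative, not the X-29 control laws.
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def G(w):
    """Plant frequency response C (jwI - A)^-1 B."""
    return C @ np.linalg.solve(1j * w * np.eye(2) - A, B)

def min_sv_return_difference(k, w):
    K = np.array([[k]])                      # static output-feedback gain
    rd = np.eye(1) + G(w) @ K                # return difference matrix
    return np.linalg.svd(rd, compute_uv=False).min()

freqs = np.logspace(-1, 2, 200)
k0, dk = 2.0, 1e-6
sigma = np.array([min_sv_return_difference(k0, w) for w in freqs])
grad = np.array([(min_sv_return_difference(k0 + dk, w) - s) / dk
                 for w, s in zip(freqs, sigma)])
worst = sigma.argmin()
print("min sigma:", sigma[worst], "d(sigma)/dk there:", grad[worst])
```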
Valkonen, Mira; Ruusuvuori, Pekka; Kartasalo, Kimmo; Nykter, Matti; Visakorpi, Tapio; Latonen, Leena
2017-01-01
Cancer involves histological changes in tissue, which is of primary importance in pathological diagnosis and research. Automated histological analysis requires ability to computationally separate pathological alterations from normal tissue with all its variables. On the other hand, understanding connections between genetic alterations and histological attributes requires development of enhanced analysis methods suitable also for small sample sizes. Here, we set out to develop computational methods for early detection and distinction of prostate cancer-related pathological alterations. We use analysis of features from HE stained histological images of normal mouse prostate epithelium, distinguishing the descriptors for variability between ventral, lateral, and dorsal lobes. In addition, we use two common prostate cancer models, Hi-Myc and Pten+/− mice, to build a feature-based machine learning model separating the early pathological lesions provoked by these genetic alterations. This work offers a set of computational methods for separation of early neoplastic lesions in the prostates of model mice, and provides proof-of-principle for linking specific tumor genotypes to quantitative histological characteristics. The results obtained show that separation between different spatial locations within the organ, as well as classification between histologies linked to different genetic backgrounds, can be performed with very high specificity and sensitivity. PMID:28317907
Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns
NASA Astrophysics Data System (ADS)
Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.
2014-02-01
Dynamic laser speckle is a phenomenon in which the optical patterns formed by illuminating a changing surface with coherent light are interpreted; the dynamic change of the speckle patterns caused by biological material is known as biospeckle. Usually, these patterns of optical interference evolving in time are analyzed by graphical or numerical methods, and analysis in the frequency domain has also been an option, although it involves large computational requirements and demands new approaches to filter the images in time. Principal component analysis (PCA) works with the statistical decorrelation of data and can be used for data filtering. In this context, the present work evaluated the PCA technique for filtering biospeckle image data in time, aiming to reduce computation time and improve the robustness of the filtering. Sixty-four biospeckle images observed over time in a maize seed were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical methods for biospeckle analysis. The results showed the potential of the PCA tool for filtering dynamic laser speckle data, with the definition of principal-component markers related to the biological phenomena and with the advantage of fast computational processing.
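The filtering step described, decorrelating a stack of speckle images with PCA and reconstructing the sequence from a subset of components before the usual biospeckle indices are computed, can be sketched as follows. The image stack is synthetic and the number of retained components is an assumption.

```python
# Sketch of PCA-based temporal filtering of a biospeckle image stack:
# each frame is a sample, pixels are variables, and the sequence is
# reconstructed from the leading principal components. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_frames, height, width = 64, 64, 64
stack = rng.random((n_frames, height, width))          # stand-in for 64 speckle images

X = stack.reshape(n_frames, -1)                        # frames x pixels
pca = PCA(n_components=10)
scores = pca.fit_transform(X)                          # decorrelated temporal components
X_filtered = pca.inverse_transform(scores)             # reconstruction from 10 components
filtered_stack = X_filtered.reshape(n_frames, height, width)

print("explained variance:", pca.explained_variance_ratio_.sum().round(3))
# The filtered stack can now be fed to routine graphical/numerical biospeckle methods.
```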
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
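The per-frequency cost that such schemes attack comes from solving (jωI - A)x = B for a large, sparse state matrix in modal coordinates. A generic sparse-solve sketch of the frequency response H(ω) = C(jωI - A)⁻¹B is shown below; the block-diagonal modal structure and the sizes are illustrative, and the linear-cost recursion of the present formulation is not reproduced.

```python
# Generic sketch: frequency response H(w) = C (jwI - A)^{-1} B computed with a
# sparse factorization at each frequency. The 2x2 modal blocks in A mimic a
# normal-mode structural model; sizes and values are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_modes = 300
omega_n = np.linspace(1.0, 100.0, n_modes)            # modal frequencies (rad/s)
zeta = 0.01                                           # modal damping ratio

blocks = [np.array([[0.0, 1.0],
                    [-w ** 2, -2.0 * zeta * w]]) for w in omega_n]
A = sp.block_diag(blocks, format="csc")               # sparse block-diagonal state matrix
n = A.shape[0]
B = sp.random(n, 2, density=0.05, format="csc", random_state=0)
C = sp.random(3, n, density=0.05, format="csc", random_state=1)
I = sp.identity(n, format="csc")
B_dense = B.toarray().astype(complex)

freqs = np.linspace(0.5, 120.0, 50)
H = []
for w in freqs:
    lu = spla.splu((1j * w * I - A).tocsc())          # sparse LU at this frequency
    X = lu.solve(B_dense)                             # solve for all inputs at once
    H.append(C @ X)                                   # 3x2 response matrix at w
print("computed", len(H), "frequency points; |H| at first point:\n", np.abs(H[0]))
```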
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
A mathematical model of an active control landing gear for load control during impact and roll-out
NASA Technical Reports Server (NTRS)
Mcgehee, J. R.; Carden, H. D.
1976-01-01
A mathematical model of an active control landing gear (ACOLAG) was developed and programmed for operation on a digital computer. The mathematical model includes theoretical subsonic aerodynamics; first-mode wing bending and torsional characteristics; oleo-pneumatic shock strut with fit and binding friction; closed-loop, series-hydraulic control; empirical tire force-deflection characteristics; antiskid braking; and sinusoidal or random runway roughness. The mathematical model was used to compute the loads and motions for a simulated vertical drop test and a simulated landing impact of a conventional (passive) main landing gear designed for a 2268-kg (5000-lbm) class airplane. Computations were also made for a simply modified version of the passive gear including a series-hydraulic active control system. Comparison of computed results for the passive gear with experimental data shows that the active control landing gear analysis is valid for predicting the loads and motions of an airplane during a symmetrical landing. Computed results for the series-hydraulic active control in conjunction with the simply modified passive gear show that 20- to 30-percent reductions in wing force, relative to those occurring with the modified passive gear, can be obtained during the impact phase of the landing. These reductions in wing force could result in substantial increases in fatigue life of the structure.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
Navier-Stokes analysis of an oxidizer turbine blade with tip clearance
NASA Technical Reports Server (NTRS)
Gibeling, Howard J.; Sabnis, Jayant S.
1992-01-01
The Gas Generator Oxidizer Turbine (GGOT) Blade is being analyzed by various investigators under the NASA MSFC sponsored Turbine Stage Technology Team design effort. The present work concentrates on the tip clearance region flow and associated losses; however, flow details for the passage region are also obtained in the simulations. The present calculations simulate the rotor blade row in a rotating reference frame with the appropriate coriolis and centrifugal acceleration terms included in the momentum equation. The upstream computational boundary is located about one axial chord from the blade leading edge. The boundary conditions at this location were determined by using a Euler analysis without the vanes to obtain approximately the same flow profiles at the rotor as were obtained with the Euler stage analysis including the vanes. Inflow boundary layer profiles are then constructed assuming the skin friction coefficient at both the hub and the casing. The downstream computational boundary is located about one axial chord from the blade trailing edge, and the circumferentially averaged static pressure at this location was also obtained from the Euler analysis. Results were obtained for the 3-D baseline GGOT geometry at the full scale design Reynolds number. Details of the clearance region flow behavior and blade pressure distributions were computed. The spanwise variation in blade loading distributions are shown, and circumferentially averaged spanwise distributions of total pressure, total temperature, Mach number, and flow angle are shown at several axial stations. The spanwise variation of relative total pressure loss shows a region of high loss in the region near the casing. Particle traces in the near tip region show vortical behavior of the fluid which passes through the clearance region and exits at the downstream edge of the gap.
Conjugate Analysis of Two-Dimensional Ablation and Pyrolysis in Rocket Nozzles
NASA Astrophysics Data System (ADS)
Cross, Peter G.
The development of a methodology and computational framework for performing conjugate analyses of transient, two-dimensional ablation of pyrolyzing materials in rocket nozzle applications is presented. This new engineering methodology comprehensively incorporates fluid-thermal-chemical processes relevant to nozzles and other high temperature components, making it possible, for the first time, to rigorously capture the strong interactions and interdependencies that exist between the reacting flowfield and the ablating material. By basing thermal protection system engineering more firmly on first principles, improved analysis accuracy can be achieved. The computational framework developed in this work couples a multi-species, reacting flow solver to a two-dimensional material response solver. New capabilities are added to the flow solver in order to be able to model unique aspects of the flow through solid rocket nozzles. The material response solver is also enhanced with new features that enable full modeling of pyrolyzing, anisotropic materials with a true two-dimensional treatment of the porous flow of the pyrolysis gases. Verification and validation studies demonstrating correct implementation of these new models in the flow and material response solvers are also presented. Five different treatments of the surface energy balance at the ablating wall, with increasing levels of fidelity, are investigated. The Integrated Equilibrium Surface Chemistry (IESC) treatment computes the surface energy balance and recession rate directly from the diffusive fluxes at the ablating wall, without making transport coefficient or unity Lewis number assumptions, or requiring pre-computed surface thermochemistry tables. This method provides the highest level of fidelity, and can inherently account for the effects that recession, wall temperature, blowing, and the presence of ablation product species in the boundary layer have on the flowfield and ablation response. Multiple decoupled and conjugate ablation analysis studies for the HIPPO nozzle test case are presented. Results from decoupled simulations show sensitivity to the wall temperature profile used within the flow solver, indicating the need for conjugate analyses. Conjugate simulations show that the thermal response of the nozzle is relatively insensitive to the choice of the surface energy balance treatment. However, the surface energy balance treatment is found to strongly affect the surface recession predictions. Out of all the methods considered, the IESC treatment produces surface recession predictions with the best agreement to experimental data. These results show that the increased fidelity provided by the proposed conjugate ablation modeling methodology produces improved analysis accuracy, as desired.
Satellite Imagery Analysis for Automated Global Food Security Forecasting
NASA Astrophysics Data System (ADS)
Moody, D.; Brumby, S. P.; Chartrand, R.; Keisler, R.; Mathis, M.; Beneke, C. M.; Nicholaeff, D.; Skillman, S.; Warren, M. S.; Poehnelt, J.
2017-12-01
The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. Cloud computing and storage, combined with recent advances in machine learning, are enabling understanding of the world at a scale and at a level of detail never before feasible. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and that can scale with the high-rate and dimensionality of imagery being collected. We focus on the problem of monitoring food crop productivity across the Middle East and North Africa, and show how an analysis-ready, multi-sensor data platform enables quick prototyping of satellite imagery analysis algorithms, from land use/land cover classification and natural resource mapping, to yearly and monthly vegetative health change trends at the structural field level.
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.
2015-07-01
The aim of this study was to investigate the possibility of using computer image analysis methods for the assessment and classification of morphological variability and the state of health of the horse navicular bone. The assumption was that classification could be based on information contained in two-dimensional digital images of the navicular bone together with information about the horse's health. The first step in the research was to define the classes of analyzed bones and then to use computer image analysis methods to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as the side of the hoof, the navicular syndrome grade (scale 0-3), type, sex, age, weight, information about lace, and information about heel. This paper introduces the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can serve as an introduction to a non-invasive way of assessing the condition of the horse navicular bone.
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of migration point as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
Annotation analysis for testing drug safety signals using unstructured clinical notes
2012-01-01
Background: The electronic surveillance for adverse drug events is largely based upon the analysis of coded data from reporting systems. Yet, the vast majority of electronic health data lies embedded within the free text of clinical notes and is not gathered into centralized repositories. With increasing access to large volumes of electronic medical data, in particular the clinical notes, it may be possible to computationally encode and test drug safety signals in an active manner. Results: We describe the application of simple annotation tools to clinical text and the mining of the resulting annotations to compute the risk of myocardial infarction for patients with rheumatoid arthritis who take Vioxx. Our analysis clearly reveals elevated risks for myocardial infarction in rheumatoid arthritis patients taking Vioxx (odds ratio 2.06) before 2005. Conclusions: Our results show that it is possible to apply annotation analysis methods for testing hypotheses about drug safety using electronic medical records. PMID:22541596
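The reported association can be reproduced in form from a 2x2 table of exposure (Vioxx) versus outcome (myocardial infarction). The counts below are placeholders chosen only to illustrate the arithmetic, not the study's data.

```python
# Odds ratio with a 95% confidence interval from a 2x2 contingency table.
# Counts are illustrative placeholders, not the Vioxx study data.
import math

a, b = 30, 170    # exposed: with MI, without MI
c, d = 15, 185    # unexposed: with MI, without MI

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```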
Yokoyama, Shunichi; Kajiya, Yoriko; Yoshinaga, Takuma; Tani, Atsushi; Hirano, Hirofumi
2014-06-01
In the diagnosis of Alzheimer's disease (AD), discrepancies are often observed between magnetic resonance imaging (MRI) and brain perfusion single-photon emission computed tomography (SPECT) findings. MRI, brain perfusion SPECT, and amyloid positron emission tomography (PET) findings were compared in patients with mild cognitive impairment or early AD to clarify the discrepancies between imaging modalities. Several imaging markers were investigated, including the cortical average standardized uptake value ratio on amyloid PET, the Z-score of a voxel-based specific regional analysis system for AD on MRI, periventricular hyperintensity grade, deep white matter hyperintense signal grade, number of microbleeds, and three indicators of the easy Z-score imaging system for a specific SPECT volume-of-interest analysis. Based on the results of the regional analysis and the three indicators, we classified patients into four groups and then compared the results of amyloid PET, periventricular hyperintensity grade, deep white matter hyperintense signal grade, and the numbers of microbleeds among the groups. The amyloid deposition was the highest in the group that presented typical AD findings on both the regional analysis and the three indicators. The two groups that showed an imaging discrepancy between the regional analysis and the three indicators demonstrated intermediate amyloid deposition findings compared with the typical and atypical groups. The patients who showed hippocampal atrophy on the regional analysis and atypical AD findings using the three indicators were approximately 60% amyloid-negative. The mean periventricular hyperintensity grade was highest in the typical group. Patients showing discrepancies between MRI and SPECT demonstrated intermediate amyloid deposition findings compared with patients who showed typical or atypical findings. Strong white matter signal abnormalities on MRI in patients who presented typical AD findings provided further evidence for the involvement of vascular factors in AD. © 2014 The Authors. Psychogeriatrics © 2014 Japanese Psychogeriatric Society.
Bhatt, Ishan S; Guthrie, O'neil
2017-06-01
Bilateral audiometric notch (BN) at 4000-6000 Hz was identified as a noise-induced hearing loss (NIHL) phenotype for genetic association analysis in college-aged musicians. This study analysed BN in a sample of US youth. Prevalence of the BN within the study sample was determined and logistic-regression analyses were performed to identify audiologic and other demographic factors associated with BN. Computer-simulated "flat" audiograms were used to estimate potential influence of false-positive rates in estimating the prevalence of the BN. 2348 participants (12-19 years) following the inclusion criteria were selected from the National Health and Nutrition Examination Survey data (2005-2010). The prevalence of BN was 16.6%. Almost 55.6% of the participants showed notch in at least one ear. Noise exposure, gender, ethnicity and age showed significant relationship with the BN. Computer simulation revealed that 5.5% of simulated participants with "flat" audiograms showed BN. Association of noise exposure with BN suggests that it is a useful NIHL phenotype for genetic association analyses. However, further research is necessary to reduce false-positive rates in notch identification.
Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.
2016-01-01
In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
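The kind of computational-overhead measurement described can be reproduced in miniature with a standard HMAC implementation by timing the authentication of a message of the size exchanged in one communication slot. The message size, key length, and iteration count below are assumptions, and this user-space micro-benchmark is not the kernel-level setup used in the paper.

```python
# Micro-benchmark sketch: per-message cost of HMAC-SHA256 authentication,
# as a stand-in for measuring the computational overhead added to a
# time-triggered message exchange. Message size and key are illustrative.
import hmac
import hashlib
import os
import time

key = os.urandom(32)
message = os.urandom(64)          # assumed payload of one TT communication slot
iterations = 100_000

start = time.perf_counter()
for _ in range(iterations):
    tag = hmac.new(key, message, hashlib.sha256).digest()
elapsed = time.perf_counter() - start

print(f"{elapsed / iterations * 1e6:.2f} us per message, tag length {len(tag)} bytes")
# A receiver-side check would use hmac.compare_digest(tag, received_tag).
```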
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeung, Yu-Hong; Pothen, Alex; Halappanavar, Mahantesh
We present an augmented matrix approach to update the solution to a linear system of equations when the coefficient matrix is modified by a few elements within a principal submatrix. This problem arises in the dynamic security analysis of a power grid, where operators need to perform N-x contingency analysis, i.e., determine the state of the system when up to x links out of N fail. Our algorithms augment the coefficient matrix to account for the changes in it, and then compute the solution to the augmented system without refactoring the modified matrix. We provide two algorithms, a direct method and a hybrid direct-iterative method, for solving the augmented system. We also exploit the sparsity of the matrices and vectors to accelerate the overall computation. Our algorithms are compared on three power grids with PARDISO, a parallel direct solver, and CHOLMOD, a direct solver with the ability to modify the Cholesky factors of the coefficient matrix. We show that our augmented algorithms outperform PARDISO (by two orders of magnitude) and CHOLMOD (by a factor of up to 5). Further, our algorithms scale better than CHOLMOD as the number of elements updated increases. The solutions are computed with high accuracy. Our algorithms are capable of computing N-x contingency analysis on a 778K-bus grid, updating a solution with x = 20 elements in 1.6 x 10^-2 seconds on an Intel Xeon processor.
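The sparse implementation described in the abstract is not reproduced here; the dense sketch below only illustrates the underlying idea of an augmented (bordered) system solved by block elimination, so that a modification confined to a few entries can be handled with the existing factorization of the original matrix plus one small dense solve. All sizes and the low-rank form of the modification are assumptions for the example.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_updated(lu_piv, b, U, W, V):
    """
    Solve (A + U @ W @ V.T) x = b reusing an existing LU factorization of A.

    A modification of a few entries inside a principal submatrix can be written
    in low-rank form U W V^T. The augmented system
        [ A      U      ] [x]   [b]
        [ V^T   -inv(W) ] [y] = [0]
    is solved by block elimination, which needs only triangular solves with the
    already-factored A plus one small (k x k) dense solve.
    """
    Ainv_b = lu_solve(lu_piv, b)            # A^{-1} b
    Ainv_U = lu_solve(lu_piv, U)            # A^{-1} U
    S = np.linalg.inv(W) + V.T @ Ainv_U     # small Schur complement
    y = np.linalg.solve(S, V.T @ Ainv_b)
    return Ainv_b - Ainv_U @ y

# Toy check on a random 200 x 200 system with a rank-3 modification.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
b = rng.standard_normal(200)
U = rng.standard_normal((200, 3))
V = rng.standard_normal((200, 3))
W = np.diag(rng.standard_normal(3))

lu_piv = lu_factor(A)                       # factor A once, reuse for every contingency
x = solve_updated(lu_piv, b, U, W, V)
print(np.allclose((A + U @ W @ V.T) @ x, b))   # True
```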
Khalil, Wael; EzEldeen, Mostafa; Van De Casteele, Elke; Shaheen, Eman; Sun, Yi; Shahbazian, Maryam; Olszewski, Raphael; Politis, Constantinus; Jacobs, Reinhilde
2016-03-01
Our aim was to determine the accuracy of 3-dimensional reconstructed models of teeth compared with the natural teeth by using 4 different 3-dimensional printers. This in vitro study was carried out using 2 intact, dry adult human mandibles, which were scanned with cone beam computed tomography. Premolars were selected for this study. Dimensional differences between natural teeth and the printed models were evaluated directly by using volumetric differences and indirectly through optical scanning. Analysis of variance, Pearson correlation, and Bland-Altman plots were applied for statistical analysis. Volumetric measurements from natural teeth and fabricated models, either by the direct method (the Archimedes principle) or by the indirect method (optical scanning), showed no statistical differences. The mean volume difference ranged between 3.1 mm³ (0.7%) and 4.4 mm³ (1.9%) for the direct measurement, and between -1.3 mm³ (-0.6%) and 11.9 mm³ (+5.9%) for the optical scan. A surface part comparison analysis showed that 90% of the values revealed a distance deviation within the interval 0 to 0.25 mm. Current results showed a high accuracy of all printed models of teeth compared with natural teeth. This outcome opens perspectives for clinical use of cost-effective 3-dimensional printed teeth for surgical procedures, such as tooth autotransplantation. Copyright © 2016 Elsevier Inc. All rights reserved.
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, J.
1999-01-01
A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
StrAuto: automation and parallelization of STRUCTURE analysis.
Chhatre, Vikram E; Emerson, Kevin J
2017-03-24
Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational load of this analysis, it does not fully automate the use of replicate STRUCTURE analysis runs required for downstream inference of optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto, to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach in addition to implementing parallel computation - a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and available for download from http://strauto.popgen.org .
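StrAuto's own pipeline is not reproduced here; the sketch below only illustrates the core idea of distributing replicate STRUCTURE runs over a process pool before the Evanno ΔK step. The command-line flags shown for the `structure` executable are illustrative assumptions and should be adapted to the local installation and parameter files.

```python
import itertools
import subprocess
from multiprocessing import Pool

K_RANGE = range(1, 11)      # candidate values of K
REPLICATES = 10             # replicate runs per K, needed for the Evanno delta-K step

def run_structure(job):
    """Launch one STRUCTURE run; the flags below are illustrative, not verified."""
    k, rep = job
    out = f"results/K{k}_rep{rep}"
    cmd = ["structure", "-K", str(k), "-m", "mainparams",
           "-e", "extraparams", "-o", out]          # assumed CLI, adjust to your setup
    subprocess.run(cmd, check=True)
    return out

if __name__ == "__main__":
    jobs = list(itertools.product(K_RANGE, range(REPLICATES)))
    with Pool(processes=8) as pool:                 # distribute runs over 8 cores
        outputs = pool.map(run_structure, jobs)
    print(f"finished {len(outputs)} runs; pass them to the Evanno delta-K step")
```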
ProteMiner-SSM: a web server for efficient analysis of similar protein tertiary substructures.
Chang, Darby Tien-Hau; Chen, Chien-Yu; Chung, Wen-Chin; Oyang, Yen-Jen; Juan, Hsueh-Fen; Huang, Hsuan-Cheng
2004-07-01
Analysis of protein-ligand interactions is a fundamental issue in drug design. As the detailed and accurate analysis of protein-ligand interactions involves calculation of binding free energy based on thermodynamics and even quantum mechanics, which is highly expensive in terms of computing time, conformational and structural analysis of proteins and ligands has been widely employed as a screening process in computer-aided drug design. In this paper, a web server called ProteMiner-SSM designed for efficient analysis of similar protein tertiary substructures is presented. In one experiment reported in this paper, the web server has been exploited to obtain some clues about a biochemical hypothesis. The main distinction in the software design of the web server is the filtering process incorporated to expedite the analysis. The filtering process extracts the residues located in the caves of the protein tertiary structure for analysis and operates with O(n log n) time complexity, where n is the number of residues in the protein. In comparison, the alpha-hull algorithm, which is a widely used algorithm in computer graphics for identifying those instances that are on the contour of a three-dimensional object, features O(n^2) time complexity. Experimental results show that the filtering process presented in this paper is able to speed up the analysis by a factor ranging from 3.15 to 9.37. The ProteMiner-SSM web server can be found at http://proteminer.csie.ntu.edu.tw/. There is a mirror site at http://p4.sbl.bc.sinica.edu.tw/proteminer/.
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which are a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
Computer-composite mapping for geologists
van Driel, J.N.
1980-01-01
A computer program for overlaying maps has been tested and evaluated as a means for producing geologic derivative maps. Four maps of the Sugar House Quadrangle, Utah, were combined, using the Multi-Scale Data Analysis and Mapping Program, in a single composite map that shows the relative stability of the land surface during earthquakes. Computer-composite mapping can provide geologists with a powerful analytical tool and a flexible graphic display technique. Digitized map units can be shown singly, grouped with different units from the same map, or combined with units from other source maps to produce composite maps. The mapping program permits the user to assign various values to the map units and to specify symbology for the final map. Because of its flexible storage, easy manipulation, and capabilities of graphic output, the composite-mapping technique can readily be applied to mapping projects in sedimentary and crystalline terranes, as well as to maps showing mineral resource potential. ?? 1980 Springer-Verlag New York Inc.
On the Application of Contour Bumps for Transonic Drag Reduction(Invited)
NASA Technical Reports Server (NTRS)
Milholen, William E., II; Owens, Lewis R.
2005-01-01
The effect of discrete contour bumps on reducing the transonic drag at off-design conditions on an airfoil has been examined. The research focused on fully turbulent flow conditions, at a realistic flight chord Reynolds number of 30 million. State-of-the-art computational fluid dynamics methods were used to design a new baseline airfoil and a family of fixed contour bumps. The new configurations were experimentally evaluated in the 0.3-m Transonic Cryogenic Tunnel at the NASA Langley Research Center, which utilizes an adaptive wall test section to minimize wall interference. The computational study showed that transonic drag reduction, on the order of 12%-15%, was possible using a surface contour bump to spread a normal shock wave. The computational study also indicated that the drag divergence Mach number was increased for the contour bump applications. Preliminary analysis of the experimental data showed a similar contour bump effect, but these data needed to be further analyzed for residual wall interference corrections.
Ambusam, Subramaniam; Omar, Baharudin; Joseph, Leonard; Deepashini, Harithasan
2015-01-01
Computer users are exposed to work-related neck disorders due to repetitive movement and static posture over prolonged periods. Viewing documents and typing simultaneously is one of the contributing factors for neck disorders. This preliminary study was conducted to evaluate the effects of a document holder on postural neck muscle activity among computer users. Nine healthy participants meeting pre-defined inclusion and exclusion criteria were recruited for the study. Neck muscle activity was analyzed using surface electromyography (EMG) in five different document locations (flat right, flat left, flat center, stand right and stand left) during a 5 min typing task. The mean and standard deviation results showed the least amount of muscle activity when using a document holder compared to not using one. Nevertheless, the statistical analysis showed no significant differences associated with the use of a document holder. Further study of the effects of a document holder on head excursion and neck muscle activity in a clinical neck pain population is recommended.
Munabi, Ian G; Buwembo, William; Bajunirwe, Francis; Kitara, David Lagoro; Joseph, Ruberwa; Peter, Kawungezi; Obua, Celestino; Quinn, John; Mwaka, Erisa S
2015-02-25
Effective utilization of computers and their applications in medical education and research is of paramount importance to students. The objective of this study was to determine the association between owning a computer and the use of computers for research data analysis, and the other factors influencing health professions students' computer use for data analysis. We conducted a cross-sectional study among undergraduate health professions students at three public universities in Uganda using a self-administered questionnaire. The questionnaire was composed of questions on participant demographics, students' participation in research, computer ownership, and use of computers for data analysis. Descriptive and inferential statistics (univariable and multilevel logistic regression analysis) were used to analyse the data. The level of significance was set at 0.05. Six hundred (600) of 668 questionnaires were completed and returned (response rate 89.8%). A majority of respondents were male (68.8%) and 75.3% reported owning computers. Overall, 63.7% of respondents reported that they had ever done computer-based data analysis. The following factors were significant predictors of having ever done computer-based data analysis: ownership of a computer (adj. OR 1.80, p = 0.02), a recently completed course in statistics (adj. OR 1.48, p = 0.04), and participation in research (adj. OR 2.64, p < 0.01). Owning a computer, participation in research and undertaking courses in research methods influence undergraduate students' use of computers for research data analysis. Students are increasingly participating in research, and thus need to have competencies for the successful conduct of research. Medical training institutions should encourage both curricular and extra-curricular efforts to enhance research capacity in line with the modern theories of adult learning.
Spectral stability of unitary network models
NASA Astrophysics Data System (ADS)
Asch, Joachim; Bourget, Olivier; Joye, Alain
2015-08-01
We review various unitary network models used in quantum computing, spectral analysis or condensed matter physics and establish relationships between them. We show that symmetric one-dimensional quantum walks are universal, as are CMV matrices. We prove spectral stability and propagation properties for general asymptotically uniform models by means of unitary Mourre theory.
Probability, Problem Solving, and "The Price is Right."
ERIC Educational Resources Information Center
Wood, Eric
1992-01-01
This article discusses the analysis of a decision-making process faced by contestants on the television game show "The Price is Right". The included analyses of the original and related problems concern pattern searching, inductive reasoning, quadratic functions, and graphing. Computer simulation programs in BASIC and tables of…
Effectiveness of Simulation in a Hybrid and Online Networking Course.
ERIC Educational Resources Information Center
Cameron, Brian H.
2003-01-01
Reports on a study that compares the performance of students enrolled in two sections of a Web-based computer networking course: one utilizing a simulation package and the second utilizing a static, graphical software package. Analysis shows statistically significant improvements in performance in the simulation group compared to the…
A Quantitative Empirical Analysis of the Abstract/Concrete Distinction
ERIC Educational Resources Information Center
Hill, Felix; Korhonen, Anna; Bentz, Christian
2014-01-01
This study presents original evidence that abstract and concrete concepts are organized and represented differently in the mind, based on analyses of thousands of concepts in publicly available data sets and computational resources. First, we show that abstract and concrete concepts have differing patterns of association with other concepts.…
Empirical Data Collection and Analysis Using Camtasia and Transana
ERIC Educational Resources Information Center
Thorsteinsson, Gisli; Page, Tom
2009-01-01
One of the possible techniques for collecting empirical data is video recordings of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web based collaboration/groupware environment) to coordinate their work and collaborate in…
Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.
ERIC Educational Resources Information Center
Lay, Robert S.; Maguire, John J.
1983-01-01
Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
Using medical knowledge sources on handheld computers--a qualitative study among junior doctors.
Axelson, Christian; Wårdh, Inger; Strender, Lars-Erik; Nilsson, Gunnar
2007-09-01
The emergence of mobile computing could have an impact on how junior doctors learn. To exploit this opportunity it is essential to understand their information seeking process. To explore junior doctors' experiences of using medical knowledge sources on handheld computers. Interviews with five Swedish junior doctors. A qualitative manifest content analysis of a focus group interview followed by a qualitative latent content analysis of two individual interviews. A focus group interview showed that users were satisfied with access to handheld medical knowledge sources, but there was concern about contents, reliability and device dependency. Four categories emerged from individual interviews: (1) A feeling of uncertainty about using handheld technology in medical care; (2) A sense of security that handhelds can provide; (3) A need for contents to be personalized; (4) A degree of adaptability to make the handheld a versatile information tool. A theme was established to link the four categories together, as expressed in the Conclusion section. Junior doctors' experiences of using medical knowledge sources on handheld computers shed light on the need to decrease uncertainty about clinical decisions during medical internship, and to find ways to influence the level of self-confidence in the junior doctor's process of decision-making.
Stuckless, J.S.; VanTrump, G.
1979-01-01
A revised version of the Graphic Normative Analysis Program (GNAP) has been developed to allow maximum flexibility in the evaluation of chemical data by the occasional computer user. GNAP calculates CIPW norms, Thornton and Tuttle's differentiation index, Barth's cations, Niggli values and values for variables defined by the user. Calculated values can be displayed graphically in X-Y plots or ternary diagrams. Plotting can be done on a line printer or Calcomp plotter with either weight percent or mole percent data. Modifications in the original program give the user some control over normative calculations for each sample. The number of user-defined variables that can be created from the data has been increased from ten to fifteen. Plotting and calculations can be based on the original data, data adjusted to sum to 100 percent, or data adjusted to sum to 100 percent without water. Analyses for which norms were previously not computable are now computed with footnotes that show excesses or deficiencies in oxides (or volatiles) not accounted for by the norm. This report contains a listing of the computer program, an explanation of the use of the program, and two sample problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
Harnessing the power of emerging petascale platforms
NASA Astrophysics Data System (ADS)
Mellor-Crummey, John
2007-07-01
As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratory. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50^3 domain.
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper, the graphically oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host-range-based clustering.
Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio
2017-11-25
Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires both highly accurate and efficient computational software able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis, with the following features: (1) the sequence analysis for the integration site processing is fully compliant with paired-end reads and includes a sequence quality filter before and after the alignment on the target genome; (2) a heuristic algorithm to reduce false positive integration sites at the nucleotide level and the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as a researcher front-end for performing integration site analyses without computational skills; (5) the time speedup of all steps through parallelization (Hadoop-free). We tested the performance of VISPA2 using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a > 6-fold speedup and improved precision and recall metrics (1 and 0.97, respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for integration site analysis, which allows gene therapy integration data to be handled in a cost- and time-effective fashion. Moreover, the web access of VISPA2 ( http://openserver.itb.cnr.it/vispa/ ) ensures accessibility and ease of use of a complex analytical tool for researchers. We released the source code of VISPA2 in a public repository ( https://bitbucket.org/andreacalabria/vispa2 ).
NASA Astrophysics Data System (ADS)
Nakashima, Yoshito; Komatsubara, Junko
Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.
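The authors' exact multifractal procedure is not reproduced here; the sketch below is a generic box-counting estimate of the generalized dimensions D_q for a 2D gray-scale image treated as a measure. A wide spread of D_q over q indicates multifractal (inhomogeneous) structure, while values clustered near 2 indicate near-monofractal, well-mixed strata. The test image and parameter values are assumptions.

```python
import numpy as np

def generalized_dimensions(image, qs=(-2, -1, 0, 2, 3), sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of D_q for a non-negative 2D image (measure)."""
    mu = image / image.sum()                        # normalize to a probability measure
    n = mu.shape[0]
    logs_eps, logs_Z = [], {q: [] for q in qs}
    for s in sizes:                                 # box side length in pixels
        # sum the measure inside each s x s box
        boxes = mu[:n // s * s, :n // s * s].reshape(n // s, s, -1, s).sum(axis=(1, 3))
        p = boxes[boxes > 0]
        logs_eps.append(np.log(s / n))              # box size relative to the image
        for q in qs:
            logs_Z[q].append(np.log((p ** q).sum()))
    D = {}
    for q in qs:
        tau = np.polyfit(logs_eps, logs_Z[q], 1)[0] # slope of log Z_q vs log eps
        D[q] = tau / (q - 1)
    return D

# A strongly clustered (inhomogeneous) image gives a wide spread of D_q;
# an almost homogeneous image gives D_q close to 2 for all q.
rng = np.random.default_rng(1)
img = rng.random((256, 256)) ** 8                   # exaggerate small-scale inhomogeneity
print(generalized_dimensions(img))
```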
Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai
2015-05-01
An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.
NASA Astrophysics Data System (ADS)
Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag
2017-02-01
Not only the static characteristics but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. Various studies have used QPI for RBC diagnosis, and recently many efforts have been made to decrease the processing time of RBC information extraction from QPI using parallel computing algorithms; however, previous studies focused on static parameters such as cell morphology or simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method has limited clinical applicability because of its long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method showed fractal scaling exponent results for the surrounding medium and normal RBCs consistent with our previous research.
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor-intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second- and higher-order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans lie in the range [-1024, 1024]. Calculation of second-order statistics over this range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second- and higher-order statistics for more accurate quantification of diffuse lung disease.
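The paper's exact objective function is not given in the abstract; the sketch below illustrates one common form of the dynamic programming idea: partitioning the gray-level histogram into a fixed number of contiguous bins that minimize the total weighted within-bin variance. The bimodal test histogram and bin count are assumptions.

```python
import numpy as np

def optimal_bins(counts, n_bins):
    """
    Partition gray levels 0..G-1 into n_bins contiguous bins so that the total
    weighted within-bin variance of the histogram `counts` is minimized.
    Returns the list of bin start indices. Complexity O(G^2 * n_bins).
    """
    counts = np.asarray(counts, dtype=float)
    G = len(counts)
    levels = np.arange(G)
    # prefix sums for weight, weighted level and weighted squared level
    w = np.concatenate(([0.0], np.cumsum(counts)))
    s = np.concatenate(([0.0], np.cumsum(counts * levels)))
    s2 = np.concatenate(([0.0], np.cumsum(counts * levels ** 2)))

    def sse(i, j):                       # weighted within-bin variance for levels i..j
        W = w[j + 1] - w[i]
        if W == 0:
            return 0.0
        S = s[j + 1] - s[i]
        return (s2[j + 1] - s2[i]) - S * S / W

    INF = float("inf")
    cost = np.full((n_bins + 1, G), INF)
    back = np.zeros((n_bins + 1, G), dtype=int)
    for j in range(G):
        cost[1, j] = sse(0, j)
    for k in range(2, n_bins + 1):
        for j in range(k - 1, G):
            for i in range(k - 1, j + 1):        # bin k covers levels i..j
                c = cost[k - 1, i - 1] + sse(i, j)
                if c < cost[k, j]:
                    cost[k, j], back[k, j] = c, i
    bounds, j = [], G - 1                        # recover bin start indices
    for k in range(n_bins, 1, -1):
        i = back[k, j]
        bounds.append(i)
        j = i - 1
    return [0] + bounds[::-1]

# Toy example: a bimodal 64-level histogram is split into 8 uneven bins.
rng = np.random.default_rng(0)
hist = np.histogram(np.concatenate([rng.normal(15, 3, 5000),
                                    rng.normal(45, 6, 5000)]),
                    bins=64, range=(0, 64))[0]
print(optimal_bins(hist, 8))
```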
CUDAICA: GPU Optimization of Infomax-ICA EEG Analysis
Raimondo, Federico; Kamienkowski, Juan E.; Sigman, Mariano; Fernandez Slezak, Diego
2012-01-01
In recent years, Independent Component Analysis (ICA) has become a standard method to identify relevant dimensions of the data in neuroscience. ICA is a very reliable method to analyze data, but it is computationally very costly, which makes the use of ICA for online analysis of data, as required in brain-computer interfaces, almost completely prohibitive. We show a speedup of ICA by about 25-fold at almost no cost (a fast video card). EEG data, which consist of many repetitions of independent signals in multiple channels, are very suitable for processing using the vector processors included in graphical units. We profiled the implementation of this algorithm and detected two main types of operations responsible for the processing bottleneck, taking almost 80% of computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra functions with the standard CUBLAS routines provided by GPU manufacturers does not increase performance, due to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, obtains a 25x increase in performance for the ICA calculation. PMID:22811699
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via Analysis of Variance. The number of parameters was greatly reduced from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that a high value of the efficiency criteria did not indicate excellent performance on the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well. However, the minimum and maximum annual daily runoffs were underestimated, and most of the seven-day minimum runoffs were overestimated. Nevertheless, good performance on the three signatures above was still obtained for a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
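The DHSVM model itself cannot be reproduced here, so the sketch below uses a hypothetical stand-in function with three placeholder parameters to illustrate the second step: Saltelli sampling and Sobol index estimation (here via the SALib package, an assumption about tooling), with model runs distributed over a process pool as in the study's parallel setup.

```python
import numpy as np
from multiprocessing import Pool
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three stand-in parameters; the real study screened sixteen DHSVM parameters
# with their own physically meaningful ranges.
problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-5, 1e-2], [0.3, 0.6], [0.05, 0.35]],
}

def model(x):
    """Hypothetical stand-in for one DHSVM run returning a single signature."""
    k, phi, fc = x
    return np.log10(k) * 0.5 + 4.0 * phi - 2.0 * fc + 0.1 * phi * fc

if __name__ == "__main__":
    X = saltelli.sample(problem, 1024)          # N * (2D + 2) parameter sets
    with Pool() as pool:                        # evaluate the runs in parallel
        Y = np.array(pool.map(model, X))
    Si = sobol.analyze(problem, Y)
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name:22s} first-order {s1:6.3f}   total {st:6.3f}")
```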
Exploiting graphics processing units for computational biology and bioinformatics.
Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H
2010-09-01
Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
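The article's running example is the all-pairs distance computation; the NumPy sketch below is a CPU reference for that calculation (not the article's CUDA code). The squared-norm expansion turns the problem into one large matrix product, which is the same structure a GPU kernel would tile across thread blocks with coalesced reads.

```python
import numpy as np

def all_pairs_distances(X):
    """
    Euclidean distance between every pair of instances (rows) in X.
    Uses ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b to express the computation
    as a single matrix product plus broadcasting.
    """
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.maximum(d2, 0.0, out=d2)          # guard against tiny negative round-off
    return np.sqrt(d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 64))      # 2000 instances, 64 features (arbitrary sizes)
D = all_pairs_distances(X)
print(D.shape, D[0, :3])
```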
A survey of current trends in computational drug repositioning.
Li, Jiao; Zheng, Si; Chen, Bin; Butte, Atul J; Swamidass, S Joshua; Lu, Zhiyong
2016-01-01
Computational drug repositioning or repurposing is a promising and efficient tool for discovering new uses from existing drugs and holds the great potential for precision medicine in the age of big data. The explosive growth of large-scale genomic and phenotypic data, as well as data of small molecular compounds with granted regulatory approval, is enabling new developments for computational repositioning. To achieve the shortest path toward new drug indications, advanced data processing and analysis strategies are critical for making sense of these heterogeneous molecular measurements. In this review, we show recent advancements in the critical areas of computational drug repositioning from multiple aspects. First, we summarize available data sources and the corresponding computational repositioning strategies. Second, we characterize the commonly used computational techniques. Third, we discuss validation strategies for repositioning studies, including both computational and experimental methods. Finally, we highlight potential opportunities and use-cases, including a few target areas such as cancers. We conclude with a brief discussion of the remaining challenges in computational drug repositioning. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
Development of Reduced-Order Models for Aeroelastic and Flutter Prediction Using the CFL3Dv6.0 Code
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Bartels, Robert E.
2002-01-01
A reduced-order model (ROM) is developed for aeroelastic analysis using the CFL3D version 6.0 computational fluid dynamics (CFD) code, recently developed at the NASA Langley Research Center. This latest version of the flow solver includes a deforming mesh capability, a modal structural definition for nonlinear aeroelastic analyses, and a parallelization capability that provides a significant increase in computational efficiency. Flutter results for the AGARD 445.6 Wing computed using CFL3D v6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are then computed using the CFL3Dv6 code and transformed into state-space form. Important numerical issues associated with the computation of the impulse responses are presented. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is used to rapidly compute aeroelastic transients including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly.
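The abstract states that modal impulse responses are transformed into state-space form but does not spell out the identification step here; one standard route for that step is the Eigensystem Realization Algorithm (ERA). The SISO sketch below is illustrative only and is not claimed to be the exact procedure used with CFL3Dv6.0.

```python
import numpy as np

def era(markov, order, rows=50, cols=50):
    """
    Eigensystem Realization Algorithm for a SISO system.
    markov[k] are the impulse-response (Markov) parameters C A^k B, k = 0, 1, ...
    Returns discrete-time state-space matrices (A, B, C) of the given order.
    """
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :order], Vt[:order, :].T
    S_half = np.diag(np.sqrt(s[:order]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:order]))
    A = S_half_inv @ Ur.T @ H1 @ Vr @ S_half_inv
    B = (S_half @ Vr.T)[:, :1]            # first column of the controllability factor
    C = (Ur @ S_half)[:1, :]              # first row of the observability factor
    return A, B, C

# Toy check: identify a known 2-state system from its impulse response.
A_true = np.array([[0.9, 0.2], [0.0, 0.7]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, -1.0]])
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(120)]
A, B, C = era(h, order=2)
h_era = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(10)]
print(np.allclose(h[:10], h_era, atol=1e-8))     # True
```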
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
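The paper's two-part model (demand distributions plus resource constraints) is not reproduced here; the sketch below is a minimal hand-rolled discrete event simulation of service requests arriving at a pool of identical virtual servers with a FIFO queue. The exponential interarrival and service times, rates, and server counts are illustrative stand-ins, not values from the study.

```python
import heapq
import random

def simulate(arrival_rate, service_rate, servers, horizon, seed=0):
    """Event-driven simulation of a server pool with a FIFO queue; returns mean wait."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]   # (time, kind) min-heap
    busy, queue, waits = 0, [], []
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if busy < servers:                               # start service immediately
                busy += 1
                waits.append(0.0)
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:                                            # all servers busy: queue up
                queue.append(t)
        else:                                                # departure
            if queue:
                arrived = queue.pop(0)
                waits.append(t - arrived)
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy -= 1
    return sum(waits) / len(waits)

# Compare a statically provisioned pool with a larger, "elastically" provisioned one.
for servers in (4, 8):
    w = simulate(arrival_rate=3.0, service_rate=1.0, servers=servers, horizon=10_000)
    print(f"{servers} servers: mean wait {w:.3f} time units")
```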
Quantification of indium in steel using PIXE
NASA Astrophysics Data System (ADS)
Oliver, A.; Miranda, J.; Rickards, J.; Cheang, J. C.
1989-04-01
The quantitative analysis of steel for endodontic tools was carried out using low-energy protons (≤ 700 keV). A computer program for thick-target analysis, which includes enhancement due to secondary fluorescence, was used. In this experiment the L-lines of indium are enhanced due to the proximity of other elements' K-lines to the indium absorption edge. The results show that the ionization cross section expression employed to evaluate this quantity is important.
On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
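The error-complexity analysis itself is not reproduced here; the sketch below only sets the two schemes being compared side by side: Gaussian elimination (forward elimination followed by back substitution) and Gauss-Jordan reduction (elimination above and below each pivot, no back substitution). Pivoting is omitted for brevity, so the example matrix is chosen to be diagonally dominant.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Forward elimination to upper-triangular form, then back substitution."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def gauss_jordan(A, b):
    """Eliminate above and below each pivot, so no back substitution is needed."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n):
        piv = A[k, k]
        A[k, k:] /= piv
        b[k] /= piv
        for i in range(n):
            if i != k:
                m = A[i, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
    return b

A = np.array([[4.0, -2.0, 1.0], [-2.0, 4.0, -2.0], [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])
print(gaussian_elimination(A, b), gauss_jordan(A, b))   # identical solutions
```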
NASA Astrophysics Data System (ADS)
Gao, J. L.
2002-04-01
In this article, we present a system-level characterization of the energy consumption for sensor network application scenarios. We compute a power efficiency metric -- average watt-per-meter -- for each radio transmission and extend this local metric to find the global energy consumption. This analysis shows how overall energy consumption varies with transceiver characteristics, node density, data traffic distribution, and base-station location.
Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds
NASA Astrophysics Data System (ADS)
Cheng, Tian
Venetian blinds are popularly used in buildings to control the amount of incoming daylight for improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that the proper design and operation of window systems could result in significant energy savings in both lighting and cooling. At present, however, there is no convenient computer tool that allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the former two tools give unacceptable accuracy due to the unrealistic assumptions adopted, while the last one may generate large errors in certain conditions. Moreover, current computer tools have to conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient and is particularly unsuitable for the optimal design of a building at the initial stage, because the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulations and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to reasonably simulate the daylighting behaviors of venetian blinds. Indoor illuminance at any reference point can be directly and efficiently computed. The models have been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, and the accuracy provided by them is equivalent to that of Radiance. The computational efficiency of the new models is much higher than that of Radiance as well as EnergyPlus. Two new methods are developed for the thermal simulation of buildings. A fast Fourier transform (FFT) method is presented to avoid the root-searching process in the inverse Laplace transform of multilayered walls. Generalized explicit FFT formulae for calculating the discrete Fourier transform (DFT) are developed for the first time. They can largely facilitate the implementation of FFT. The new method also provides a basis for generating the symbolic response factors. Validation simulations show that it can generate response factors as accurate as the analytical solutions. The second method is for direct estimation of annual or seasonal cooling loads without the need for tedious hourly energy simulations. It is validated against hourly simulation results with DOE2. Symbolic long-term cooling loads can then be created by combining the two methods with thermal network analysis. The symbolic long-term cooling load can keep the design parameters of interest as symbols, which is particularly useful for optimal design and sensitivity analysis. The methodology is applied to an office building in Hong Kong for the optimal design of the building envelope. Design variables such as window-to-wall ratio, building orientation, and glazing optical and thermal properties are included in the study. Results show that the selected design values could significantly impact the energy performance of windows, and that the optimal design of side-lit buildings could greatly enhance energy savings. The application example also demonstrates that the developed methodology significantly facilitates optimal building design and sensitivity analysis, and leads to high computational efficiency.
Computer analysis of Holter electrocardiogram.
Yanaga, T; Adachi, M; Sato, Y; Ichimaru, Y; Otsuka, K
1994-10-01
Computer analysis is indispensable for the interpretation of the Holter ECG, because it includes a large quantity of data. Computer analysis of the Holter ECG is similar to that of the conventional ECG; however, in computer analysis of the Holter ECG there are some difficulties, such as abundant noise, limited analysis time and voluminous data. The main topics in computer analysis of the Holter ECG are arrhythmias, ST-T changes, heart rate variability, QT interval, late potentials and the construction of databases. Although many papers have been published on the computer analysis of the Holter ECG, some of them are reviewed briefly in the present paper. We have studied computer analysis of VPCs, ST-T changes, heart rate variability, QT interval and Cheyne-Stokes respiration during 24-hour ambulatory ECG monitoring. Further, we have studied ambulatory palmar sweating for the evaluation of mental stress during the day. In the future, the development of "the integrated Holter system", which enables the evaluation of ventricular vulnerability and modulating factors such as psychoneural hypersensitivity, may be important.
Computer analysis of lighting style in fine art: steps towards inter-artist studies
NASA Astrophysics Data System (ADS)
Stork, David G.
2011-03-01
Stylometry in visual art, the mathematical description of artists' styles, has been based on a number of properties of works, such as color, brush stroke shape, visual texture, and measures of contours' curvatures. We introduce the concept of quantitative measures of lighting, such as statistical descriptions of spatial coherence, diffuseness, and so forth, as properties of artistic style. Some artists of the high Renaissance, such as Leonardo, worked from nature and strove to render illumination "faithfully"; photorealists, such as Richard Estes, worked from photographs and duplicated the "physics based" lighting accurately. As such, each had different motivations, methodologies, stagings, and "accuracies" in rendering lighting clues. Perceptual studies show that observers are poor judges of properties of lighting in photographs, such as consistency (and thus, by extension, in paintings as well); computer methods such as rigorous cast-shadow analysis, occluding-contour analysis and spherical-harmonic-based estimation of light fields can be quite accurate. For these reasons, computer lighting analysis can provide new tools for art historical studies. We review lighting analysis in paintings such as Vermeer's Girl with a pearl earring, de la Tour's Christ in the carpenter's studio, and Caravaggio's Magdalen with the smoking flame and Calling of St. Matthew, and extend our corpus to works where lighting coherence is of interest to art historians, such as Caravaggio's Adoration of the Shepherds or Nativity (1609) in the Capuchin church of Santa Maria degli Angeli. Our measure of lighting coherence may help reveal the working methods of some artists and may aid diachronic studies of individual artists. We speculate on artists and art historical questions that may ultimately profit from future refinements to these new computational tools.
Computation of forces from deformed visco-elastic biological tissues
NASA Astrophysics Data System (ADS)
Muñoz, José J.; Amat, David; Conte, Vito
2018-04-01
We present a least-squares-based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem, which is discretised in time with a second-order implicit time-stepping scheme and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
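The force-dipole parameterization and finite element operators are not reproduced here; the sketch below only illustrates the two generic ingredients the abstract names, Tikhonov-regularised least squares and an L-curve-based choice of the regularisation parameter, on a toy ill-posed problem. The smoothing-kernel forward operator, noise level and the simple corner heuristic are assumptions.

```python
import numpy as np

def tikhonov(A, d, lam):
    """Solve min ||A f - d||^2 + lam^2 ||f||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ d)

def l_curve_choice(A, d, lambdas):
    """Crude L-curve heuristic: pick the sweep point closest to the lower-left
    corner of the normalized (log residual norm, log solution norm) curve."""
    rho, eta = [], []
    for lam in lambdas:
        f = tikhonov(A, d, lam)
        rho.append(np.log(np.linalg.norm(A @ f - d)))   # residual norm
        eta.append(np.log(np.linalg.norm(f)))           # solution norm
    rho, eta = np.array(rho), np.array(eta)
    r = (rho - rho.min()) / (rho.max() - rho.min())
    e = (eta - eta.min()) / (eta.max() - eta.min())
    return lambdas[int(np.argmin(r ** 2 + e ** 2))]

# Toy ill-posed problem: a smoothing "forward operator" plus noisy observed data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)
A = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01)      # Gaussian smoothing kernel
f_true = np.sin(3 * np.pi * x)
d = A @ f_true + 0.01 * rng.standard_normal(80)

lambdas = np.logspace(-6, 1, 40)
lam = l_curve_choice(A, d, lambdas)
f_est = tikhonov(A, d, lam)
print(f"chosen lambda = {lam:.2e}, "
      f"reconstruction error = {np.linalg.norm(f_est - f_true):.3f}")
```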
On numerically accurate finite element solutions in the fully plastic range
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.
FTMP - A highly reliable Fault-Tolerant Multiprocessor for aircraft
NASA Technical Reports Server (NTRS)
Hopkins, A. L., Jr.; Smith, T. B., III; Lala, J. H.
1978-01-01
The FTMP (Fault-Tolerant Multiprocessor) is a complex multiprocessor computer that employs a form of redundancy related to systems considered by Mathur (1971), in which each major module can substitute for any other module of the same type. Despite the conceptual simplicity of the redundancy form, the implementation has many intricacies owing partly to the low target failure rate, and partly to the difficulty of eliminating single-fault vulnerability. An extensive analysis of the computer through the use of such modeling techniques as Markov processes and combinatorial mathematics shows that for random hard faults the computer can meet its requirements. It is also shown that maintenance scheduled at intervals of 200 hr or more can be adequate most of the time.
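A hypothetical, minimal example of the kind of Markov reliability model referred to above (not the FTMP model itself): a module triad that tolerates one failure, with an assumed per-module failure rate, analysed over a mission time using the matrix exponential of the generator.

import numpy as np
from scipy.linalg import expm

lam = 1e-4            # assumed per-module failure rate (per hour), illustrative only
# States: 0 = all modules good, 1 = one failed (still operational), 2 = system failed
Q = np.array([[-3 * lam, 3 * lam, 0.0],
              [0.0, -2 * lam, 2 * lam],
              [0.0, 0.0, 0.0]])
p0 = np.array([1.0, 0.0, 0.0])   # start with all modules working
t = 10.0                          # mission time in hours
p_t = p0 @ expm(Q * t)            # state probabilities at time t
print(f"Probability of system failure after {t} h: {p_t[2]:.2e}")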
Adaptive relaxation for the steady-state analysis of Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham
1994-01-01
We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
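The following Python sketch illustrates the adaptive idea in a simplified form; it is an assumption-laden illustration, not the paper's algorithm. Only states whose estimate recently changed by more than a tolerance, together with their successors, are revisited on the next sweep.

import numpy as np

def adaptive_gauss_seidel(P, tol=1e-10, max_sweeps=10_000):
    """Steady-state vector of a discrete-time Markov chain with transition matrix P,
    using in-place (Gauss-Seidel style) updates restricted to an active set of states."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    active = set(range(n))                 # states currently scheduled for an update
    sweeps = 0
    while active and sweeps < max_sweeps:
        next_active = set()
        for i in list(active):
            new = float(pi @ P[:, i])      # uses already-updated components of pi
            if abs(new - pi[i]) > tol:
                # revisit this state and the states it feeds into
                next_active.add(i)
                next_active.update(np.nonzero(P[i, :])[0].tolist())
            pi[i] = new
        pi /= pi.sum()                     # keep the iterate a probability vector
        active = next_active
        sweeps += 1
    return pi

# Example: a small 3-state chain
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
print(adaptive_gauss_seidel(P))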
Life Prediction for a CMC Component Using the NASALIFE Computer Code
NASA Technical Reports Server (NTRS)
Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.
2005-01-01
The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.
Computation and analysis for a constrained entropy optimization problem in finance
NASA Astrophysics Data System (ADS)
He, Changhong; Coleman, Thomas F.; Li, Yuying
2008-12-01
In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.
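As a generic illustration of the quadratic-penalty idea (not the paper's UVM calibration), the sketch below softly enforces hypothetical bid/ask-style bounds l <= V(x) <= u by penalising their violation quadratically and minimising the penalised objective with a standard optimiser; the objective f, map V and bounds are toy assumptions.

import numpy as np
from scipy.optimize import minimize

def penalised_objective(x, f, V, l, u, mu):
    """Objective plus quadratic penalty on violation of the bounds l <= V(x) <= u."""
    v = V(x)
    viol = np.maximum(l - v, 0.0) + np.maximum(v - u, 0.0)
    return f(x) + 0.5 * mu * np.sum(viol ** 2)

# Toy example: f is an entropy-like distance to a prior x0, V is a linear map.
x0 = np.array([0.4, 0.6])
f = lambda x: np.sum(x * np.log(np.maximum(x, 1e-12) / x0))
A = np.array([[1.0, 2.0]])
l, u = np.array([1.3]), np.array([1.5])
V = lambda x: A @ x
res = minimize(penalised_objective, x0, args=(f, V, l, u, 1e3))
print(res.x)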
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
This paper describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
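A minimal sketch of a channelized Hotelling observer statistic for a two-class detection task is given below; the channel profiles, sample images and "lesion" pattern are all synthetic stand-ins, not the PET data, channels or closed-form approximation of the study.

import numpy as np

def cho_statistic(U, imgs_signal, imgs_noise, test_img):
    """CHO test statistic. U: (npix, nchan) channel matrix; imgs_*: (nsamp, npix)."""
    v1 = imgs_signal @ U                  # channelized signal-present samples
    v0 = imgs_noise @ U                   # channelized signal-absent samples
    dmean = v1.mean(0) - v0.mean(0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(S, dmean)         # Hotelling template in channel space
    return w @ (U.T @ test_img)           # scalar observer response to the test image

rng = np.random.default_rng(1)
npix, nchan = 256, 4
U = rng.normal(size=(npix, nchan))                 # synthetic channel profiles
noise = rng.normal(size=(200, npix))               # signal-absent samples
signal = noise + 0.2 * rng.normal(size=npix)       # add a fixed "lesion" pattern
print(cho_statistic(U, signal, noise, signal[0]))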
A node-wise analysis of the uterine muscle networks for pregnancy monitoring.
Nader, N; Hassan, M; Falou, W; Marque, C; Khalil, M
2016-08-01
Recent years have seen a noticeable increase of interest in the correlation analysis of electrohysterographic (EHG) signals with the aim of improving pregnancy monitoring. Here we propose a new approach based on the functional connectivity between multichannel (4×4 matrix) EHG signals recorded from the woman's abdomen. The proposed pipeline includes i) the computation of the statistical couplings between the multichannel EHG signals, ii) the characterization of the connectivity matrices, computed using the imaginary part of the coherence, based on graph-theory analysis and iii) the use of these measures for pregnancy monitoring. The method was evaluated on a dataset of EHGs in order to track the correlation between EHGs collected by each electrode of the matrix (termed 'node-wise' analysis) and to follow its evolution over the weeks preceding labor. Results showed that the strength of each node increases significantly from pregnancy to labor. Electrodes located on the median vertical axis of the uterus seemed to be the most discriminating. We speculate that the network-based analysis can be a very promising tool to improve pregnancy monitoring.
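For illustration, the node-wise "strength" measure described above can be computed as the sum of a node's connection weights in the connectivity matrix; the sketch below uses a random symmetric 16x16 matrix standing in for the imaginary-coherence matrix of the 4x4 electrode grid, so the numbers are purely synthetic.

import numpy as np

def node_strengths(conn):
    """Node strength for a symmetric (n, n) connectivity matrix (diagonal ignored)."""
    c = np.abs(np.asarray(conn, dtype=float))
    np.fill_diagonal(c, 0.0)
    return c.sum(axis=1)

# Random symmetric matrix standing in for |Im(coherence)| between 16 electrodes
rng = np.random.default_rng(2)
m = rng.random((16, 16))
conn = (m + m.T) / 2
print(node_strengths(conn))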
Analysis of conditions and organization of work of notebook computer users.
Malińska, Marzena; Bugajska, Joanna; Kamińska, Joanna; Jędryka-Góral, Anna
2012-01-01
The aim of this study was to evaluate working conditions with a notebook computer (notebook) as a potential cause of musculoskeletal disorders. The study had two stages. The first was a questionnaire survey among 300 notebook users. The second was an expert analysis of 53 randomly selected workstations. The questionnaire included questions about the participants, their working conditions, work organization and the duration of work with a notebook. The results showed that most of the examined operators used a notebook as their basic working tool. The most important irregularities included a non-adjustable working surface, non-adjustable seat pan and backrest height, non-adjustable armrest height and spacing, and the absence of additional ergonomic devices (external keyboard, docking station, notebook stand or footstool).
The software analysis project for the Office of Human Resources
NASA Technical Reports Server (NTRS)
Tureman, Robert L., Jr.
1994-01-01
The project for the Office of Human Resources (OHR) had two major sections. The first was a planning study to analyze software use, with the goals of recommending software purchases and determining whether the need exists for a file server. The second was analysis and distribution planning for a retirement planning computer program entitled VISION, provided by NASA Headquarters. The software planning study was developed to help OHR analyze the current administrative desktop computing environment and make decisions regarding software acquisition and implementation. The study addressed three major areas: the current environment, new software requirements, and strategies regarding the implementation of a server in the Office. To gather data on the current environment, employees were surveyed and an inventory of computers was produced. The surveys were compiled and analyzed by the ASEE fellow, with help in interpretation from OHR staff. The new software requirements represent a compilation and analysis of the surveyed requests of OHR personnel. Finally, the information on the use of a server represents research done by the ASEE fellow and analysis of survey data to determine software requirements for a server. This included selection of a methodology to estimate the number of copies of each software program required, given current use and estimated growth. The report presents the results of the computing survey, a description of the current computing environment, recommendations for changes in the computing environment, current software needs, management advantages of using a server, and management considerations in the implementation of a server. In addition, detailed specifications were presented for the hardware and software recommendations to offer a complete picture to OHR management. The retirement planning computer program available to NASA employees will aid in long-range retirement planning. The intended audience is the NASA civil service employee with several years until retirement. The employee enters current salary and savings information as well as goals concerning salary at retirement, assumptions on inflation, and the return on investments. The program produces a picture of the employee's retirement income from all sources based on the assumptions entered. A session showing the features of the program was conducted for key personnel at the Center. After analysis, it was decided to offer the program through the Learning Center starting in August 1994.
Decaestecker, C; Salmon, I; Camby, I; Dewitte, O; Pasteels, J L; Brotchi, J; Van Ham, P; Kiss, R
1995-05-01
The present work investigates whether computer-assisted techniques can contribute any significant information to the characterization of astrocytic tumor aggressiveness. Two complementary computer-assisted methods were used. The first made use of digital image analysis of Feulgen-stained nuclei, making it possible to compute 15 morphonuclear and 8 nuclear DNA content-related (ploidy level) parameters. The second method, the Decision Tree technique, a supervised learning algorithm, enabled the most discriminatory parameters to be determined. These two techniques were applied to a series of 250 supratentorial astrocytic tumors of the adult. This series included 39 low-grade (astrocytomas, AST) and 211 high-grade (47 anaplastic astrocytomas, ANA, and 164 glioblastomas, GBM) astrocytic tumors. The results show that some AST, ANA and GBM did not fit within simple logical rules. These "complex" cases were labeled NC-AST, NC-ANA and NC-GBM because they were "non-classical" (NC) with respect to their cytological features. An analysis of survival data revealed that patients with NC-GBM had the same survival period as patients with GBM. In sharp contrast, patients with ANA survived significantly longer than patients with NC-ANA. In fact, patients with ANA had the same survival period as patients who died from AST, while patients with NC-ANA had a survival period similar to that of patients with GBM. All these data show that the computer-assisted techniques used in this study can provide the pathologist with significant information for the characterization of astrocytic tumor aggressiveness.
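As an illustration of the second step, the sketch below fits a small decision tree to synthetic morphonuclear/ploidy-style features and prints the induced rules; the features, labels and tree depth are random or assumed stand-ins, not the study's data or classifier settings.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(250, 23))          # 15 morphonuclear + 8 ploidy-related features (synthetic)
y = rng.integers(0, 3, size=250)        # 0 = AST, 1 = ANA, 2 = GBM (synthetic labels)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))                # the logical rules induced from the features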