NEW GIS WATERSHED ANALYSIS TOOLS FOR SOIL CHARACTERIZATION AND EROSION AND SEDIMENTATION MODELING
A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed which utilizes a suite of automated scripts and a pair of processing-intensive executable programs operating on a personal computer platform.
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
NASA Technical Reports Server (NTRS)
Peterson, R. C.; Title, A. M.
1975-01-01
A total reduction procedure, notable for its use of a computer-controlled microdensitometer for semi-automatically tracing curved spectra, is applied to distorted high-dispersion echelle spectra recorded by an image tube. Microdensitometer specifications are presented and the FORTRAN, TRACEN and SPOTS programs are outlined. The intensity spectrum of the photographic or electrographic plate is plotted on a graphic display. The time requirements are discussed in detail.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt.17, 107003 (2012)JBOPFO1083-366810.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt.19, 077002 (2014)JBOPFO1083-366810.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
Randomization Procedures Applied to Analysis of Ballistic Data
1991-06-01
test,;;15. NUMBER OF PAGES data analysis; computationally intensive statistics ; randomization tests; permutation tests; 16 nonparametric statistics ...be 0.13. 8 Any reasonable statistical procedure would fail to support the notion of improvement of dynamic over standard indexing based on this data ...AD-A238 389 TECHNICAL REPORT BRL-TR-3245 iBRL RANDOMIZATION PROCEDURES APPLIED TO ANALYSIS OF BALLISTIC DATA MALCOLM S. TAYLOR BARRY A. BODT - JUNE
Knowledge Intensive Programming: A New Educational Computing Environment.
ERIC Educational Resources Information Center
Seidman, Robert H.
1990-01-01
Comparison of the process of problem solving using a conventional procedural computer programing language (e.g., BASIC, Logo, Pascal), with the process when using a logic programing language (i.e., Prolog), focuses on the potential of the two types of programing languages to facilitate the transfer of problem-solving skills, cognitive development,…
Castaño-Díez, Daniel
2017-01-01
Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and longterm software maintenance. PMID:28580909
Castaño-Díez, Daniel
2017-06-01
Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and longterm software maintenance.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
NASA Technical Reports Server (NTRS)
Roskam, J.
1983-01-01
The transmission loss characteristics of panels using the acoustic intensity technique is presented. The theoretical formulation, installation of hardware, modifications to the test facility, and development of computer programs and test procedures are described. A listing of all the programs is also provided. The initial test results indicate that the acoustic intensity technique is easily adapted to measure transmission loss characteristics of panels. Use of this method will give average transmission loss values. The fixtures developed to position the microphones along the grid points are very useful in plotting the intensity maps of vibrating panels.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computer-intensive computer calculations. A computer program has been developed to implement the PFTA.
SURROGATE MODEL DEVELOPMENT AND VALIDATION FOR RELIABILITY ANALYSIS OF REACTOR PRESSURE VESSELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, William M.; Riley, Matthew E.; Spencer, Benjamin W.
In nuclear light water reactors (LWRs), the reactor coolant, core and shroud are contained within a massive, thick walled steel vessel known as a reactor pressure vessel (RPV). Given the tremendous size of these structures, RPVs typically contain a large population of pre-existing flaws introduced in the manufacturing process. After many years of operation, irradiation-induced embrittlement makes these vessels increasingly susceptible to fracture initiation at the locations of the pre-existing flaws. Because of the uncertainty in the loading conditions, flaw characteristics and material properties, probabilistic methods are widely accepted and used in assessing RPV integrity. The Fracture Analysis of Vesselsmore » – Oak Ridge (FAVOR) computer program developed by researchers at Oak Ridge National Laboratory is widely used for this purpose. This program can be used in order to perform deterministic and probabilistic risk-informed analyses of the structural integrity of an RPV subjected to a range of thermal-hydraulic events. FAVOR uses a one-dimensional representation of the global response of the RPV, which is appropriate for the beltline region, which experiences the most embrittlement, and employs an influence coefficient technique to rapidly compute stress intensity factors for axis-aligned surface-breaking flaws. The Grizzly code is currently under development at Idaho National Laboratory (INL) to be used as a general multiphysics simulation tool to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled RPVs. Grizzly can be used to model the thermo-mechanical response of an RPV under transient conditions observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local 3D models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. To use Grizzly for probabilistic analysis, it is necessary to have a way to rapidly evaluate stress intensity factors. To accomplish this goal, a reduced order model (ROM) has been developed to efficiently represent the behavior of a detailed 3D Grizzly model used to calculate fracture parameters. This approach uses the stress intensity factor influence coefficient method that has been used with great success in FAVOR. Instead of interpolating between tabulated solutions, as FAVOR does, the ROM approach uses a response surface methodology to compute fracture solutions based on a sampled set of results used to train the ROM. The main advantages of this approach are that the process of generating the training data can be fully automated, and the procedure can be readily used to consider more general flaw configurations. This paper demonstrates the procedure used to generate a ROM to rapidly compute stress intensity factors for axis-aligned flaws. The results from this procedure are in good agreement with those produced using the traditional influence coefficient interpolation procedure, which gives confidence in this method. This paves the way for applying this procedure for more general flaw configurations.« less
TORC3: Token-ring clearing heuristic for currency circulation
NASA Astrophysics Data System (ADS)
Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael
2012-10-01
Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.
NASA Astrophysics Data System (ADS)
Arndt, U. W.; Willis, B. T. M.
2009-06-01
Preface; Acknowledgements; Part I. Introduction; Part II. Diffraction Geometry; Part III. The Design of Diffractometers; Part IV. Detectors; Part V. Electronic Circuits; Part VI. The Production of the Primary Beam (X-rays); Part VII. The Production of the Primary Beam (Neutrons); Part VIII. The Background; Part IX. Systematic Errors in Measuring Relative Integrated Intensities; Part X. Procedure for Measuring Integrated Intensities; Part XI. Derivation and Accuracy of Structure Factors; Part XII. Computer Programs and On-line Control; Appendix; References; Index.
Agarwal-Kozlowski, K; Lorke, D E; Habermann, C R; Schulte am Esch, J; Beck, H
2011-08-01
We retrospectively evaluated the safety and efficacy of computed tomography-guided placement of percutaneous catheters in close proximity to the thoracic sympathetic chain by rating pain intensity and systematically reviewing charts and computed tomography scans. Interventions were performed 322 times in 293 patients of mean (SD) age 59.4 (17.0) years, and male to female ratio 105:188, with postherpetic neuralgia (n = 103, 35.1%), various neuralgias (n = 88, 30.0%), complex regional pain syndrome (n = 69, 23.6%), facial pain (n = 17, 5.8%), ischaemic limb pain (n = 7, 2.4%), phantom limb pain (n = 4, 1.4%), pain following cerebrovascular accident (n = 2, 0.7%), syringomyelia (n = 2, 0.7%) and palmar hyperhidrosis (n = 1, 0.3%). The interventions were associated with a total of 23 adverse events (7.1% of all procedures): catheter dislocation (n = 9, 2.8%); increase in pain intensity (n = 8, 2.5%); pneumothorax (n = 3, 0.9%); local infection (n = 2, 0.6%); and puncture of the spinal cord (n = 1, 0.3%). Continuous infusion of 10 ml.h(-1) ropivacaine 0.2% through the catheters decreased median (IQR [range]) pain scores from 8 (6-9 [2-10]) to 2 (1-3 [0-10]) (p < 0.0001). Chemical neuroablation was necessary in 137 patients (46.8%). We conclude that this procedure leads to a significant reduction of pain intensity in otherwise obstinate burning or stabbing pain and is associated with few hazards. © 2011 The Authors. Anaesthesia © 2011 The Association of Anaesthetists of Great Britain and Ireland.
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1991-01-01
The USDA presently uses labor-intensive photographic interpretation procedures to delineate large geographical areas into manageable size sampling units for the estimation of domestic crop and livestock production. Computer software to automate the boundary delineation procedure, called the computer-assisted stratification and sampling (CASS) system, was developed using a Hewlett Packard color-graphics workstation. The CASS procedures display Thematic Mapper (TM) satellite digital imagery on a graphics display workstation as the backdrop for the onscreen delineation of sampling units. USGS Digital Line Graph (DLG) data for roads and waterways are displayed over the TM imagery to aid in identifying potential sample unit boundaries. Initial analysis conducted with three Missouri counties indicated that CASS was six times faster than the manual techniques in delineating sampling units.
A lightweight distributed framework for computational offloading in mobile cloud computing.
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resources intensive. The resource constraint nature of SMDs require lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers lightweight solution for computational offloading in MCC.
A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resources intensive. The resource constraint nature of SMDs require lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers lightweight solution for computational offloading in MCC. PMID:25127245
An automated optical wedge calibrator for Dobson ozone spectrophotometers
NASA Technical Reports Server (NTRS)
Evans, R. D.; Komhyr, W. D.; Grass, R. D.
1994-01-01
The Dobson ozone spectrophotometer measures the difference of intensity between selected wavelengths in the ultraviolet. The method uses an optical attenuator (the 'Wedge') in this measurement. The knowledge of the relationship of the wedge position to the attenuation is critical to the correct calculation of ozone from the measurement. The procedure to determine this relationship is time-consuming, and requires a highly skilled person to perform it correctly. The relationship has been found to change with time. For reliable ozone values, the procedure should be done on a Dobson instrument at regular intervals. Due to the skill and time necessary to perform this procedure, many instruments have gone as long as 15 years between procedures. This article describes an apparatus that performs the procedure under computer control, and is adaptable to the majority of existing Dobson instruments. Part of the apparatus is usable for normal operation of the Dobson instrument, and would allow computer collection of the data and real-time ozone measurements.
Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André
2016-01-01
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Procedural wound geometry and blood flow generation for medical training simulators
NASA Astrophysics Data System (ADS)
Aras, Rifat; Shen, Yuzhong; Li, Jiang
2012-02-01
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. Similar to any other training scenarios, computer simulations can be effective means for wound treatment training purposes. For immersive and high fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need of labor intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.
A probabilistic seismic risk assessment procedure for nuclear power plants: (II) Application
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
This paper presents the procedures and results of intensity- and time-based seismic risk assessments of a sample nuclear power plant (NPP) to demonstrate the risk-assessment methodology proposed in its companion paper. The intensity-based assessments include three sets of sensitivity studies to identify the impact of the following factors on the seismic vulnerability of the sample NPP, namely: (1) the description of fragility curves for primary and secondary components of NPPs, (2) the number of simulations of NPP response required for risk assessment, and (3) the correlation in responses between NPP components. The time-based assessment is performed as a series of intensity-based assessments. The studies illustrate the utility of the response-based fragility curves and the inclusion of the correlation in the responses of NPP components directly in the risk computation. ?? 2011 Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliaga, José I., E-mail: aliaga@uji.es; Alonso, Pedro; Badía, José M.
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousandsmore » degrees of freedom and the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.« less
AUTOMATED GIS WATERSHED ANALYSIS TOOLS FOR RUSLE/SEDMOD SOIL EROSION AND SEDIMENTATION MODELING
A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed using a suite of automated Arc Macro Language (AML ) scripts and a pair of processing- intensive ANSI C++ executable programs operating on an ESRI ArcGIS 8.x Workstation platform...
NASA Technical Reports Server (NTRS)
Chackerian, C., Jr.; Farreng, R.; Guelachvili, G.; Rossetti, C.; Urban, W.
1984-01-01
Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least squares fitting procedure to obtain the ground electronic state electric-dipole-moment function of carbon monoxide valid in the range of nuclear oscillation (0.87 to 1.01 A) of about the V = 38th vibrational level. Mechanical anharmonicity intensity factors, H, are computed from this function for delta V + = 1, 2, 3, with or = to 38.
NASA Technical Reports Server (NTRS)
Kottarchyk, M.; Chen, S.-H.; Asano, S.
1979-01-01
The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
A description is given of an algorithm for computing ro-vibrational energy levels for tetratomic molecules. The expressions required for evaluating transition intensities are also given. The variational principle is used to determine the energy levels and the kinetic energy operator is simple and evaluated exactly. The computational procedure is split up into the determination of one dimensional radial basis functions, the computation of a contracted rotational-bending basis, followed by a final variational step coupling all degrees of freedom. An angular basis is proposed whereby the rotational-bending contraction takes place in three steps. Angular matrix elements of the potential are evaluated by expansion in terms of a suitable basis and the angular integrals are given in a factorized form which simplifies their evaluation. The basis functions in the final variational step have the full permutation symmetries of the identical particles. Sample results are given for HCCH and BH3.
Computer-aided boundary delineation of agricultural lands
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1989-01-01
The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.
Quantitative Assay for Starch by Colorimetry Using a Desktop Scanner
ERIC Educational Resources Information Center
Matthews, Kurt R.; Landmark, James D.; Stickle, Douglas F.
2004-01-01
The procedure to produce standard curve for starch concentration measurement by image analysis using a color scanner and computer for data acquisition and color analysis is described. Color analysis is performed by a Visual Basic program that measures red, green, and blue (RGB) color intensities for pixels within the scanner image.
1973-10-01
intensity computation are shown in Figure 17. Using the same formal procedure outlined by Winne & Wundt . a notch geometry can be chosen to induce...Nitride at Elevated Temperatures . Winne, D.H. and Wundt , B.M., "Application of the Gnffith-Irwm Theory of Crack Propagation to the Bursting Behavior
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for a practical application critically depends on scalability of centralized control performance. The paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high performance in achieving network capacity with reduced computation cost, which is a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the orders of request processes and in the procedures of routing table updates. Combined in a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest throughput of networks and acceptable computation scalability. We further investigate trade-off relationship between network throughput and computation complexity in routing table update procedure by a simulation study.
Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I
2015-11-03
We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis using continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Determination of differential expression of each protein is based on the standardized Z-statistic based on the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known Empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with computational software tool for comparative quantitative proteomics analysis. It features various extensions of QSPEC method originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error),more » and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.« less
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008) which covered a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (€194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the intensive process of manual coding by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Measuring and Estimating Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2013-01-01
Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating up the part surface using a flash of flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of pixel intensity of image pixels. The IR technique involves recording of the IR video image data and analysis of the data using the normalized pixel intensity and temperature contrast analysis method for characterization of void-like flaws for depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements during data acquisition.
Tafelski, Sascha; Kerper, Léonie F; Salz, Anna-Lena; Spies, Claudia; Reuter, Eva; Nachtigall, Irit; Schäfer, Michael; Krannich, Alexander; Krampe, Henning
2016-07-01
Previous studies reported conflicting results concerning different pain perceptions of men and women. Recent research found higher pain levels in men after major surgery, contrasted by women after minor procedures. This trial investigates differences in self-reported preoperative pain intensity between genders before surgery.Patients were enrolled in 2011 and 2012 presenting for preoperative evaluation at the anesthesiological assessment clinic at Charité University hospital. Out of 5102 patients completing a computer-assisted self-assessment, 3042 surgical patients with any preoperative pain were included into this prospective observational clinical study. Preoperative pain intensity (0-100 VAS, visual analog scale) was evaluated integrating psychological cofactors into analysis.Women reported higher preoperative pain intensity than men with median VAS scores of 30 (25th-75th percentiles: 10-52) versus 21 (10-46) (P < 0.001). Adjusted multiple regression analysis showed that female gender remained statistically significantly associated with higher pain intensity (P < 0.001). Gender differences were consistent across several subgroups especially with varying patterns in elderly. Women scheduled for minor and moderate surgical procedures showed largest differences in overall pain compared to men.This large clinical study observed significantly higher preoperative pain intensity in female surgical patients. This gender difference was larger in the elderly potentially contradicting the current hypothesis of a primary sex-hormone derived effect. The observed variability in specific patient subgroups may help to explain heterogeneous findings of previous studies.
Linear solver performance in elastoplastic problem solution on GPU cluster
NASA Astrophysics Data System (ADS)
Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.
2017-12-01
Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large scale systems need to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens and hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The method convergence highly depends on the operator spectrum of a problem stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable for different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
Room temperature linelists for CO2 asymmetric isotopologues with ab initio computed intensities
NASA Astrophysics Data System (ADS)
Zak, Emil J.; Tennyson, Jonathan; Polyansky, Oleg L.; Lodi, Lorenzo; Zobov, Nikolay F.; Tashkun, Sergei A.; Perevalov, Valery I.
2017-12-01
The present paper reports room temperature line lists for six asymmetric isotopologues of carbon dioxide: 16O12C18O (628), 16O12C17O (627), 16O13C18O (638),16O13C17O (637), 17O12C18O (728) and 17O13C18O (738), covering the range 0-8000 cm-1. Variational rotation-vibration wavefunctions and energy levels are computed using the DVR3D software suite and a high quality semi-empirical potential energy surface (PES), followed by computation of intensities using an ab initio dipole moment surface (DMS). A theoretical procedure for quantifying sensitivity of line intensities to minor distortions of the PES/DMS renders our theoretical model as critically evaluated. Several recent high quality measurements and theoretical approaches are discussed to provide a benchmark of our results against the most accurate available data. Indeed, the thesis of transferability of accuracy among different isotopologues with the use of mass-independent PES is supported by several examples. Thereby, we conclude that the majority of line intensities for strong bands are predicted with sub-percent accuracy. Accurate line positions are generated using an effective Hamiltonian, constructed from the latest experiments. This study completes the list of relevant isotopologues of carbon dioxide; these line lists are available to remote sensing studies and inclusion in databases.
NASA Astrophysics Data System (ADS)
Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.
2012-01-01
Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.
Advances and Limitations of Modern Macroseismic Data Gathering
NASA Astrophysics Data System (ADS)
Wald, D. J.; Dewey, J. W.; Quitoriano, V. P. R.
2016-12-01
All macroseismic data are not created equal. At about the time that the European Macroseismic Scale 1998 (EMS-98; itself a revision of EMS-92) formalized a procedure to account for building vulnerability and damage grade statistics in assigning intensities from traditional field observations, a parallel universe of internet-based intensity reporting was coming online. The divergence of intensities assigned by field reconnaissance and intensities based on volunteered reports poses unique challenges. U.S. Geological Survey's Did You Feel It? (DYFI) and its Italian (National Institute of Geophysics and Volcanology) counterpart use questionnaires based on the traditional format, submitted by volunteers. The Italian strategy uses fuzzy logic to assign integer values of intensity from questionnaire responses, whereas DYFI assigns weights to macroseismic effects and computes real-valued intensities to a 0.1 MMI unit precision. DYFI responses may be grouped together by postal code, or by smaller latitude-longitude boxes; calculated intensities may vary depending on how observations are grouped. New smartphone-based procedures depart further from tradition by asking respondents to select from cartoons corresponding to various intensity levels that best fit their experience. While nearly instantaneous, these thumbnail-based intensities are strictly integer values and do not record specific macroseismic effects. Finally, a recent variation on traditional intensity assignments derives intensities not from field surveys or questionnaires sent to target audiences but rather from media reports, photojournalism, and internet posts that may or may not constitute the representative observations needed for consistent EMS-98 assignments. We review these issues and suggest due-diligence strategies for utilizing varied macroseismic data sets within real-time applications and in quantitative hazard and engineering analyses.
HIFU procedures at moderate intensities--effect of large blood vessels.
Hariharan, P; Myers, M R; Banerjee, R K
2007-06-21
A three-dimensional computational model is presented for studying the efficacy of high-intensity focused ultrasound (HIFU) procedures targeted near large blood vessels. The analysis applies to procedures performed at intensities below the threshold for cavitation, boiling and highly nonlinear propagation, but high enough to increase tissue temperature a few degrees per second. The model is based upon the linearized KZK equation and the bioheat equation in tissue. In the blood vessel the momentum and energy equations are satisfied. The model is first validated in a tissue phantom, to verify the absence of bubble formation and nonlinear effects. Temperature rise and lesion-volume calculations are then shown for different beam locations and orientations relative to a large vessel. Both single and multiple ablations are considered. Results show that when the vessel is located within about a beam width (few mm) of the ultrasound beam, significant reduction in lesion volume is observed due to blood flow. However, for gaps larger than a beam width, blood flow has no major effect on the lesion formation. Under the clinically representative conditions considered, the lesion volume is reduced about 40% (relative to the no-flow case) when the beam is parallel to the blood vessel, compared to about 20% for a perpendicular orientation. Procedures involving multiple ablation sites are affected less by blood flow than single ablations. The model also suggests that optimally focused transducers can generate lesions that are significantly larger (>2 times) than the ones produced by highly focused beams.
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operational room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms: a single CPU, a single GPU, and a solution based on FPGA technology have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, but gaining a performance increase at one or more orders of magnitude faster. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally-intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone, and fully-documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source, and modular.Examples of applying the software to empirical data from Drosophila melanogaster, and testing the procedure on simulated data, are shown. In practice, the ABCreg simplifies implementing ABC based on local-linear regression.
Stress Intensity Factor Plasticity Correction for Flaws in Stress Concentration Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, E.; Wilson, W.K.
2000-02-01
Plasticity corrections to elastically computed stress intensity factors are often included in brittle fracture evaluation procedures. These corrections are based on the existence of a plastic zone in the vicinity of the crack tip. Such a plastic zone correction is included in the flaw evaluation procedure of Appendix A to Section XI of the ASME Boiler and Pressure Vessel Code. Plasticity effects from the results of elastic and elastic-plastic explicit flaw finite element analyses are examined for various size cracks emanating from the root of a notch in a panel and for cracks located at fillet fadii. The results ofmore » these caluclations provide conditions under which the crack-tip plastic zone correction based on the Irwin plastic zone size overestimates the plasticity effect for crack-like flaws embedded in stress concentration regions in which the elastically computed stress exceeds the yield strength of the material. A failure assessment diagram (FAD) curve is employed to graphically c haracterize the effect of plasticity on the crack driving force. The Option 1 FAD curve of the Level 3 advanced fracture assessment procedure of British Standard PD 6493:1991, adjusted for stress concentration effects by a term that is a function of the applied load and the ratio of the local radius of curvature at the flaw location to the flaw depth, provides a satisfactory bound to all the FAD curves derived from the explicit flaw finite element calculations. The adjusted FAD curve is a less restrictive plasticity correction than the plastic zone correction of Section XI for flaws embedded in plastic zones at geometric stress concentrators. This enables unnecessary conservatism to be removed from flaw evaluation procedures that utilize plasticity corrections.« less
The reduced basis method for the electric field integral equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, formore » many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.« less
SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT
Halliburton, Sandra S.; Abbara, Suhny; Chen, Marcus Y.; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L.; Shaw, Leslee J.; Hausleiter, Jörg
2012-01-01
Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring. PMID:21723512
Determination of stress intensity factors for interface cracks under mixed-mode loading
NASA Technical Reports Server (NTRS)
Naik, Rajiv A.; Crews, John H., Jr.
1992-01-01
A simple technique was developed using conventional finite element analysis to determine stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. This technique involves the calculation of crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 for mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal and normal shear loading. The correlation between exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute K2/K1 ratio which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
Tuomivaara, S; Ketola, R; Huuhtanen, P; Toivonen, R
2008-02-01
Musculoskeletal strain and other symptoms are common in visual display unit (VDU) work. Psychosocial factors are closely related to the outcome and experience of musculoskeletal strain. The user-computer relationship from the viewpoint of the quality of perceived competence in computer use was assessed as a psychosocial stress indicator. It was assumed that the perceived competence in computer use moderates the experience of musculoskeletal strain and the success of the ergonomics intervention. The participants (n = 124, female 58%, male 42%) worked with VDU for more than 4 h per week. They took part in an ergonomics intervention and were allocated into three groups: intensive; education; and reference group. Musculoskeletal strain, the level of ergonomics of the workstation assessed by the experts in ergonomics and amount of VDU work were estimated at the baseline and at the 10-month follow-up. Age, gender and the perceived competence in computer use were assessed at the baseline. The perceived competence in computer use predicted strain in the upper and the lower part of the body at the follow-up. The interaction effect shows that the intensive ergonomics intervention procedure was the most effective among participants with high perceived competence. The interpretation of the results was that an anxiety-provoking and stressful user-computer relationship prevented the participants from being motivated and from learning in the ergonomics intervention. In the intervention it is important to increase the computer competence along with the improvements of physical workstation and work organization.
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time consuming. In order to speed-up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of its two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation up to 28.8 times compared to ZNCC calculated directly, depending on the size of interrogation window and region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended to be used for initial velocity field calculation, with further correction using more accurate techniques.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data are essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data are essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.
Anisotropic scattering of discrete particle arrays.
Paul, Joseph S; Fu, Wai Chong; Dokos, Socrates; Box, Michael
2010-05-01
Far-field intensities of light scattered from a linear centro-symmetric array illuminated by a plane wave of incident light are estimated at a series of detector angles. The intensities are computed from the superposition of E-fields scattered by the individual array elements. An average scattering phase function is used to model the scattered fields of individual array elements. The nature of scattering from the array is investigated using an image (theta-phi plot) of the far-field intensities computed at a series of locations obtained by rotating the detector angle from 0 degrees to 360 degrees, corresponding to each angle of incidence in the interval [0 degrees 360 degrees]. The diffraction patterns observed from the theta-Phi plot are compared with those for isotropic scattering. In the absence of prior information on the array geometry, the intensities corresponding to theta-Phi pairs satisfying the Bragg condition are used to estimate the phase function. An algorithmic procedure is presented for this purpose and tested using synthetic data. The relative error between estimated and theoretical values of the phase function is shown to be determined by the mean spacing factor, the number of elements, and the far-field distance. An empirical relationship is presented to calculate the optimal far-field distance for a given specification of the percentage error.
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.
Moore, G. W.; Hutchins, G. M.; Miller, R. E.
1984-01-01
Computerized indexing and retrieval of medical records is increasingly important; but the use of natural language versus coded languages (SNOP, SNOMED) for this purpose remains controversial. In an effort to develop search strategies for natural language text, the authors examined the anatomic diagnosis reports by computer for 7000 consecutive autopsy subjects spanning a 13-year period at The Johns Hopkins Hospital. There were 923,657 words, 11,642 of them distinct. The authors observed an average of 1052 keystrokes, 28 lines, and 131 words per autopsy report, with an average 4.6 words per line and 7.0 letters per word. The entire text file represented 921 hours of secretarial effort. Words ranged in frequency from 33,959 occurrences of "and" to one occurrence for each of 3398 different words. Searches for rare diseases with unique names or for representative examples of common diseases were most readily performed with the use of computer-printed key word in context (KWIC) books. For uncommon diseases designated by commonly used terms (such as "cystic fibrosis"), needs were best served by a computerized search for logical combinations of key words. In an unbalanced word distribution, each conjunction (logical and) search should be performed in ascending order of word frequency; but each alternation (logical inclusive or) search should be performed in descending order of word frequency. Natural language text searches will assume a larger role in medical records analysis as the labor-intensive procedure of translation into a coded language becomes more costly, compared with the computer-intensive procedure of text searching. PMID:6546837
Calculating intensities using effective Hamiltonians in terms of Coriolis-adapted normal modes.
Karthikeyan, S; Krishnan, Mangala Sunder; Carrington, Tucker
2005-01-15
The calculation of rovibrational transition energies and intensities is often hampered by the fact that vibrational states are strongly coupled by Coriolis terms. Because it invalidates the use of perturbation theory for the purpose of decoupling these states, the coupling makes it difficult to analyze spectra and to extract information from them. One either ignores the problem and hopes that the effect of the coupling is minimal or one is forced to diagonalize effective rovibrational matrices (rather than diagonalizing effective rotational matrices). In this paper we apply a procedure, based on a quantum mechanical canonical transformation for deriving decoupled effective rotational Hamiltonians. In previous papers we have used this technique to compute energy levels. In this paper we show that it can also be applied to determine intensities. The ideas are applied to the ethylene molecule.
NASA Astrophysics Data System (ADS)
Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.
This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, J-integral, and modified crack-closure integral. All mentioned procedures involve only one analysis step and are incorporated in the post-processor page of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a closed-form solution problem under mixed-mode conditions. The accuracy of these described methods then is discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.
Conceptual Design Oriented Wing Structural Analysis and Optimization
NASA Technical Reports Server (NTRS)
Lau, May Yuen
1996-01-01
Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary-analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.
Polarization Imaging Apparatus with Auto-Calibration
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)
2013-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned 22.5 deg, a second variable phase retarder with its optical axis aligned 45 deg, a linear polarizer, a imaging sensor for sensing the intensity images of the sample, a controller and a computer. Two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequential of voltages to control the phase retardations of the first and second variable phase retarders. A auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I(sub 0), I(sub 1), I(sub 2) and I(sub 3) of the sample were captured by imaging sensor when the phase retardations of VPRs were set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. Then four Stokes components of a Stokes image, S(sub 0), S(sub 1), S(sub 2) and S(sub 3) were calculated using the four intensity images.
Polarization imaging apparatus with auto-calibration
Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui
2013-08-20
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned 22.5.degree., a second variable phase retarder with its optical axis aligned 45.degree., a linear polarizer, a imaging sensor for sensing the intensity images of the sample, a controller and a computer. Two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequential of voltages to control the phase retardations of the first and second variable phase retarders. A auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I.sub.0, I.sub.1, I.sub.2 and I.sub.3 of the sample were captured by imaging sensor when the phase retardations of VPRs were set at (0,0), (.pi.,0), (.pi.,.pi.) and (.pi./2,.pi.), respectively. Then four Stokes components of a Stokes image, S.sub.0, S.sub.1, S.sub.2 and S.sub.3 were calculated using the four intensity images.
A strategy for selecting data mining techniques in metabolomics.
Banimustafa, Ahmed Hmaidan; Hardy, Nigel W
2012-01-01
There is a general agreement that the development of metabolomics depends not only on advances in chemical analysis techniques but also on advances in computing and data analysis methods. Metabolomics data usually requires intensive pre-processing, analysis, and mining procedures. Selecting and applying such procedures requires attention to issues including justification, traceability, and reproducibility. We describe a strategy for selecting data mining techniques which takes into consideration the goals of data mining techniques on the one hand, and the goals of metabolomics investigations and the nature of the data on the other. The strategy aims to ensure the validity and soundness of results and promote the achievement of the investigation goals.
Combined mine tremors source location and error evaluation in the Lubin Copper Mine (Poland)
NASA Astrophysics Data System (ADS)
Leśniak, Andrzej; Pszczoła, Grzegorz
2008-08-01
A modified method of mine tremors location used in Lubin Copper Mine is presented in the paper. In mines where an intensive exploration is carried out a high accuracy source location technique is usually required. The effect of the flatness of the geophones array, complex geological structure of the rock mass and intense exploitation make the location results ambiguous in such mines. In the present paper an effective method of source location and location's error evaluations are presented, combining data from two different arrays of geophones. The first consists of uniaxial geophones spaced in the whole mine area. The second is installed in one of the mining panels and consists of triaxial geophones. The usage of the data obtained from triaxial geophones allows to increase the hypocenter vertical coordinate precision. The presented two-step location procedure combines standard location methods: P-waves directions and P-waves arrival times. Using computer simulations the efficiency of the created algorithm was tested. The designed algorithm is fully non-linear and was tested on the multilayered rock mass model of the Lubin Copper Mine, showing a computational better efficiency than the traditional P-wave arrival times location algorithm. In this paper we present the complete procedure that effectively solves the non-linear location problems, i.e. the mine tremor location and measurement of the error propagation.
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT) electropermeabilization, parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above permeabilizing threshold to achieve effective ECT. In this paper, we present a model-based optimization approach toward determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and electric field intensities computed by finite element model in selected points of tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computer tomography images, representing brain tissue with tumor. Continuous parameter subject to optimization was pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. Optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached preventing the exposure of the entire volume of the tumor to electric field intensities above permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. Model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between permeabilizing threshold and irreversible threshold-the latter causing tissue necrosis. This can be obtained by adding constraints on maximum electric field intensity in optimization procedure.
Quantitative image analysis of immunohistochemical stains using a CMYK color model
Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W
2007-01-01
Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file) as well as compared to the hematoxylin counterstain was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
High-Performance Java Codes for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale computationally -intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
NASA Astrophysics Data System (ADS)
Derkachov, G.; Jakubczyk, T.; Jakubczyk, D.; Archer, J.; Woźniak, M.
2017-07-01
Utilising Compute Unified Device Architecture (CUDA) platform for Graphics Processing Units (GPUs) enables significant reduction of computation time at a moderate cost, by means of parallel computing. In the paper [Jakubczyk et al., Opto-Electron. Rev., 2016] we reported using GPU for Mie scattering inverse problem solving (up to 800-fold speed-up). Here we report the development of two subroutines utilising GPU at data preprocessing stages for the inversion procedure: (i) A subroutine, based on ray tracing, for finding spherical aberration correction function. (ii) A subroutine performing the conversion of an image to a 1D distribution of light intensity versus azimuth angle (i.e. scattering diagram), fed from a movie-reading CPU subroutine running in parallel. All subroutines are incorporated in PikeReader application, which we make available on GitHub repository. PikeReader returns a sequence of intensity distributions versus a common azimuth angle vector, corresponding to the recorded movie. We obtained an overall ∼ 400 -fold speed-up of calculations at data preprocessing stages using CUDA codes running on GPU in comparison to single thread MATLAB-only code running on CPU.
NASA Astrophysics Data System (ADS)
Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela
2017-05-01
Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). As the original Retinex implementation, TR and ETR scan the neighborhood of any image pixel by paths and rescale their chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been showed to be particularly effective: its paths are local minima of an energy functional, designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR poorly practicable, here we present a light version of it, named Light Energy-driven TR, and obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.
Analysis of positron lifetime spectra in polymers
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.
1988-01-01
A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in the porous media on complex subsurface problem is a computationally intensive application. To meet the increasingly computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT that is a well-established code for simulating subsurface multi-phase flow and reactive transport problems, we developed a high performance computing THC-MP based on massive parallel computer, which extends greatly on the computational capability for the original code. The domain decomposition method was applied to the coupled numerical computing procedure in the THC-MP. We designed the distributed data structure, implemented the data initialization and exchange between the computing nodes and the core solving module using the hybrid parallel iterative and direct solver. Numerical accuracy of the THC-MP was verified through a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computing and sequential computing (original code). Execution efficiency and code scalability were examined through field scale carbon sequestration applications on the multicore cluster. The results demonstrate successfully the enhanced performance using the THC-MP on parallel computing facilities.
NASA Astrophysics Data System (ADS)
Li, Hechao
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for quantitative structure-property relations establishment and its performance prediction and optimization. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms like filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require huge number of tomographic projections and segmentation process before conducting microstructural quantification. This can be quite time consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information in forms of spatial correlation functions from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability of an arbitrary point in the material system belonging to specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40) is presented. Moreover, a stochastic procedure for multi-modal data fusion is proposed, where both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is proved to be able to integrate the complementary data to perform an excellent optimization procedure, which indicates its high efficiency in using limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of energy landscape associated with different number of projections. Ground-state degeneracy of a microstructure is found to decrease with increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for given microstructures and projection number.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with applicationmore » of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. Here this procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi- core CPUs and GPUs.« less
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with applicationmore » of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. Here this procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi- core CPUs and GPUs.« less
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.
McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
NASA Astrophysics Data System (ADS)
McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.
2017-11-01
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
Experimental generation of Laguerre-Gaussian beam using digital micromirror device.
Ren, Yu-Xuan; Li, Ming; Huang, Kun; Wu, Jian-Guang; Gao, Hong-Fang; Wang, Zi-Qiang; Li, Yin-Mei
2010-04-01
A digital micromirror device (DMD) modulates laser intensity through computer control of the device. We experimentally investigate the performance of the modulation property of a DMD and optimize the modulation procedure through image correction. Furthermore, Laguerre-Gaussian (LG) beams with different topological charges are generated by projecting a series of forklike gratings onto the DMD. We measure the field distribution with and without correction, the energy of LG beams with different topological charges, and the polarization property in sequence. Experimental results demonstrate that it is possible to generate LG beams with a DMD that allows the use of a high-intensity laser with proper correction to the input images, and that the polarization state of the LG beam differs from that of the input beam.
Automatic detection of white-light flare kernels in SDO/HMI intensitygrams
NASA Astrophysics Data System (ADS)
Mravcová, Lucia; Švanda, Michal
2017-11-01
Solar flares with a broadband emission in the white-light range of the electromagnetic spectrum belong to most enigmatic phenomena on the Sun. The origin of the white-light emission is not entirely understood. We aim to systematically study the visible-light emission connected to solar flares in SDO/HMI observations. We developed a code for automatic detection of kernels of flares with HMI intensity brightenings and study properties of detected candidates. The code was tuned and tested and with a little effort, it could be applied to any suitable data set. By studying a few flare examples, we found indication that HMI intensity brightening might be an artefact of the simplified procedure used to compute HMI observables.
NASA Technical Reports Server (NTRS)
Chackerian, C., Jr.; Farrenq, R.; Guelachvili, G.; Rossetti, C.; Urban, W.
1984-01-01
Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least-squares fitting procedure to obtain the ground electronic state electric dipole moment function of carbon monoxide valid in the range of nuclear oscillation (0.87-1.91 A) of about the V = 38th vibrational level. Vibrational transition matrix elements are computed from this function for Delta V = 1, 2, 3 with V not more than 38.
Augmentation of Early Intensity Forecasting in Tropical Cyclones
2011-09-30
modeled storms to the measured signatures. APPROACH The deviation-angle variance technique was introduced in Pineros et al. (2008) as a procedure to...the algorithm developed in the first year of the project. The new method used best-track storm fixes as the points to compute the DAV signal. We...In the North Atlantic basin, RMSE for tropical storm category is 11 kt, hurricane categories 1-3 is 12.5 kt, category 4 is 18 kt and category 5 is
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Koichi, E-mail: matsumk2@cc.saga-u.ac.jp; Nojiri, Junichi; Takase, Yukinori
2007-06-15
We report a case of cerebral lipiodol embolism following transcatheter chemoembolization (TACE) for hepatocellular carcinoma. A 70-year-old woman with a large unresectable hepatocellular carcinoma underwent TACE. Her level of consciousness deteriorated after the procedure, and magnetic resonance imaging and non-contrast computed tomography revealed a cerebral lipiodol embolism. Despite intensive care, the patient died 2 weeks later. The complication might have been due to systemic-pulmonary shunts caused by previous surgeries and/or direct invasion of the recurrent tumor.
Fully anharmonic IR and Raman spectra of medium-size molecular systems: accuracy and interpretation†
Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien
2015-01-01
Computation of full infrared (IR) and Raman spectra (including absolute intensities and transition energies) for medium- and large-sized molecular systems beyond the harmonic approximation is one of the most interesting challenges of contemporary computational chemistry. Contrary to common beliefs, low-order perturbation theory is able to deliver results of high accuracy (actually often better than those issuing from current direct dynamics approaches) provided that anharmonic resonances are properly managed. This perspective sketches the recent developments in our research group toward the development a robust and user-friendly virtual spectrometer rooted into the second-order vibrational perturbation theory (VPT2) and usable also by non-specialists essentially as a black-box procedure. Several examples are explicitly worked out in order to illustrate the features of our computational tool together with the most important ongoing developments. PMID:24346191
Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang
2018-01-25
Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult, and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic and real-time interactive simulator for training colonoscopy procedure is presented, which can even include polypectomy simulation. Our approach models the colonoscopy as thick flexible elastic rods with different resolutions which are dynamically adaptive to the curvature of the colon. More material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. We present a simulator for training colonoscopy procedure. In addition, we propose a set of key aspects of our simulator that give fast, high fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.
Intensity Conserving Spectral Fitting
NASA Technical Reports Server (NTRS)
Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap
2016-01-01
The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity, than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
Farace, P; Pontalti, R; Cristoforetti, L; Antolini, R; Scarpa, M
1997-11-01
This paper presents an automatic method to obtain tissue complex permittivity values to be used as input data in the computer modelling for hyperthermia treatment planning. Magnetic resonance (MR) images were acquired and the tissue water content was calculated from the signal intensity of the image pixels. The tissue water content was converted into complex permittivity values by monotonic functions based on mixture theory. To obtain a water content map by MR imaging a gradient-echo pulse sequence was used and an experimental procedure was set up to correct for relaxation and radiofrequency field inhomogeneity effects on signal intensity. Two approaches were followed to assign the permittivity values to fat-rich tissues: (i) fat-rich tissue localization by a segmentation procedure followed by assignment of tabulated permittivity values; (ii) water content evaluation by chemical shift imaging followed by permittivity calculation. Tests were performed on phantoms of known water content to establish the reliability of the proposed method. MRI data were acquired and processed pixel-by-pixel according to the outlined procedure. The signal intensity in the phantom images correlated well with water content. Experiments were performed on volunteers' healthy tissue. In particular two anatomical structures were chosen to calculate permittivity maps: the head and the thigh. The water content and electric permittivity values were obtained from the MRI data and compared to others in the literature. A good agreement was found for muscle, cerebrospinal fluid (CSF) and white and grey matter. The advantages of the reported method are discussed in the light of possible application in hyperthermia treatment planning.
Waveform distortion by 2-step modeling ground vibration from trains
NASA Astrophysics Data System (ADS)
Wang, F.; Chen, W.; Zhang, J.; Li, F.; Liu, H.; Chen, X.; Pan, Y.; Li, G.; Xiao, F.
2017-10-01
The 2-step procedure is widely used in numerical research on ground vibrations from trains. The ground is inconsistently represented by a simplified model in the first step and by a refined model in the second step, which may lead to distortions in the simulation results. In order to reveal this modeling error, time histories of ground-borne vibrations were computed based on the 2-step procedure and then compared with the results from a benchmark procedure of the whole system. All parameters involved were intentionally set as equal for the 2 methods, which ensures that differences in the results originated from the inconsistencies of the ground model. Excited by wheel loads of low speeds such as 60 km/h and low frequencies less than 8 Hz, the computed responses of the subgrade were quite close to the benchmarks. However, notable distortions were found in all loading cases at higher frequencies. Moreover, significant underestimation of intensity occurred when load frequencies equaled 16 Hz. This occurred not only at the subgrade but also at the points 10 m and 20 m away from the track. When the load speed was increased to 350 km/h, all computed waveforms were distorted, including the responses to the loads at very low frequencies. The modeling error found herein suggests that the ground models in the 2 steps should be calibrated in terms of frequency bands to be investigated, and the speed of train should be taken into account at the same time.
Optimized photonic gauge of extreme high vacuum with Petawatt lasers
NASA Astrophysics Data System (ADS)
Paredes, Ángel; Novoa, David; Tommasini, Daniele; Mas, Héctor
2014-03-01
One of the latest proposed applications of ultra-intense laser pulses is their possible use to gauge extreme high vacuum by measuring the photon radiation resulting from nonlinear Thomson scattering within a vacuum tube. Here, we provide a complete analysis of the process, computing the expected rates and spectra, both for linear and circular polarizations of the laser pulses, taking into account the effect of the time envelope in a slowly varying envelope approximation. We also design a realistic experimental configuration allowing for the implementation of the idea and compute the corresponding geometric efficiencies. Finally, we develop an optimization procedure for this photonic gauge of extreme high vacuum at high repetition rate Petawatt and multi-Petawatt laser facilities, such as VEGA, JuSPARC and ELI.
The binding domain of the HMGB1 inhibitor carbenoxolone: Theory and experiment
NASA Astrophysics Data System (ADS)
Mollica, Luca; Curioni, Alessandro; Andreoni, Wanda; Bianchi, Marco E.; Musco, Giovanna
2008-05-01
We present a combined computational and experimental study of the interaction of the Box A of the HMGB1 protein and carbenoxolone, an inhibitor of its pro-inflammatory activity. The computational approach consists of classical molecular dynamics (MD) simulations based on the GROMOS force field with quantum-refined (QRFF) atomic charges for the ligand. Experimental data consist of fluorescence intensities, chemical shift displacements, saturation transfer differences and intermolecular Nuclear Overhauser Enhancement signals. Good agreement is found between observations and the conformation of the ligand-protein complex resulting from QRFF-MD. In contrast, simple docking procedures and MD based on the unrefined force field provide models inconsistent with experiment. The ligand-protein binding is dominated by non-directional interactions.
HIFU procedures at moderate intensities—effect of large blood vessels
NASA Astrophysics Data System (ADS)
Hariharan, P.; Myers, M. R.; Banerjee, R. K.
2007-07-01
A three-dimensional computational model is presented for studying the efficacy of high-intensity focused ultrasound (HIFU) procedures targeted near large blood vessels. The analysis applies to procedures performed at intensities below the threshold for cavitation, boiling and highly nonlinear propagation, but high enough to increase tissue temperature a few degrees per second. The model is based upon the linearized KZK equation and the bioheat equation in tissue. In the blood vessel the momentum and energy equations are satisfied. The model is first validated in a tissue phantom, to verify the absence of bubble formation and nonlinear effects. Temperature rise and lesion-volume calculations are then shown for different beam locations and orientations relative to a large vessel. Both single and multiple ablations are considered. Results show that when the vessel is located within about a beam width (few mm) of the ultrasound beam, significant reduction in lesion volume is observed due to blood flow. However, for gaps larger than a beam width, blood flow has no major effect on the lesion formation. Under the clinically representative conditions considered, the lesion volume is reduced about 40% (relative to the no-flow case) when the beam is parallel to the blood vessel, compared to about 20% for a perpendicular orientation. Procedures involving multiple ablation sites are affected less by blood flow than single ablations. The model also suggests that optimally focused transducers can generate lesions that are significantly larger (>2 times) than the ones produced by highly focused beams.
2015-01-01
With ever-growing aging population and demand for denture treatments, pressure-induced mucosa lesion and residual ridge resorption remain main sources of clinical complications. Conventional denture design and fabrication are challenged for its labor and experience intensity, urgently necessitating an automatic procedure. This study aims to develop a fully automatic procedure enabling shape optimization and additive manufacturing of removable partial dentures (RPD), to maximize the uniformity of contact pressure distribution on the mucosa, thereby reducing associated clinical complications. A 3D heterogeneous finite element (FE) model was constructed from CT scan, and the critical tissue of mucosa was modeled as a hyperelastic material from in vivo clinical data. A contact shape optimization algorithm was developed based on the bi-directional evolutionary structural optimization (BESO) technique. Both initial and optimized dentures were prototyped by 3D printing technology and evaluated with in vitro tests. Through the optimization, the peak contact pressure was reduced by 70%, and the uniformity was improved by 63%. In vitro tests verified the effectiveness of this procedure, and the hydrostatic pressure induced in the mucosa is well below clinical pressure-pain thresholds (PPT), potentially lessening risk of residual ridge resorption. This proposed computational optimization and additive fabrication procedure provides a novel method for fast denture design and adjustment at low cost, with quantitative guidelines and computer aided design and manufacturing (CAD/CAM) for a specific patient. The integration of digitalized modeling, computational optimization, and free-form fabrication enables more efficient clinical adaptation. The customized optimal denture design is expected to minimize pain/discomfort and potentially reduce long-term residual ridge resorption. PMID:26161878
Carroll, Diane L
2014-01-01
In a growing number of requests, family members are asking for proximity to their family member during resuscitation and invasive procedures. The objective of this study was to measure the impact of intensive care unit environments on nurse perception of family presence during resuscitation and invasive procedures. The study used a descriptive survey design with nurses from 9 intensive care units using the Family Presence Self-confidence Scale for resuscitation/invasive procedures that measures nurses' perception of self-confidence and Family Presence Risk-Benefit Scale for resuscitation and invasive procedures that measures nurses' perception of risks/benefits related to managing resuscitation and invasive procedures with family present. There were 207 nurses who responded: 14 male and 184 female nurses (9 missing data), with mean age of 41 ± 11 years, with a mean of 15 years in critical care practice. The environments were defined as surgical (n = 68), medical (n = 43), pediatric/neonatal (n = 34), and mixed adult medical/surgical (n = 36) intensive care units. There were significant differences in self-confidence, with medical and pediatric intensive care unit nurses rating more self-confidence for family presence during resuscitation (F = 7.73, P < .000) and invasive procedures (F = 6.41, P < .000). There were significant differences in risks/benefits with medical and pediatric intensive care unit nurses rating lower risk and higher benefit for resuscitation (F = 7.73, P < .000). Perceptions of family presence were significantly higher for pediatric and medical intensive care unit nurses. Further education and support may be needed in the surgical and mixed intensive care units. Evidence-based practice guidelines that are family centered can define the procedures and resources for family presence, to ultimately promote professional practice.
High Intensity Focused Ultrasound Ablation of Pancreatic Neuroendocrine Tumours: Report of Two Cases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orgera, Gianluigi, E-mail: gianluigi.orgera@ieo.it; Krokidis, Miltiadis; Monfardini, Lorenzo
2011-04-15
We describe the use of ultrasound-guided high-intensity focused ultrasound (HIFU) for ablation of two pancreatic neuroendocrine tumours (NETs; insulinomas) in two inoperable young female patients. Both suffered from episodes of severe nightly hypoglycemia that was not efficiently controlled by medical treatment. After HIFU ablation, local disease control and symptom relief were achieved without postinterventional complications. The patients remained free of symptoms during 9-month follow-up. The lesions appeared to be decreased in volume, and there was decreased enhancing pattern in the multidetector computed tomography control (MDCT). HIFU is likely to be a valid alternative for symptoms control in patients with pancreaticmore » NETs. However, currently the procedure should be reserved for inoperable patients for whom symptoms cannot be controlled by medical therapy.« less
Probabilistic structural mechanics research for parallel processing computers
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.
1991-01-01
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.
Computational attributes of the integral form of the equation of transfer
NASA Technical Reports Server (NTRS)
Frankel, J. I.
1991-01-01
Difficulties can arise in radiative and neutron transport calculations when a highly anisotropic scattering phase function is present. In the presence of anisotropy, currently used numerical solutions are based on the integro-differential form of the linearized Boltzmann transport equation. This paper, departs from classical thought and presents an alternative numerical approach based on application of the integral form of the transport equation. Use of the integral formalism facilitates the following steps: a reduction in dimensionality of the system prior to discretization, the use of symbolic manipulation to augment the computational procedure, and the direct determination of key physical quantities which are derivable through the various Legendre moments of the intensity. The approach is developed in the context of radiative heat transfer in a plane-parallel geometry, and results are presented and compared with existing benchmark solutions. Encouraging results are presented to illustrate the potential of the integral formalism for computation. The integral formalism appears to possess several computational attributes which are well-suited to radiative and neutron transport calculations.
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for a popular brain imaging technique HARDI based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotical statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3], to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and it delivers the same information as probabilistic tractography methods. Our method is computationally cheap and it provides well-founded mathematical and statistical framework where diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way.
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data
Carmichael, Owen; Sakhanenko, Lyudmila
2015-01-01
We develop statistical methodology for a popular brain imaging technique HARDI based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves or fibers which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotical statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3], to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids all the typical limitations of the deterministic tractography methods and it delivers the same information as probabilistic tractography methods. Our method is computationally cheap and it provides well-founded mathematical and statistical framework where diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way. PMID:25937674
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures. PMID:25610911
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. c2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Leibovici, Vera; Magora, Florella; Cohen, Sarale; Ingber, Arieh
2009-01-01
BACKGROUND: Virtual reality immersion (VRI), an advanced computer-generated technique, decreased subjective reports of pain in experimental and procedural medical therapies. Furthermore, VRI significantly reduced pain-related brain activity as measured by functional magnetic resonance imaging. Resemblance between anatomical and neuroendocrine pathways of pain and pruritus may prove VRI to be a suitable adjunct for basic and clinical studies of the complex aspects of pruritus. OBJECTIVES: To compare effects of VRI with audiovisual distraction (AVD) techniques for attenuation of pruritus in patients with atopic dermatitis and psoriasis vulgaris. METHODS: Twenty-four patients suffering from chronic pruritus – 16 due to atopic dermatitis and eight due to psoriasis vulgaris – were randomly assigned to play an interactive computer game using a special visor or a computer screen. Pruritus intensity was self-rated before, during and 10 min after exposure using a visual analogue scale ranging from 0 to 10. The interviewer rated observed scratching on a three-point scale during each distraction program. RESULTS: Student’s t tests were significant for reduction of pruritus intensity before and during VRI and AVD (P=0.0002 and P=0.01, respectively) and were significant only between ratings before and after VRI (P=0.017). Scratching was mostly absent or mild during both programs. CONCLUSIONS: VRI and AVD techniques demonstrated the ability to diminish itching sensations temporarily. Further studies on the immediate and late effects of interactive computer distraction techniques to interrupt itching episodes will open potential paths for future pruritus research. PMID:19714267
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanna H Oxstrand; Katya L Le Blanc
The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned for these previous efforts wemore » are now exploring a more unknown application for computer based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper based procedure use which will help to identify desirable features for computer based procedure prototypes. Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper based procedures should be transferred to computer-based procedures as well as what features should not be incorporated. The model also provides a means to identify what new features not present in paper based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer based procedures.« less
Inside-out: comparing internally generated and externally generated basic emotions.
Salas, Christian E; Radovic, Darinka; Turnbull, Oliver H
2012-06-01
A considerable number of mood induction (MI) procedures have been developed to elicit emotion in normal and clinical populations. Although external procedures (e.g., film clips, pictures) are widely used, a number of experiments elicit emotion by using self-generated procedures (e.g., recalling an emotional personal episode). However, no study has directly compared the effectiveness of two types of internal versus external MI across multiple discrete emotions. In the present experiment, 40 undergraduate students watched film clips (external procedure) and recalled personal events (internal procedure) inducing 4 basic emotions (fear, anger, joy, sadness) and later completed a self-report questionnaire. Remarkably, both internal and external procedures elicited target emotions selectively, compared with nontarget emotions. When contrasting the intensity of target emotions, both techniques showed no significant differences, with the exception of Joy, which was more intensely elicited by the internal procedure. Importantly, when considering the overall level of intensity, it was always greater in the internal procedure, for each stimulus. A more detailed investigation of the data suggest that recalling personal events (a type of internal procedure) generates more negative and mixed blends of emotions, which might account for the overall higher intensity of the internal mood induction.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
Assessment of Cracks in Stress Concentration Regions with Localized Plastic Zones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, E.
1998-11-25
Marty brittle fracture evaluation procedures include plasticity corrections to elastically computed stress intensity factors. These corrections, which are based on the existence of a plastic zone in the vicinity of the crack tip, can overestimate the plasticity effect for a crack embedded in a stress concentration region in which the elastically computed stress exceeds the yield strength of the material in a localized zone. The interactions between the crack, which acts to relieve the high stresses driving the crack, plasticity effects in the stress concentration region, and the nature and source of the loading are examined by formulating explicit flawmore » finite element models for a crack emanating from the root of a notch located in a panel subject to an applied tensile stress. The results of these calculations provide conditions under which a crack-tip plasticity correction based on the Irwin plastic zone size overestimates the plasticity effect. A failure assessment diagram (FAD) curve is used to characterize the effect of plasticity on the crack driving force and to define a less restrictive plasticity correction for cracks at notch roots when load-controlled boundary conditions are imposed. The explicit flaw finite element results also demonstrate that stress intensity factors associated with load-controlled boundary conditions, such as those inherent in the ASME Boiler and Pressure Vessel Code as well as in most handbooks of stress intensity factors, can be much higher than those associated with displacement-controlled conditions, such as those that produce residual or thermal stresses. Under certain conditions, the inclusion of plasticity effects for cracks loaded by displacement-controlled boundary conditions reduces the crack driving force thus justifying the elimination of a plasticity correction for such loadings. The results of this study form the basis for removing unnecessary conservatism from flaw evaluation procedures that utilize plasticity corrections.« less
NASA Astrophysics Data System (ADS)
Diana, Michele
2016-03-01
Pre-anastomotic bowel perfusion is a key factor for a successful healing process. Clinical judgment has limited accuracy to evaluate intestinal microperfusion. Fluorescence videography is a promising tool for image-guided intraoperative assessment of the bowel perfusion at the future anastomotic site in the setting of minimally invasive procedures. The standard configuration for fluorescence videography includes a Near-Infrared endoscope able to detect the signal emitted by a fluorescent dye, more frequently Indocyanine Green (ICG), which is administered by intravenous injection. Fluorescence intensity is proportional to the amount of fluorescent dye diffusing in the tissue and consequently is a surrogate marker of tissue perfusion. However, fluorescence intensity alone remains a subjective approach and an integrated computer-based analysis of the over-time evolution of the fluorescence signal is required to obtain quantitative data. We have developed a solution integrating computer-based analysis for intra-operative evaluation of the optimal resection site, based on the bowel perfusion as determined by the dynamic fluorescence intensity. The software can generate a "virtual perfusion cartography", based on the "fluorescence time-to-peak". The virtual perfusion cartography can be overlapped onto real-time laparoscopic images to obtain the Enhanced Reality effect. We have defined this approach FLuorescence-based Enhanced Reality (FLER). This manuscript describes the stepwise development of the FLER concept.
Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy
2016-12-01
Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" instalation containing a pre-configured example.
NASA Technical Reports Server (NTRS)
Kvaternik, R. G.
1975-01-01
Two computational procedures for analyzing complex structural systems for their natural modes and frequencies of vibration are presented. Both procedures are based on a substructures methodology and both employ the finite-element stiffness method to model the constituent substructures. The first procedure is a direct method based on solving the eigenvalue problem associated with a finite-element representation of the complete structure. The second procedure is a component-mode synthesis scheme in which the vibration modes of the complete structure are synthesized from modes of substructures into which the structure is divided. The analytical basis of the methods contains a combination of features which enhance the generality of the procedures. The computational procedures exhibit a unique utilitarian character with respect to the versatility, computational convenience, and ease of computer implementation. The computational procedures were implemented in two special-purpose computer programs. The results of the application of these programs to several structural configurations are shown and comparisons are made with experiment.
Application of microarray analysis on computer cluster and cloud platforms.
Bernau, C; Boulesteix, A-L; Knaus, J
2013-01-01
Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large scale separate vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure is presented which detects and resolves these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
Patients' perceptions and responses to procedural pain: results from Thunder Project II.
Puntillo, K A; White, C; Morris, A B; Perdue, S T; Stanik-Hutt, J; Thompson, C L; Wild, L R
2001-07-01
Little is known about the painfulness of procedures commonly performed in acute and critical care settings. To describe pain associated with turning, wound drain removal, tracheal suctioning, femoral catheter removal, placement of a central venous catheter, and nonburn wound dressing change and frequency of use of analgesics during procedures. A comparative, descriptive design was used. Numeric rating scales were used to measure pain intensity and procedural distress; word lists, to measure pain quality. Data were obtained from 6201 patients: 176 younger than 18 years and 5957 adults. Mean pain intensity scores for turning and tracheal suctioning were 2.80 and 3.00, respectively (scale, 0-5), for 4- to 7-year-olds and 52.0 and 28.1 (scale, 0-100) for 8- to 12-year-olds. For adolescents, mean pain intensity scores for wound dressing change, turning, tracheal suctioning, and wound drain removal were 5 to 7 (scale, 0-10); mean procedural distress scores were 4.83 to 6.00 (scale, 0-10). In adults, mean pain intensity scores for all procedures were 2.65 to 4.93 (scale, 0-10); mean procedural distress scores were 1.89 to 3.47 (scale, 0-10). The most painful and distressing procedures were turning for adults and wound care for adolescents. Procedural pain was often described as sharp, stinging, stabbing, shooting, and awful. Less than 20% of patients received opiates before procedures. Procedural pain varies considerably and is procedure specific. Because procedures are performed so often, more individualized attention to preparation for and control of procedural pain is warranted.
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computational intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Applications in Data-Intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.
2010-04-01
This book chapter, to be published in Advances in Computers, Volume 78, in 2010 describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of the PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications providemore » timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.« less
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains
NASA Astrophysics Data System (ADS)
Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo
2016-09-01
The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction located in a plain site of Northeastern Italy. spMC is a quite complete collection of advanced methods for data inspection, besides spMC implements Markov Chain models to estimate experimental transition probabilities of categorical lithological data. Furthermore, simulation methods based on most known prediction methods (as indicator Kriging and CoKriging) were implemented in spMC package. Moreover, other more advanced methods are available for simulations, e.g. path methods and Bayesian procedures, that exploit the maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computations via the OpenMP constructs. A final analysis of this computational efficiency compares the simulation/prediction algorithms by using different numbers of CPU cores, and considering the example data set of the case study included in the package.
Automating software design system DESTA
NASA Technical Reports Server (NTRS)
Lovitsky, Vladimir A.; Pearce, Patricia D.
1992-01-01
'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.
A novel parallel architecture for local histogram equalization
NASA Astrophysics Data System (ADS)
Ohannessian, Mesrob I.; Choueiter, Ghinwa F.; Diab, Hassan
2005-07-01
Local histogram equalization is an image enhancement algorithm that has found wide application in the pre-processing stage of areas such as computer vision, pattern recognition and medical imaging. The computationally intensive nature of the procedure, however, is a main limitation when real time interactive applications are in question. This work explores the possibility of performing parallel local histogram equalization, using an array of special purpose elementary processors, through an HDL implementation that targets FPGA or ASIC platforms. A novel parallelization scheme is presented and the corresponding architecture is derived. The algorithm is reduced to pixel-level operations. Processing elements are assigned image blocks, to maintain a reasonable performance-cost ratio. To further simplify both processor and memory organizations, a bit-serial access scheme is used. A brief performance assessment is provided to illustrate and quantify the merit of the approach.
Computational approaches to standard-compliant biofilm data for reliable analysis and integration.
Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália
2012-12-01
The study of microorganism consortia, also known as biofilms, is associated to a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has become thus a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilms information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO) which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO) which characterises morphologically microorganism colonies; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized fashion manner. Furthermore, the present work also introduces a framework in assistance of biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository on colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).
Computational approaches to standard-compliant biofilm data for reliable analysis and integration.
Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália
2012-07-24
The study of microorganism consortia, also known as biofilms, is associated to a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has become thus a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilms information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO) which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO) which characterises morphologically microorganism colonies; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized fashion manner. Furthermore, the present work also introduces a framework in assistance of biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository on colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).
Situation awareness and trust in computer-based procedures in nuclear power plant operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Throneburg, E. B.; Jones, J. M.
2006-07-01
Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presentedmore » that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)« less
Decoding 2D-PAGE complex maps: relevance to proteomics.
Pietrogrande, Maria Chiara; Marchetti, Nicola; Dondi, Francesco; Righetti, Pier Giorgio
2006-03-20
This review describes two mathematical approaches useful for decoding the complex signal of 2D-PAGE maps of protein mixtures. These methods are helpful for interpreting the large amount of data of each 2D-PAGE map by extracting all the analytical information hidden therein by spot overlapping. Here the basic theory and application to 2D-PAGE maps are reviewed: the means for extracting information from the experimental data and their relevance to proteomics are discussed. One method is based on the quantitative theory of statistical model of peak overlapping (SMO) using the spot experimental data (intensity and spatial coordinates). The second method is based on the study of the 2D-autocovariance function (2D-ACVF) computed on the experimental digitised map. They are two independent methods that are able to extract equal and complementary information from the 2D-PAGE map. Both methods permit to obtain fundamental information on the sample complexity and the separation performance and to single out ordered patterns present in spot positions: the availability of two independent procedures to compute the same separation parameters is a powerful tool to estimate the reliability of the obtained results. The SMO procedure is an unique tool to quantitatively estimate the degree of spot overlapping present in the map, while the 2D-ACVF method is particularly powerful in simply singling out the presence of order in the spot position from the complexity of the whole 2D map, i.e., spot trains. The procedures were validated by extensive numerical computation on computer-generated maps describing experimental 2D-PAGE gels of protein mixtures. Their applicability to real samples was tested on reference maps obtained from literature sources. The review describes the most relevant information for proteomics: sample complexity, separation performance, overlapping extent, identification of spot trains related to post-translational modifications (PTMs).
Near Real-Time Imaging of the Galactic Plane with BATSE
NASA Technical Reports Server (NTRS)
Harmon, B. A.; Zhang, S. N.; Robinson, C. R.; Paciesas, W. S.; Barret, D.; Grindlay, J.; Bloser, P.; Monnelly, C.
1997-01-01
The discovery of new transient or persistent sources in the hard X-ray regime with the BATSE Earth occultation Technique has been limited previously to bright sources of about 200 mCrab or more. While monitoring known source locations is not a problem to a daily limiting sensitivity of about 75 mCrab, the lack of a reliable background model forces us to use more intensive computer techniques to find weak, previously unknown emission from hard X-ray/gamma sources. The combination of Radon transform imaging of the galactic plane in 10 by 10 degree fields and the Harvard/CFA-developed Image Search (CBIS) allows us to straightforwardly search the sky for candidate sources in a +/- 20 degree latitude band along the plane. This procedure has been operating routinely on a weekly basis since spring 1997. We briefly describe the procedure, then concentrate on the performance aspects of the technique and candidate source results from the search.
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
Intensity-based 2D 3D registration for lead localization in robot guided deep brain stimulation
NASA Astrophysics Data System (ADS)
Hunsche, Stefan; Sauner, Dieter; El Majdoub, Faycal; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad
2017-03-01
Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality to determine lead positioning, however, remains controversially discussed. Current approaches entail the implementation of computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D 3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D x-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot assisted surgery. Accuracy of lead localization determined from 2D 3D registration was compared to conventional 3D 3D registration in a subsequent patient study. The mean Euclidian distance of lead coordinates estimated from intensity-based 2D 3D registration versus flat-panel detector CT 3D 3D registration was 0.7 mm ± 0.2 mm. Maximum values of these distances amounted to 1.2 mm. To further investigate 2D 3D registration a simulation study was conducted, challenging two observers to visually assess artificially generated 2D 3D registration errors. 95% of deviation simulations, which were visually assessed as sufficient, had a registration error below 0.7 mm. In conclusion, 2D 3D intensity-based registration revealed high accuracy and reliability during robot guided stereotactic neurosurgery and holds great potential as a low dose, cost effective means for intraoperative lead localization.
NASA Technical Reports Server (NTRS)
Akers, James C.; Cooper, Beth A.
2004-01-01
NASA Glenn Research Center's Acoustical Testing Laboratory (ATL) provides a comprehensive array of acoustical testing services, including sound pressure level, sound intensity level, and sound-power-level testing per International Standards Organization (ISO)1 3744. Since its establishment in September 2000, the ATL has provided acoustic emission testing and noise control services for a variety of customers, particularly microgravity space flight hardware that must meet International Space Station acoustic emission requirements. The ATL consists of a 23- by 27- by 20-ft (height) convertible hemi/anechoic test chamber and a separate sound-attenuating test support enclosure. The ATL employs a personal-computer-based data acquisition system that provides up to 26 channels of simultaneous data acquisition with real-time analysis (ref. 4). Specialized diagnostic tools, including a scanning sound-intensity system, allow the ATL's technical staff to support its clients' aggressive low-noise design efforts to meet the space station's acoustic emission requirement. From its inception, the ATL has pursued the goal of developing a comprehensive ISO 17025-compliant quality program that would incorporate Glenn's existing ISO 9000 quality system policies as well as ATL-specific technical policies and procedures. In March 2003, the ATL quality program was awarded accreditation by the National Voluntary Laboratory Accreditation Program (NVLAP) for sound-power-level testing in accordance with ISO 3744. The NVLAP program is administered by the National Institutes of Standards and Technology (NIST) of the U.S. Department of Commerce and provides third-party accreditation for testing and calibration laboratories. There are currently 24 NVLAP-accredited acoustical testing laboratories in the United States. NVLAP accreditation covering one or more specific testing procedures conducted in accordance with established test standards is awarded upon successful completion of an intensive onsite assessment that includes proficiency testing and documentation review. The ATL NVLAP accreditation currently applies specifically to its ISO 3744 soundpower- level determination procedure (see the photograph) and supporting ISO 17025 quality system, although all ATL operations are conducted in accordance with its quality system. The ATL staff is currently developing additional procedures to adapt this quality system to the testing of space flight hardware in accordance with International Space Station acoustic emission requirements.<
NASA Astrophysics Data System (ADS)
Yi, Dake; Wang, TzuChiang
2018-06-01
In the paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure includes a new analytical method and high accurate finite element simulations. In the part of theoretical analysis, three-dimensional Maxwell stress functions are employed in order to derive three-dimensional crack tip fields. Based on the theoretical analysis, an equation which can describe the relationship among the three-dimensional J-integral J( z), the stress intensity factor K( z) and the tri-axial stress constraint level T z ( z) is derived first. In the part of finite element simulations, a fine mesh including 153360 elements is constructed to compute the stress field near the crack front, J( z) and T z ( z). Numerical results show that in the plane very close to the free surface, the K field solution is still valid for in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.
NASA Technical Reports Server (NTRS)
Fox, L., III (Principal Investigator); Mayer, K. E.
1980-01-01
A teaching module on image classification procedures using the VICAR computer software package was developed to optimize the training benefits for users of the VICAR programs. The field test of the module is discussed. An intensive forest land inventory strategy was developed for Humboldt County. The results indicate that LANDSAT data can be computer classified to yield site specific forest resource information with high accuracy (82%). The "Douglas-fir 80%" category was found to cover approximately 21% of the county and "Mixed Conifer 80%" covering about 13%. The "Redwood 80%" resource category, which represented dense old growth trees as well as large second growth, comprised 4.0% of the total vegetation mosaic. Furthermore, the "Brush" and "Brush-Regeneration" categories were found to be a significant part of the vegetative community, with area estimates of 9.4 and 10.0%.
Creating CAD designs and performing their subsequent analysis using opensource solutions in Python
NASA Astrophysics Data System (ADS)
Iakushkin, Oleg O.; Sedova, Olga S.
2018-01-01
The paper discusses the concept of a system that encapsulates the transition from geometry building to strength tests. The solution we propose views the engineer as a programmer who is capable of coding the procedure for working with the modeli.e., to outline the necessary transformations and create cases for boundary conditions. We propose a prototype of such system. In our work, we used: Python programming language to create the program; Jupyter framework to create a single workspace visualization; pythonOCC library to implement CAD; FeniCS library to implement FEM; GMSH and VTK utilities. The prototype is launched on a platform which is a dynamically expandable multi-tenant cloud service providing users with all computing resources on demand. However, the system may be deployed locally for prototyping or work that does not involve resource-intensive computing. To make it possible, we used containerization, isolating the system in a Docker container.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Nozza, R J
1987-06-01
Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated as though from a single-interval (yes-no) procedure, as is commonly done, as well as if from a sustained attention (vigilance) task. Discrimination performance changed significantly with increase in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only single stimulus intensity levels within this range are used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is represented best by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.
ERIC Educational Resources Information Center
Buche, Mari W.; Davis, Larry R.; Vician, Chelley
2007-01-01
Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1981-08-01
Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.
Translating research into practice. Implications of the Thunder Project II.
Thompson, C L; White, C; Wild, L R; Morris, A B; Perdue, S T; Stanik-Hutt, J; Puntillo, K A
2001-12-01
The Thunder Project II study described procedural pain in a variety of acute and critical care settings. The procedures studied were turning, tracheal suctioning, wound drain removal, nonburn wound dressing change, femoral sheath removal, and central venous catheter insertion. Turning had the highest mean pain intensity, whereas femoral sheath removal and central venous catheter insertion had the least pain intensity in adults. Nonwound dressing change had the highest pain intensity for teenagers. Pain occurred in procedures that are often repeated several times a day as well as in those that may be single events. There is a wide range of pain responses to any of these procedures; as a result, standardized and thoughtful pain, and distress assessments are warranted. Planning of care, including the use of preemptive analgesic interventions, needs to be individualized. Future studies are needed to describe patient responses to other commonly performed nursing procedures and to identify effective interventions for reducing procedural pain and distress.
48 CFR 552.216-72 - Placement of Orders.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...
48 CFR 552.216-72 - Placement of Orders.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...
48 CFR 552.216-72 - Placement of Orders.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...
48 CFR 552.216-72 - Placement of Orders.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...
48 CFR 552.216-72 - Placement of Orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...
NASA Technical Reports Server (NTRS)
Tanner, John A.
1996-01-01
A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.
NASA Technical Reports Server (NTRS)
Tanner, John A.
1996-01-01
A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical-loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.
Computer program for the computation of total sediment discharge by the modified Einstein procedure
Stevens, H.H.
1985-01-01
Two versions of a computer program to compute total sediment discharge by the modified Einstein procedure are presented. The FORTRAN 77 language version is for use on the PRIME computer, and the BASIC language version is for use on most microcomputers. The program contains built-in limitations and input-output options that closely follow the original modified Einstein procedure. Program documentation and listings of both versions of the program are included. (USGS)
Zhu, Da-Jian; Chen, Xiao-Wu; OuYang, Man-Zhao; Lu, Yan
2016-01-12
Complete mesocolic excision provides a correct anatomical plane for colon cancer surgery. However, manifestation of the surgical plane during laparoscopic complete mesocolic excision versus in computed tomography images remains to be examined. Patients who underwent laparoscopic complete mesocolic excision for right-sided colon cancer underwent an abdominal computed tomography scan. The spatial relationship of the intraoperative surgical planes were examined, and then computed tomography reconstruction methods were applied. The resulting images were analyzed. In 44 right-sided colon cancer patients, the surgical plane for laparoscopic complete mesocolic excision was found to be composed of three surgical planes that were identified by computed tomography imaging with cross-sectional multiplanar reconstruction, maximum intensity projection, and volume reconstruction. For the operations performed, the mean bleeding volume was 73±32.3 ml and the mean number of harvested lymph nodes was 22±9.7. The follow-up period ranged from 6-40 months (mean 21.2), and only two patients had distant metastases. The laparoscopic complete mesocolic excision surgical plane for right-sided colon cancer is composed of three surgical planes. When these surgical planes were identified, laparoscopic complete mesocolic excision was a safe and effective procedure for the resection of colon cancer.
Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure
ERIC Educational Resources Information Center
Rodriguez, Gabriel; Alonso, Gumersinda
2004-01-01
An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…
NASA Astrophysics Data System (ADS)
Ranjan, Srikant
2005-11-01
Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model fatigue crack growth rate at single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs KI, KII, and KIII are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, Krss, can be computed as a function of crack tip SIFs and the resolved shear stress on primary slip planes. The maximum value of Krss and DeltaKrss was found to determine the crack growth direction and the fatigue crack growth rate respectively. The fatigue crack driving force parameter, DeltaK rss, forms an important multiaxial fatigue damage parameter that can be used to predict life in superalloy components.
Simplified methods for computing total sediment discharge with the modified Einstein procedure
Colby, Bruce R.; Hubbell, David Wellington
1961-01-01
A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four homographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the homographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.
Spacecraft crew procedures from paper to computers
NASA Technical Reports Server (NTRS)
Oneal, Michael; Manahan, Meera
1991-01-01
Described here is a research project that uses human factors and computer systems knowledge to explore and help guide the design and creation of an effective Human-Computer Interface (HCI) for spacecraft crew procedures. By having a computer system behind the user interface, it is possible to have increased procedure automation, related system monitoring, and personalized annotation and help facilities. The research project includes the development of computer-based procedure system HCI prototypes and a testbed for experiments that measure the effectiveness of HCI alternatives in order to make design recommendations. The testbed will include a system for procedure authoring, editing, training, and execution. Progress on developing HCI prototypes for a middeck experiment performed on Space Shuttle Mission STS-34 and for upcoming medical experiments are discussed. The status of the experimental testbed is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a modelmore » is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.« less
Véliz, Pedro L; Berra, Esperanza M; Jorna, Ana R
2015-07-01
INTRODUCTION Medical specialties' core curricula should take into account functions to be carried out, positions to be filled and populations to be served. The functions in the professional profile for specialty training of Cuban intensive care and emergency medicine specialists do not include all the activities that they actually perform in professional practice. OBJECTIVE Define the specific functions and procedural skills required of Cuban specialists in intensive care and emergency medicine. METHODS The study was conducted from April 2011 to September 2013. A three-stage methodological strategy was designed using qualitative techniques. By purposive maximum variation sampling, 82 professionals were selected. Documentary analysis and key informant criteria were used in the first stage. Two expert groups were formed in the second stage: one used various group techniques (focus group, oral and written brainstorming) and the second used a three-round Delphi method. In the final stage, a third group of experts was questioned in semistructured in-depth interviews, and a two-round Delphi method was employed to assess priorities. RESULTS Ultimately, 78 specific functions were defined: 47 (60.3%) patient care, 16 (20.5%) managerial, 6 (7.7%) teaching, and 9 (11.5%) research. Thirty-one procedural skills were identified. The specific functions and procedural skills defined relate to the profession's requirements in clinical care of the critically ill, management of patient services, teaching and research at the specialist's different occupational levels. CONCLUSIONS The specific functions and procedural skills required of intensive care and emergency medicine specialists were precisely identified by a scientific method. This product is key to improving the quality of teaching, research, administration and patient care in this specialty in Cuba. The specific functions and procedural skills identified are theoretical, practical, methodological and social contributions to inform future curricular reform and to help intensive care specialists enhance their performance in comprehensive patient care. KEYWORDS Intensive care, urgent care, emergency medicine, continuing medical education, curriculum, diagnostic techniques and procedures, medical residency, Cuba.
NASA Technical Reports Server (NTRS)
Hetsch, J.
1983-01-01
Intensity distributions in nonoptical wave fields can be visualized and stored on photosensitive material. In the case of microwaves, temperature effects can be utilized with the aid of liquid crystals to visualize intensity distributions. Particular advantages for the study of intensity distributions in microwave fields presents a scanning procedure in which a microcomputer is employed for the control of a probe and the storage of the measured data. The present investigation is concerned with the employment of such a scanning procedure for the recording and the reproduction of microwave holograms. The scanning procedure makes use of an approach discussed by Farhat, et al. (1973). An eight-bit microprocessor with 64 kBytes of RAM is employed together with a diskette storage system.
NASA Astrophysics Data System (ADS)
Bismuth, Vincent; Vancamberg, Laurence; Gorges, Sébastien
2009-02-01
During interventional radiology procedures, guide-wires are usually inserted into the patients vascular tree for diagnosis or healing purpose. These procedures are monitored with an Xray interventional system providing images of the interventional devices navigating through the patient's body. The automatic detection of such tools by image processing means has gained maturity over the past years and enables applications ranging from image enhancement to multimodal image fusion. Sophisticated detection methods are emerging, which rely on a variety of device enhancement techniques. In this article we reviewed and classified these techniques into three families. We chose a state of the art approach in each of them and built a rigorous framework to compare their detection capability and their computational complexity. Through simulations and the intensive use of ROC curves we demonstrated that the Hessian based methods are the most robust to strong curvature of the devices and that the family of rotated filters technique is the most suited for detecting low CNR and low curvature devices. The steerable filter approach demonstrated less interesting detection capabilities and appears to be the most expensive one to compute. Finally we demonstrated the interest of automatic guide-wire detection on a clinical topic: the compensation of respiratory motion in multimodal image fusion.
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2014-01-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
Practical implementation of tetrahedral mesh reconstruction in emission tomography
NASA Astrophysics Data System (ADS)
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2013-05-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise.
Data intensive computing at Sandia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew T.
2010-09-01
Data-Intensive Computing is parallel computing where you design your algorithms and your software around efficient access and traversal of a data set; where hardware requirements are dictated by data size as much as by desired run times usually distilling compact results from massive data.
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction
Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.
2012-01-01
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations
NASA Technical Reports Server (NTRS)
Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.
2006-01-01
Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
A comparative study of serial and parallel aeroelastic computations of wings
NASA Technical Reports Server (NTRS)
Byun, Chansup; Guruswamy, Guru P.
1994-01-01
A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.
NASA Astrophysics Data System (ADS)
Cubillas, J. E.; Japitana, M.
2016-06-01
This study demonstrates the application of CIELAB, Color intensity, and One Dimensional Scalar Constancy as features for image recognition and classifying benthic habitats in an image with the coastal areas of Hinatuan, Surigao Del Sur, Philippines as the study area. The study area is composed of four datasets, namely: (a) Blk66L005, (b) Blk66L021, (c) Blk66L024, and (d) Blk66L0114. SVM optimization was performed in Matlab® software with the help of Parallel Computing Toolbox to hasten the SVM computing speed. The image used for collecting samples for SVM procedure was Blk66L0114 in which a total of 134,516 sample objects of mangrove, possible coral existence with rocks, sand, sea, fish pens and sea grasses were collected and processed. The collected samples were then used as training sets for the supervised learning algorithm and for the creation of class definitions. The learned hyper-planes separating one class from another in the multi-dimensional feature space can be thought of as a super feature which will then be used in developing the C (classifier) rule set in eCognition® software. The classification results of the sampling site yielded an accuracy of 98.85% which confirms the reliability of remote sensing techniques and analysis employed to orthophotos like the CIELAB, Color Intensity and One dimensional scalar constancy and the use of SVM classification algorithm in classifying benthic habitats.
NASA Technical Reports Server (NTRS)
Stetson, Howard K.; Frank, Jeremy; Cornelius, Randy; Haddock, Angie; Wang, Lui; Garner, Larry
2015-01-01
NASA is investigating a range of future human spaceflight missions, including both Mars-distance and Near Earth Object (NEO) targets. Of significant importance for these missions is the balance between crew autonomy and vehicle automation. As distance from Earth results in increasing communication delays, future crews need both the capability and authority to independently make decisions. However, small crews cannot take on all functions performed by ground today, and so vehicles must be more automated to reduce the crew workload for such missions. NASA's Advanced Exploration Systems Program funded Autonomous Mission Operations (AMO) project conducted an autonomous command and control experiment on-board the International Space Station that demonstrated single action intelligent procedures for crew command and control. The target problem was to enable crew initialization of a facility class rack with power and thermal interfaces, and involving core and payload command and telemetry processing, without support from ground controllers. This autonomous operations capability is enabling in scenarios such as initialization of a medical facility to respond to a crew medical emergency, and representative of other spacecraft autonomy challenges. The experiment was conducted using the Expedite the Processing of Experiments for Space Station (EXPRESS) rack 7, which was located in the Port 2 location within the U.S Laboratory onboard the International Space Station (ISS). Activation and deactivation of this facility is time consuming and operationally intensive, requiring coordination of three flight control positions, 47 nominal steps, 57 commands, 276 telemetry checks, and coordination of multiple ISS systems (both core and payload). Utilization of Draper Laboratory's Timeliner software, deployed on-board the ISS within the Command and Control (C&C) computers and the Payload computers, allowed development of the automated procedures specific to ISS without having to certify and employ novel software for procedure development and execution. The procedures contained the ground procedure logic and actions as possible to include fault detection and recovery capabilities.
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer supported DNA analysis is still an intensive time-consuming task. In this paper we explore the potential of a new In-Memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both, HANA and the free database system MySQL, to compare execution time and memory management. To ensure that the results are comparable, MySQL has been running in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures, containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL which means, that there is a high potential within the new In-Memory concepts, leading to further developments of DNA analysis procedures in the future.
2007-01-01
Objective To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design Cross sectional comparison of four computer games. Setting Research laboratories. Participants Six boys and five girls aged 13-15 years. Procedure Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Main outcome measure Predicted energy expenditure, compared using repeated measures analysis of variance. Results Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Conclusions Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children. PMID:18156227
Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T
2007-12-22
To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Cross sectional comparison of four computer games. Research laboratories. Six boys and five girls aged 13-15 years. Procedure Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure, compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly can not be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
[Indication for antimycotic therapy for tracheobronchial candidosis under artificial ventilation].
Grossherr, M; Sedemund-Adib, Beate; Klotz, K-F
2005-01-01
Tracheobronchial candidosis is an impetuous complication in intensive care medicine. This article presents a concept to compare diagnostic procedure, Candida species and resistant species of different intensive care units with each other. This concept should encourage bench marking between similar intensive care units. The report and retrospective analysis of the intensive care course offer the opportunity to reflect own decisions and to adjust them to the current therapy strategies. Both procedures should improve the antimycotic therapy for intensive care units and should avoid the occurrence of resistant species. Candida species are often detected in the respiratory system of ventilated patients in intensive care, but this alone is no indication for antimycotic therapy. A strict retention is recommended, but this retention is diminished by an unclear infection, critical situation of the patient in the case of multiple organ failure, additional infection and long term ventilation. A therapy strategy for individual situations should be established and a close diagnostic procedure should be performed. A positive blood culture or detection of Candida species in two or more diagnostic materials indicate an early antimycotic therapy.
Simple proof of equivalence between adiabatic quantum computation and the circuit model.
Mizel, Ari; Lidar, Daniel A; Mitchell, Morgan
2007-08-17
We prove the equivalence between adiabatic quantum computation and quantum computation in the circuit model. An explicit adiabatic computation procedure is given that generates a ground state from which the answer can be extracted. The amount of time needed is evaluated by computing the gap. We show that the procedure is computationally efficient.
Effect of preparation procedures on intensity of radioautographic labeling is studied
NASA Technical Reports Server (NTRS)
Baserga, R.; Kisieleski, W. E.
1967-01-01
Effects of tissue preparation and extractive procedures on the intensity of radioautographic labeling are presented in terms of mean grain count per cell in cells labeled with tritiated precursors of proteins or nucleic acids. This information would be of interest to medical researchers and cytologists.
NASA Astrophysics Data System (ADS)
Seamon, E.; Gessler, P. E.; Flathers, E.
2015-12-01
The creation and use of large amounts of data in scientific investigations has become common practice. Data collection and analysis for large scientific computing efforts are not only increasing in volume as well as number, the methods and analysis procedures are evolving toward greater complexity (Bell, 2009, Clarke, 2009, Maimon, 2010). In addition, the growth of diverse data-intensive scientific computing efforts (Soni, 2011, Turner, 2014, Wu, 2008) has demonstrated the value of supporting scientific data integration. Efforts to bridge this gap between the above perspectives have been attempted, in varying degrees, with modular scientific computing analysis regimes implemented with a modest amount of success (Perez, 2009). This constellation of effects - 1) an increasing growth in the volume and amount of data, 2) a growing data-intensive science base that has challenging needs, and 3) disparate data organization and integration efforts - has created a critical gap. Namely, systems of scientific data organization and management typically do not effectively enable integrated data collaboration or data-intensive science-based communications. Our research efforts attempt to address this gap by developing a modular technology framework for data science integration efforts - with climate variation as the focus. The intention is that this model, if successful, could be generalized to other application areas. Our research aim focused on the design and implementation of a modular, deployable technology architecture for data integration. Developed using aspects of R, interactive python, SciDB, THREDDS, Javascript, and varied data mining and machine learning techniques, the Modular Data Response Framework (MDRF) was implemented to explore case scenarios for bio-climatic variation as they relate to pacific northwest ecosystem regions. Our preliminary results, using historical NETCDF climate data for calibration purposes across the inland pacific northwest region (Abatzoglou, Brown, 2011), show clear ecosystems shifting over a ten-year period (2001-2011), based on multiple supervised classifier methods for bioclimatic indicators.
Salas, Rosa Ana; Pleite, Jorge
2013-01-01
We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
2014-01-01
Object There is wide regional variability in the volume of procedures performed for similar surgical patients throughout the United States. We investigated the association of the intensity of neurosurgical care (defined as the average annual number of neurosurgical procedures per capita) with mortality, length of stay (LOS), and rate of unfavorable discharge for inpatients after neurosurgical procedures. Methods We performed a retrospective cohort study involving the 202,518 patients who underwent cranial neurosurgical procedures from 2005–2010 and were registered in the National Inpatient Sample (NIS) database. Regression techniques were used to investigate the association of the average intensity of neurosurgical care with the average mortality, LOS, and rate of unfavorable discharge. Results The inpatient neurosurgical mortality, rate of unfavorable discharge, and average LOS varied significantly among several states. In a multivariate analysis male gender, coverage by Medicaid, and minority racial status were associated with increased mortality, rate of unfavorable discharge, and LOS. The opposite was true for coverage by private insurance, higher income, fewer comorbidities and small hospital size. There was no correlation of the intensity of neurosurgical care with the mortality (Pearson's ρ = −0.18, P = 0.29), rate of unfavorable discharge (Pearson's ρ = 0.08, P = 0.62), and LOS of cranial neurosurgical procedures (Pearson's ρ = −0.21, P = 0.22). Conclusions We observed significant disparities in mortality, LOS, and rate of unfavorable discharge for cranial neurosurgical procedures in the United States. Increased intensity of neurosurgical care was not associated with improved outcomes. PMID:24647225
Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.
Das, Biman; Drake, Eli; Jack, John
2004-02-01
Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where under heavy gain saturation the total output approaches a constant intensity, although intensity of any mode fluctuates rapidly over the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental triavariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.
Design ATE systems for complex assemblies
NASA Astrophysics Data System (ADS)
Napier, R. S.; Flammer, G. H.; Moser, S. A.
1983-06-01
The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is immune to any need to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.
NASA Technical Reports Server (NTRS)
Jemian, Wartan A.
1986-01-01
Weld radiograph enigmas are features observed on X-ray radiographs of welds. Some of these features resemble indications of weld defects, although their origin is different. Since they are not understood, they are a source of concern. There is a need to identify their causes and especially to measure their effect on weld mechanical properties. A method is proposed whereby the enigmas can be evaluated and rated, in relation to the full spectrum of weld radiograph indications. Thie method involves a signature and a magnitude that can be used as a quantitive parameter. The signature is generated as the diference between the microdensitometer trace across the radiograph and the computed film intensity derived from a thickness scan along the corresponding region of the sample. The magnitude is the measured difference in intensity between the peak and base line values of the signature. The procedure is demonstated by comparing traces across radiographs of a weld sample before and after the introduction of a hole and by a system based on a MacIntosh mouse used for surface profiling.
Fougère, S; Beydon, L; Saulnier, F
2008-10-01
Medical devices are known to carry risks from design to scrap. Accident reports in ICU show that medical device account for only 20% of accidents. Formation of users and providing a postmarketing incident reporting are thus essential in health institutions. Clinical and engineering departments should cooperate to produce and secure procedures which should be applied during the lifetime of each clinical device. Several points should be especially fulfilled: close cooperation between clinical departments and biomedical engineering departments with available technicians, computer-based inventory of all devices, evaluation of specifications required before purchasing a new device, education of users on utilisation and maintenance, technical follow-up of devices and keeping maintenance and repair logs, ability to provide users with replacement devices, provision of check-lists before use, forging criteria to decide when device should be discarded. These principles are simple and should be considered as mandatory in order to improve medical device related security.
48 CFR 227.7203-11 - Contractor procedures and records.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...
48 CFR 227.7203-11 - Contractor procedures and records.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...
48 CFR 227.7203-11 - Contractor procedures and records.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...
48 CFR 227.7203-11 - Contractor procedures and records.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...
48 CFR 227.7203-11 - Contractor procedures and records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...
Practical pulse engineering: Gradient ascent without matrix exponentiation
NASA Astrophysics Data System (ADS)
Bhole, Gaurav; Jones, Jonathan A.
2018-06-01
Since 2005, there has been a huge growth in the use of engineered control pulses to perform desired quantum operations in systems such as nuclear magnetic resonance quantum information processors. These approaches, which build on the original gradient ascent pulse engineering algorithm, remain computationally intensive because of the need to calculate matrix exponentials for each time step in the control pulse. In this study, we discuss how the propagators for each time step can be approximated using the Trotter-Suzuki formula, and a further speedup achieved by avoiding unnecessary operations. The resulting procedure can provide substantial speed gain with negligible costs in the propagator error, providing a more practical approach to pulse engineering.
Telemedicine in Anesthesiology and Reanimatology
Tafro, Lejla; Masic, Izet
2010-01-01
Review SUMMARY In recent years impressive progress is happening in information and telecommunication technologies. The application of computers in medicine allows permanent data storage, data transfer from one place to another, retrieving and data processing, data availability at all times, monitoring of patients over time, etc. This can significantly improve the medical profession. Medicine is one of the most intensive users of all types of information and telecommunication technology. Quickly and reliably store and transfer data (text, images, sounds, etc.) provides significant assistance and improvement in almost all medical procedures. In addition, data in locations far from medical centers can be of invaluable benefit, especially in emergency cases in which the decisive role has anesthesiologists. PMID:24222933
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
Salat, Anna; Devoto, Walter; Manauta, Jordi
2011-01-01
Achieving similar features to those on natural teeth is a common problem with esthetic restorations. Color matching is a fundamental procedure required to perform a predictable composite resin restoration. It is no longer enough to measure these criteria with conventional shade guides, which provide the hue and chroma, but do not take into account other dimensions of the tooth such as value, intensives, opalescence and characterizations. The present article presents a simple and effective technique for color selection using a digital photograph of the tooth and an image-editing program such as Adobe Photoshop or Picture Project. The digital editing of the photograph with two simple steps described in this paper reveals the internal structures of the tooth easily. The modified photographs highlight the opalescence, white spots, shape of the internal mammelons and other features that are not visible at first glance. This procedure provides an accurate color chart with which the clinician can begin an esthetic restoration process.
Treetrimmer: a method for phylogenetic dataset size reduction.
Maruyama, Shinichiro; Eveleigh, Robert J M; Archibald, John M
2013-04-12
With rapid advances in genome sequencing and bioinformatics, it is now possible to generate phylogenetic trees containing thousands of operational taxonomic units (OTUs) from a wide range of organisms. However, use of rigorous tree-building methods on such large datasets is prohibitive and manual 'pruning' of sequence alignments is time consuming and raises concerns over reproducibility. There is a need for bioinformatic tools with which to objectively carry out such pruning procedures. Here we present 'TreeTrimmer', a bioinformatics procedure that removes unnecessary redundancy in large phylogenetic datasets, alleviating the size effect on more rigorous downstream analyses. The method identifies and removes user-defined 'redundant' sequences, e.g., orthologous sequences from closely related organisms and 'recently' evolved lineage-specific paralogs. Representative OTUs are retained for more rigorous re-analysis. TreeTrimmer reduces the OTU density of phylogenetic trees without sacrificing taxonomic diversity while retaining the original tree topology, thereby speeding up downstream computer-intensive analyses, e.g., Bayesian and maximum likelihood tree reconstructions, in a reproducible fashion.
Performance of ground attitude determination procedures for HEAO-1
NASA Technical Reports Server (NTRS)
Fallon, L., III; Sturch, C. R.
1978-01-01
Ground attitude support for HEAO 1 provided at GSFC by the HEAO 1 Attitude Ground Support System (AGSS) is described. Information telemetered from Sun sensors, gyroscopes, star trackers, and an onboard computer are used by the AGSS to compute updates to the onboard attitude reference and gyro calibration parameters. The onboard computer utilizes these updates in providing continuous attitudes (accurate to 0.25degree) for use in the observatory's attitude control procedures. The relationship between HEAO 1 onboard and ground processing, the procedures used by the AGSS in computing attitude and gyro calibration updates, and the performance of these procedures in the HEAO 1 postlaunch environment is discussed.
Safe paediatric intensive care. Part 1: Does more medical care lead to improved outcome?
Frey, Bernhard; Argent, Andrew
2004-06-01
Neonatal and paediatric intensive care has improved the prognosis for seriously sick infants and children. This has happened because of a pragmatic approach focused on stabilisation of vital functions and immense technological advances in diagnostic and therapeutic procedures. However, the belief that more medical care must inevitably lead to improved health is increasingly being questioned. This issue is especially relevant in developing countries where the introduction of highly specialised paediatric intensive care may not lead to an overall fall in child mortality. Even in developed countries, the complexity and availability of therapeutics and invasive procedures may put seriously ill children at additional risk. In both developing and industrialised countries the use of safe and simple procedures for appropriate periods, particular attention to drug prescription patterns and selection of appropriate aims and modes of therapy, including non-invasive methods, may minimise the risks of paediatric intensive care.
Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn
2009-01-01
In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Bernhard Christian, E-mail: bernhard.meyer@charite.de; Frericks, Bernd Benedikt; Albrecht, Thomas
2007-07-15
C-Arm cone-beam computed tomography (CACT), is a relatively new technique that uses data acquired with a flat-panel detector C-arm angiography system during an interventional procedure to reconstruct CT-like images. The purpose of this Technical Note is to present the technique, feasibility, and added value of CACT in five patients who underwent abdominal transarterial chemoembolization procedures. Target organs for the chemoembolizations were kidney, liver, and pancreas and a liposarcoma infiltrating the duodenum. The time for patient positioning, C-arm and system preparation, CACT raw data acquisition, and data reconstruction for a single CACT study ranged from 6 to 12 min. The volumemore » data set produced by the workstation was interactively reformatted using maximum intensity projections and multiplanar reconstructions. As part of an angiography system CACT provided essential information on vascular anatomy, therapy endpoints, and immediate follow-up during and immediately after the abdominal interventions without patient transfer. The quality of CACT images was sufficient to influence the course of treatment. This technology has the potential to expedite any interventional procedure that requires three-dimensional information and navigation.« less
Fermilab computing at the Intensity Frontier
Group, Craig; Fuess, S.; Gutsche, O.; ...
2015-12-23
The Intensity Frontier refers to a diverse set of particle physics experiments using high- intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. In addition, the experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less onmore » the development of tools and infrastructure.« less
Gonzalez, Karla; Ulloa, Jesus G; Moreno, Gerardo; Echeverría, Oscar; Norris, Keith; Talamantes, Efrain
2017-10-23
Latinos in the U.S. are almost twice as likely to progress to End Stage Renal disease (ESRD) compared to non-Latino whites. Patients with ESRD on dialysis experience high morbidity, pre-mature mortality and receive intensive procedures at the end of life (EOL). This study explores intensive procedure preferences at the EOL in older Latino adults. Seventy-three community-dwelling Spanish- and English-Speaking Latinos over the age of 60 with and without ESRD participated in this study. Those without ESRD (n = 47) participated in one of five focus group sessions, and those with ESRD on dialysis (n = 26) participated in one-on-one semi-structured interviews. Focus group and individual participants answered questions regarding intensive procedures at the EOL. Recurring themes were identified using standard qualitative content-analysis methods. Participants also completed a brief survey that included demographics, language preference, health insurance coverage, co-morbidities, Emergency Department visits and functional limitations. The majority of participants were of Mexican origin with mean age of 70, and there were more female participants in the non-ESRD group, compared to the ESRD dialysis dependent group. The dialysis group reported a higher number of co-morbidities and functional limitations. Nearly 69% of those in the dialysis group reported one or more emergency department visits in the past year, compared to 38% in the non-ESRD group. Primary themes centered on 1) The acceptability of a "natural" versus "invasive" procedure 2) Cultural traditions and family involvement 3) Level of trust in physicians and autonomy in decision-making. Our results highlight the need for improved patient- and family-centered approaches to better understand intensive procedure preferences at the EOL in this underserved population of older adults.
Automatic detection of spiculation of pulmonary nodules in computed tomography images
NASA Astrophysics Data System (ADS)
Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; W. Wille, M. M.; Prokop, M.; van Ginneken, B.
2015-03-01
We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose Computed Tomography (CT) images. Spiculation is considered as one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, lung cancer screening scenario would benefit from the presence of a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules mainly differ from non-spiculated ones in their morphology. In order to discriminate the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal, where the Fourier transform is applied, obtaining a spectrum. A library of spectra is created by clustering data via unsupervised learning. The centroids of the clusters are used to label back each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels along all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D descriptors of morphology in the automatic assessment of spiculation.
Mapping the continuous reciprocal space intensity distribution of X-ray serial crystallography.
Yefanov, Oleksandr; Gati, Cornelius; Bourenkov, Gleb; Kirian, Richard A; White, Thomas A; Spence, John C H; Chapman, Henry N; Barty, Anton
2014-07-17
Serial crystallography using X-ray free-electron lasers enables the collection of tens of thousands of measurements from an equal number of individual crystals, each of which can be smaller than 1 µm in size. This manuscript describes an alternative way of handling diffraction data recorded by serial femtosecond crystallography, by mapping the diffracted intensities into three-dimensional reciprocal space rather than integrating each image in two dimensions as in the classical approach. We call this procedure 'three-dimensional merging'. This procedure retains information about asymmetry in Bragg peaks and diffracted intensities between Bragg spots. This intensity distribution can be used to extract reflection intensities for structure determination and opens up novel avenues for post-refinement, while observed intensity between Bragg peaks and peak asymmetry are of potential use in novel direct phasing strategies.
Three-Dimensional Analysis and Modeling of a Wankel Engine
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1991-01-01
A new computer code, AGNI-3D, has been developed for the modeling of combustion, spray, and flow properties in a stratified-charge rotary engine (SCRE). The mathematical and numerical details of the new code are described by the first author in a separate NASA publication. The solution procedure is based on an Eulerian-Lagrangian approach where the unsteady, three-dimensional Navier-Stokes equations for a perfect gas-mixture with variable properties are solved in generalized, Eulerian coordinates on a moving grid by making use of an implicit finite-volume, Steger-Warming flux vector splitting scheme. The liquid-phase equations are solved in Lagrangian coordinates. The engine configuration studied was similar to existing rotary engine flow-visualization and hot-firing test rigs. The results of limited test cases indicate a good degree of qualitative agreement between the predicted and measured pressures. It is conjectured that the impulsive nature of the torque generated by the observed pressure nonuniformity may be one of the mechanisms responsible for the excessive wear of the timing gears observed during the early stages of the rotary combustion engine (RCE) development. It was identified that the turbulence intensities near top-dead-center were dominated by the compression process and only slightly influenced by the intake and exhaust processes. Slow mixing resulting from small turbulence intensities within the rotor pocket and also from a lack of formation of any significant recirculation regions within the rotor pocket were identified as the major factors leading to incomplete combustion. Detailed flowfield results during exhaust and intake, fuel injection, fuel vaporization, combustion, mixing and expansion processes are also presented. The numerical procedure is very efficient as it takes 7 to 10 CPU hours on a CRAY Y-MP for one entire engine cycle when the computations are performed over a 31 x16 x 20 grid.
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared tomore » previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.« less
47 CFR 1.2202 - Competitive bidding design options.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...
NASA Astrophysics Data System (ADS)
Ganimedov, V. L.; Papaeva, E. O.; Maslov, N. A.; Larionov, P. M.
2017-09-01
Development of cell-mediated scaffold technologies for the treatment of critical bone defects is very important for the purpose of reparative bone regeneration. Today the properties of the bioreactor for cell-seeded scaffold cultivation are the subject of intensive research. We used the mathematical modeling of rotational reactor and construct computational algorithm with the help of ANSYS software package to develop this new procedure. The solution obtained with the help of the constructed computational algorithm is in good agreement with the analytical solution of Couette for the task of two coaxial cylinders. The series of flow computations for different rotation frequencies (1, 0.75, 0.5, 0.33, 1.125 Hz) was performed for the laminar flow regime approximation with the help of computational algorithm. It was found that Taylor vortices appear in the annular gap between the cylinders in a simulated bioreactor. It was obtained that shear stress in the range of interest (0.002-0.1 Pa) arise on outer surface of inner cylinder when it rotates with the frequency not exceeding 0.8 Hz. So the constructed mathematical model and the created computational algorithm for calculating the flow parameters allow predicting the shear stress and pressure values depending on the rotation frequency and geometric parameters, as well as optimizing the operating mode of the bioreactor.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software potentially in combination with efficient computational resources to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scan, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented interpolation method of Alderman, Solum, and Grant, as well as recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher precision gradients in combination with the efficient optimization algorithm known as limited memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON is thus reflecting current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on the representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Development of a General Form CO 2 and Brine Flux Input Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansoor, K.; Sun, Y.; Carroll, S.
2014-08-01
The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO 2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probemore » variability in key parameters. This report presents the procedures used to develop a generalized model for CO 2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.« less
Image-guided transnasal cryoablation of a recurrent nasal adenocarcinoma in a dog.
Murphy, S M; Lawrence, J A; Schmiedt, C W; Davis, K W; Lee, F T; Forrest, L J; Bjorling, D E
2011-06-01
An eight-year-old female spayed Airedale terrier with rapid recurrence of a nasal adenocarcinoma following image-guided intensity-modulated radiation therapy was treated with transnasal, image-guided cryotherapy. Ice ball size and location were monitored real-time with computed tomography-fluoroscopy to verify that the entire tumour was enveloped in ice. Serial computed tomography scans demonstrated reduction in and subsequent resolution of the primary tumour volume corresponding visually with the ice ball imaged during the ablation procedure. Re-imaging demonstrated focallysis of the cribriform plate following ablation that spontaneously resolved by 13 months. While mild chronic nasal discharge developed following cryoablation, no other clinical signs of local nasal neoplasia were present. Twenty-one months after nasal tumour cryoablation the dog was euthanased as a result of acute haemoabdomen. Image-guided cryotherapy may warrant further investigation for the management of focal residual or recurrent tumours in dogs, especially in regions where critical structures preclude surgical intervention. © 2011 British Small Animal Veterinary Association.
Li, Jia; Lam, Edmund Y
2014-04-21
Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
Girard, Erin E; Al-Ahmad, Amin A; Rosenberg, Jarrett; Luong, Richard; Moore, Teri; Lauritsch, Günter; Boese, Jan; Fahrig, Rebecca
2011-01-01
Objectives The purpose of this study was to evaluate use of cardiac C-arm computed tomography (CT) in the assessment of the dimensions and temporal characteristics of radiofrequency ablation (RFA) lesions. This imaging modality uses a standard C-arm fluoroscopy system rotating around the patient, providing CT-like images during the RFA procedure. Background Both magnetic resonance imaging (MRI) and CT can be used to assess myocardial necrotic tissue. Several studies have reported visualizing cardiac RF ablation lesions with MRI, however obtaining MR images during interventional procedures is not common practice. Direct visualization of RFA lesions using C-arm CT during the procedure may improve outcomes and circumvent complications associated with cardiac ablation procedures. Methods RFA lesions were created on the endocardial surface of the left ventricle of 9 swine using a 7-F RF ablation catheter. An ECG-gated C-arm CT imaging protocol was used to acquire projection images during iodine contrast injection and following the injection every 5 min for up to 30 min, with no additional contrast. Reconstructed images were analyzed offline. The mean and standard deviation of the signal intensity of the lesion and normal myocardium were measured in all images in each time series. Lesion dimensions and area were measured and compared in pathologic specimens and C-arm CT images. Results All ablation lesions (n=29) were visualized and lesion dimensions, as measured on C-arm CT, correlated well with postmortem tissue measurements (1D dimensions : concordance correlation = 0.87; area : concordance correlation = 0.90). Lesions were visualized as a perfusion defect on first-pass C-arm CT images with a signal intensity 95 HU lower than normal myocardium (95% confidence interval: -111 to -79 HU). Images acquired at 1 and 5 minutes exhibited an enhancing ring surrounding the perfusion defect in 24 (83%) lesions. Conclusions RFA lesion size, including transmurality, can be assessed using ECG-gated cardiac C-arm CT in the interventional suite. Visualization of RFA lesions using cardiac C-arm CT may facilitate the assessment of adequate lesion delivery and provide valuable feedback during cardiac ablation procedures. PMID:21414574
12 CFR 1209.17 - Time computations.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Time computations. 1209.17 Section 1209.17... PROCEDURE Rules of Practice and Procedure § 1209.17 Time computations. (a) General rule. In computing any period of time prescribed or allowed under this part, the date of the act or event that commences the...
12 CFR 1209.17 - Time computations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Time computations. 1209.17 Section 1209.17... PROCEDURE Rules of Practice and Procedure § 1209.17 Time computations. (a) General rule. In computing any period of time prescribed or allowed under this part, the date of the act or event that commences the...
12 CFR 1209.17 - Time computations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Time computations. 1209.17 Section 1209.17... PROCEDURE Rules of Practice and Procedure § 1209.17 Time computations. (a) General rule. In computing any period of time prescribed or allowed under this part, the date of the act or event that commences the...
Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing
ERIC Educational Resources Information Center
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua
2010-01-01
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 x 10 to the minus 12th power is reasonable.
Ewers, R; Schicho, K; Undt, G; Wanschitz, F; Truppe, M; Seemann, R; Wagner, A
2005-01-01
Computer-aided surgical navigation technology is commonly used in craniomaxillofacial surgery. It offers substantial improvement regarding esthetic and functional aspects in a range of surgical procedures. Based on augmented reality principles, where the real operative site is merged with computer generated graphic information, computer-aided navigation systems were employed, among other procedures, in dental implantology, arthroscopy of the temporomandibular joint, osteotomies, distraction osteogenesis, image guided biopsies and removals of foreign bodies. The decision to perform a procedure with or without computer-aided intraoperative navigation depends on the expected benefit to the procedure as well as on the technical expenditure necessary to achieve that goal. This paper comprises the experience gained in 12 years of research, development and routine clinical application. One hundred and fifty-eight operations with successful application of surgical navigation technology--divided into five groups--are evaluated regarding the criteria "medical benefit" and "technical expenditure" necessary to perform these procedures. Our results indicate that the medical benefit is likely to outweight the expenditure of technology with few exceptions (calvaria transplant, resection of the temporal bone, reconstruction of the orbital floor). Especially in dental implantology, specialized software reduces time and additional costs necessary to plan and perform procedures with computer-aided surgical navigation.
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958 Distance computation. The method...
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958 Distance computation. The method...
Wu, L C; D'Amelio, F; Fox, R A; Polyakov, I; Daunton, N G
1997-06-06
The present report describes a desktop computer-based method for the quantitative assessment of the area occupied by immunoreactive terminals in close apposition to nerve cells in relation to the perimeter of the cell soma. This method is based on Fast Fourier Transform (FFT) routines incorporated in NIH-Image public domain software. Pyramidal cells of layer V of the somatosensory cortex outlined by GABA immunolabeled terminals were chosen for our analysis. A Leitz Diaplan light microscope was employed for the visualization of the sections. A Sierra Scientific Model 4030 CCD camera was used to capture the images into a Macintosh Centris 650 computer. After preprocessing, filtering was performed on the power spectrum in the frequency domain produced by the FFT operation. An inverse FFT with filter procedure was employed to restore the images to the spatial domain. Pasting of the original image to the transformed one using a Boolean logic operation called 'AND'ing produced an image with the terminals enhanced. This procedure allowed the creation of a binary image using a well-defined threshold of 128. Thus, the terminal area appears in black against a white background. This methodology provides an objective means of measurement of area by counting the total number of pixels occupied by immunoreactive terminals in light microscopic sections in which the difficulties of labeling intensity, size, shape and numerical density of terminals are avoided.
Modeling and Validation of Microwave Ablations with Internal Vaporization
Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.
2014-01-01
Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard Index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
NASA Technical Reports Server (NTRS)
Wu, L. C.; D'Amelio, F.; Fox, R. A.; Polyakov, I.; Daunton, N. G.
1997-01-01
The present report describes a desktop computer-based method for the quantitative assessment of the area occupied by immunoreactive terminals in close apposition to nerve cells in relation to the perimeter of the cell soma. This method is based on Fast Fourier Transform (FFT) routines incorporated in NIH-Image public domain software. Pyramidal cells of layer V of the somatosensory cortex outlined by GABA immunolabeled terminals were chosen for our analysis. A Leitz Diaplan light microscope was employed for the visualization of the sections. A Sierra Scientific Model 4030 CCD camera was used to capture the images into a Macintosh Centris 650 computer. After preprocessing, filtering was performed on the power spectrum in the frequency domain produced by the FFT operation. An inverse FFT with filter procedure was employed to restore the images to the spatial domain. Pasting of the original image to the transformed one using a Boolean logic operation called 'AND'ing produced an image with the terminals enhanced. This procedure allowed the creation of a binary image using a well-defined threshold of 128. Thus, the terminal area appears in black against a white background. This methodology provides an objective means of measurement of area by counting the total number of pixels occupied by immunoreactive terminals in light microscopic sections in which the difficulties of labeling intensity, size, shape and numerical density of terminals are avoided.
Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2 , Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote...structure analysis often involve computation- intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive...First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion
From cosmos to connectomes: the evolution of data-intensive science.
Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S
2014-09-17
The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.
Reis, H; Rasulev, B; Papadopoulos, M G; Leszczynski, J
2015-01-01
Fullerene and its derivatives are currently one of the most intensively investigated species in the area of nanomedicine and nanochemistry. Various unique properties of fullerenes are responsible for their wide range applications in industry, biology and medicine. A large pool of functionalized C60 and C70 fullerenes is investigated theoretically at different levels of quantum-mechanical theory. The semiempirial PM6 method, density functional theory with the B3LYP functional, and correlated ab initio MP2 method are employed to compute the optimized structures, and an array of properties for the considered species. In addition to the calculations for isolated molecules, the results of solution calculations are also reported at the DFT level, using the polarizable continuum model (PCM). Ionization potentials (IPs) and electron affinities (EAs) are computed by means of Koopmans' theorem as well as with the more accurate but computationally expensive ΔSCF method. Both procedures yield comparable values, while comparison of IPs and EAs computed with different quantum-mechanical methods shows surprisingly large differences. Harmonic vibrational frequencies are computed at the PM6 and B3LYP levels of theory and compared with each other. A possible application of the frequencies as 3D descriptors in the EVA (EigenVAlues) method is shown. All the computed data are made available, and may be used to replace experimental data in routine applications where large amounts of data are required, e.g. in structure-activity relationship studies of the toxicity of fullerene derivatives.
Influence of Race on Inpatient Treatment Intensity at the End of Life
Chang, Chung-Chou H.; Saynina, Olga; Garber, Alan M.
2007-01-01
OBJECTIVE To examine inpatient intensive care unit (ICU) and intensive procedure use by race among Medicare decedents, using utilization among survivors for comparison. DESIGN Retrospective observational analysis of inpatient claims using multivariable hierarchical logistic regression. SETTING United States, 1989–1999. PARTICIPANTS Hospitalized Medicare fee-for-service decedents (n = 976,220) and survivors (n = 845,306) aged 65 years or older. MEASUREMENTS AND MAIN RESULTS Admission to the ICU and use of one or more intensive procedures over 12 months, and, for inpatient decedents, during the terminal admission. Black decedents with one or more hospitalization in the last 12 months of life were slightly more likely than nonblacks to be admitted to the ICU during the last 12 months (49.3% vs. 47.4%, p <.0001) and the terminal hospitalization (41.9% vs. 40.6%, p < 0.0001), but these differences disappeared or attenuated in multivariable hierarchical logistic regressions (last 12 months adjusted odds ratio (AOR) 1.0 [0.99–1.03], p = .36; terminal hospitalization AOR 1.03 [1.0–1.06], p = .01). Black decedents were more likely to undergo an intensive procedure during the last 12 months (49.6% vs. 42.8%, p < .0001) and the terminal hospitalization (37.7% vs, 31.1%, p < .0001), a difference that persisted with adjustment (last 12 months AOR 1.1 [1.08–1.14], p < .0001; terminal hospitalization AOR 1.23 [1.20–1.26], p < .0001). Patterns of differences in inpatient treatment intensity by race were reversed among survivors: blacks had lower rates of ICU admission (31.2% vs. 32.4%, p < .0001; AOR 0.93 [0.91–0.95], p < .0001) and intensive procedure use (36.6% vs. 44.2%; AOR 0.72 [0.70–0.73], p <.0001). These differences were driven by greater use by blacks of life-sustaining treatments that predominate among decedents but lesser use of cardiovascular and orthopedic procedures that predominate among survivors. A hospital’s black census was a strong predictor of inpatient end-of-life treatment intensity. CONCLUSIONS Black decedents were treated more intensively during hospitalization than nonblack decedents, whereas black survivors were treated less intensively. These differences are strongly associated with a hospital’s black census. The causes and consequences of these hospital-level differences in intensity deserve further study. PMID:17356965
ERIC Educational Resources Information Center
Alalo, Fadeelah Mansour Ahmed; Ahmad, Awatef El Sayed; El Sayed, Hoda Mohamed Nafee
2016-01-01
Venipuncture and other invasive procedures as blood draws, intramuscular injections or heel pricks are the most commonly performed painful procedures in children. These can be a terrifying and painful experience for children and their families. The present study aimed to identify Pain intensity after an ice pack application prior to venipuncture…
NASA Astrophysics Data System (ADS)
Candela, A.; Brigandì, G.; Aronica, G. T.
2014-07-01
In this paper a procedure to derive synthetic flood design hydrographs (SFDH) using a bivariate representation of rainfall forcing (rainfall duration and intensity) via copulas, which describes and models the correlation between two variables independently of the marginal laws involved, coupled with a distributed rainfall-runoff model, is presented. Rainfall-runoff modelling (R-R modelling) for estimating the hydrological response at the outlet of a catchment was performed by using a conceptual fully distributed procedure based on the Soil Conservation Service - Curve Number method as an excess rainfall model and on a distributed unit hydrograph with climatic dependencies for the flow routing. Travel time computation, based on the distributed unit hydrograph definition, was performed by implementing a procedure based on flow paths, determined from a digital elevation model (DEM) and roughness parameters obtained from distributed geographical information. In order to estimate the primary return period of the SFDH, which provides the probability of occurrence of a hydrograph flood, peaks and flow volumes obtained through R-R modelling were treated statistically using copulas. Finally, the shapes of hydrographs have been generated on the basis of historically significant flood events, via cluster analysis. An application of the procedure described above has been carried out and results presented for the case study of the Imera catchment in Sicily, Italy.
Boundary condition computational procedures for inviscid, supersonic steady flow field calculations
NASA Technical Reports Server (NTRS)
Abbett, M. J.
1971-01-01
Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculatons. Twenty five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab intio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank- k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPU and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under the climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are highly computationally demanding themselves for generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), is introduced that can efficiently provide a comprehensive assessment of global sensitivity using the Variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden, when applied to systems with extra-large number of input factors ( 100). We use a test function and a hydrological modelling case study to demonstrate the capability of VARS method in reducing problem dimensionality by identifying important vs unimportant input factors.
The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials
NASA Technical Reports Server (NTRS)
Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.
2005-01-01
The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials encoded in both the C and Python programming languages and made available a version of the FRANC3D program that incorporates this subroutine. Single crystal super alloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.
10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Procedures for the Computation of the Real Cost of Capital I Appendix I to Part 504 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS EXISTING POWERPLANTS Pt. 504, App. I Appendix I to Part 504—Procedures for the Computation of the Real Cost of Capital (a) The firm's real after-tax weighted average...
ERIC Educational Resources Information Center
Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee
2011-01-01
This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…
Zimmerman, Janice L; Sprung, Charles L
2010-04-01
To provide recommendations and standard operating procedures for intensive care unit and hospital preparations for an influenza pandemic or mass disaster with a specific focus on ensuring that adequate resources are available and appropriate protocols are developed to safely perform procedures in patients with and without influenza illness. Based on a literature review and expert opinion, a Delphi process was used to define the essential topics including performing medical procedures. Key recommendations include: (1) specify high-risk procedures (aerosol generating-procedures); (2) determine if certain procedures will not be performed during a pandemic; (3) develop protocols for safe performance of high-risk procedures that include appropriateness, qualifications of personnel, site, personal protection equipment, safe technique and equipment needs; (4) ensure adequate training of personnel in high-risk procedures; (5) procedures should be performed at the bedside whenever possible; (6) ensure safe respiratory therapy practices to avoid aerosols; (7) provide safe respiratory equipment; and (8) determine criteria for cancelling and/or altering elective procedures. Judicious planning and adoption of protocols for safe performance of medical procedures are necessary to optimize outcomes during a pandemic.
NASA Astrophysics Data System (ADS)
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
According to the requirements of real-time, reliability and safety for aerospace experiment, the single center cloud computing technology application verification platform is constructed. At the IAAS level, the feasibility of the cloud computing technology be applied to the field of aerospace experiment is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: Cloud computing platform can be applied to the aerospace experiment computing intensive business. For I/O intensive business, it is recommended to use the traditional physical machine.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4_ speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
Putzer, David; Moctezuma, Jose Luis; Nogler, Michael
2017-11-01
An increasing number of orthopaedic surgeons are using computer aided planning tools for bone removal applications. The aim of the study was to consolidate a set of generic functions to be used for a 3D computer assisted planning or simulation. A limited subset of 30 surgical procedures was analyzed and verified in 243 surgical procedures of a surgical atlas. Fourteen generic functions to be used in 3D computer assisted planning and simulations were extracted. Our results showed that the average procedure comprises 14 ± 10 (SD) steps with ten different generic planning steps and four generic bone removal steps. In conclusion, the study shows that with a limited number of 14 planning functions it is possible to perform 243 surgical procedures out of Campbell's Operative Orthopedics atlas. The results may be used as a basis for versatile generic intraoperative planning software.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-01-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
An 'adding' algorithm for the Markov chain formalism for radiation transfer
NASA Technical Reports Server (NTRS)
Esposito, L. W.
1979-01-01
An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.
Simulating Laboratory Procedures.
ERIC Educational Resources Information Center
Baker, J. E.; And Others
1986-01-01
Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…
On the Solution of the Three-Dimensional Flowfield About a Flow-Through Nacelle. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Compton, William Bernard
1985-01-01
The solution of the three dimensional flow field for a flow through nacelle was studied. Both inviscid and viscous inviscid interacting solutions were examined. Inviscid solutions were obtained with two different computational procedures for solving the three dimensional Euler equations. The first procedure employs an alternating direction implicit numerical algorithm, and required the development of a complete computational model for the nacelle problem. The second computational technique employs a fourth order Runge-Kutta numerical algorithm which was modified to fit the nacelle problem. Viscous effects on the flow field were evaluated with a viscous inviscid interacting computational model. This model was constructed by coupling the explicit Euler solution procedure with a flag entrainment boundary layer solution procedure in a global iteration scheme. The computational techniques were used to compute the flow field for a long duct turbofan engine nacelle at free stream Mach numbers of 0.80 and 0.94 and angles of attack of 0 and 4 deg.
Effect of conductor geometry on source localization: Implications for epilepsy studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlitt, H.; Heller, L.; Best, E.
1994-07-01
We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we mustmore » first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. One the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.« less
An Early-Warning System for Volcanic Ash Dispersal: The MAFALDA Procedure
NASA Astrophysics Data System (ADS)
Barsotti, S.; Nannipieri, L.; Neri, A.
2006-12-01
Forecasts of the dispersal of volcanic ash is a fundamental goal in order to mitigate its potential impact on urbanized areas and transport routes surrounding explosive volcanoes. To this aim we developed an early- warning procedure named MAFALDA (Modeling And Forecasting Ash Loading and Dispersal in the Atmosphere). Such tool is able to quantitatively forecast the atmospheric concentration of ash as well as the ground deposition as a function of time over a 3D spatial domain.\\The main features of MAFALDA are: (1) the use of the hybrid Lagrangian-Eulerian code VOL-CALPUFF able to describe both the rising column phase and the atmospheric dispersal as a function of weather conditions, (2) the use of high-resolution weather forecasting data, (3) the short execution time that allows to analyse a set of scenarios and (4) the web-based CGI software application (written in Perl programming language) that shows the results in a standard graphical web interface and makes it suitable as an early-warning system during volcanic crises.\\MAFALDA is composed by a computational part that simulates the ash cloud dynamics and a graphical interface for visualizing the modelling results. The computational part includes the codes for elaborating the meteorological data, the dispersal code and the post-processing programs. These produces hourly 2D maps of aerial ash concentration at several vertical levels, extension of "threat" area on air and 2D maps of ash deposit on the ground, in addition to graphs of hourly variations of column height.\\The processed results are available on the web by the graphical interface and the users can choose, by drop-down menu, which data to visualize. \\A first partial application of the procedure has been carried out for Mt. Etna (Italy). In this case, the procedure simulates four volcanological scenarios characterized by different plume intensities and uses 48-hrs weather forecasting data with a resolution of 7 km provided by the Italian Air Force.
40 CFR 86.158-00 - Supplemental Federal Test Procedures; overview.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Procedures (SFTP). These test procedures consist of two separable test elements: A sequence of vehicle.../pound of dry air (approximately 40 percent relative humidity), and a solar heat load intensity of 850 W...
40 CFR 86.158-00 - Supplemental Federal Test Procedures; overview.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Procedures (SFTP). These test procedures consist of two separable test elements: A sequence of vehicle.../pound of dry air (approximately 40 percent relative humidity), and a solar heat load intensity of 850 W...
40 CFR 86.158-00 - Supplemental Federal Test Procedures; overview.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Procedures (SFTP). These test procedures consist of two separable test elements: A sequence of vehicle.../pound of dry air (approximately 40 percent relative humidity), and a solar heat load intensity of 850 W...
40 CFR 86.158-00 - Supplemental Federal Test Procedures; overview.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Procedures (SFTP). These test procedures consist of two separable test elements: A sequence of vehicle.../pound of dry air (approximately 40 percent relative humidity), and a solar heat load intensity of 850 W...
Analysis of intensity variability in multislice and cone beam computed tomography.
Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde
2011-08-01
The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.
Fracture toughness of brittle materials determined with chevron notch specimens
NASA Technical Reports Server (NTRS)
Shannon, J. L., Jr.; Bursey, R. T.; Munz, D.; Pierce, W. S.
1980-01-01
The use of chevron-notch specimens for determining the plane strain fracture toughness (K sub Ic) of brittle materials is discussed. Three chevron-notch specimens were investigated: short bar, short rod, and four-point-bend. The dimensionless stress intensity coefficient used in computing K sub Ic is derived for the short bar specimen from the superposition of ligament-dependent and ligament-independent solutions for the straight through crack, and also from experimental compliance calibrations. Coefficients for the four-point-bend specimen were developed by the same superposition procedure, and with additional refinement using the slice model of Bluhm. Short rod specimen stress intensity coefficients were determined only by experimental compliance calibration. Performance of the three chevron-notch specimens and their stress intensity factor relations were evaluated by tests on hot-pressed silicon nitride and sintered aluminum oxide. Results obtained with the short bar and the four-point-bend specimens on silicon nitride are in good agreement and relatively free of specimen geometry and size effects within the range investigated. Results on aluminum oxide were affected by specimen size and chevron-notch geometry, believed due to a rising crack growth resistance curve for the material. Only the results for the short bar specimen are presented in detail.
Risk factors associated with postoperative pain after ophthalmic surgery: a prospective study
Lesin, Mladen; Dzaja Lozo, Mirna; Duplancic-Sundov, Zeljka; Dzaja, Ivana; Davidovic, Nikolina; Banozic, Adriana; Puljak, Livia
2016-01-01
Background Risk factors associated with postoperative pain intensity and duration, as well as consumption of analgesics after ophthalmic surgery are poorly understood. Methods A prospective study was conducted among adults (N=226) who underwent eye surgery at the University Hospital Split, Croatia. A day before the surgery, the patients filled out questionnaires assessing personality, anxiety, pain catastrophizing, sociodemographics and were given details about the procedure, anesthesia, and analgesia for each postoperative day. All scales were previously used for the Croatian population. The intensity of pain was measured using a numerical rating scale from 0 to 10, where 0 was no pain and 10 was the worst imaginable pain. The intensity of pain was measured before the surgery and then 1 hour, 3 hours, 6 hours, and 24 hours after surgery, and then once a day until discharge from the hospital. Univariate and multivariate analyses were performed. Results A multivariate analysis indicated that independent predictors of average pain intensity after the surgery were: absence of premedication before surgery, surgery in general anesthesia, higher pain intensity before surgery and pain catastrophizing level. Independent predictors of postoperative pain duration were intensity of pain before surgery, type of anesthesia, and self-assessment of health. Independent predictors of pain intensity ≥5 during the first 6 hours after the procedure were the type of procedure, self-assessment of health, premedication, and the level of pain catastrophizing. Conclusion Awareness about independent predictors associated with average postoperative pain intensity, postoperative pain duration, and occurrence of intensive pain after surgery may help health workers to improve postoperative pain management in ophthalmic surgery. PMID:26858525
NASA Astrophysics Data System (ADS)
Caciuffo, Roberto; Esposti, Alessandra Degli; Deleuze, Michael S.; Leigh, David A.; Murphy, Aden; Paci, Barbara; Parker, Stewart F.; Zerbetto, Francesco
1998-12-01
The inelastic neutron scattering (INS) spectrum of the original benzylic amide [2]catenane is recorded and simulated by a semiempirical quantum chemical procedure coupled with the most comprehensive approach available to date, the CLIMAX program. The successful simulation of the spectrum indicates that the modified neglect of differential overlap (MNDO) model can reproduce the intramolecular vibrations of a molecular system as large as a catenane (136 atoms). Because of the computational costs involved and some numerical instabilities, a less expensive approach is attempted which involves the molecular mechanics-based calculation of the INS response in terms of the most basic formulation for the scattering activity. The encouraging results obtained validate the less computationally intensive procedure and allow its extension to the calculation of the INS spectrum for a second, theoretical, co-conformer, which, although structurally and energetically reasonable, is not, in fact, found in the solid state. The second structure was produced by a Monte Carlo simulated annealing method run in the conformational space (a procedure that would have been prohibitively expensive at the semiempirical level) and is characterized by a higher degree of intramolecular hydrogen bonding than the x-ray structure. The two alternative structures yield different simulated spectra, only one of which, the authentic one, is compatible with the experimental data. Comparison of the two simulated and experimental spectra affords the identification of an inelastic neutron scattering spectral signature of the correct hydrogen bonding motif in the region slightly above 700 cm-1. The study illustrates that combinations of simulated INS data and experimental results can be successfully used to discriminate between different proposed structures or possible hydrogen bonding motifs in large functional molecular systems.
Computer modeling of the stress-strain state of welded construction
NASA Astrophysics Data System (ADS)
Nurguzhin, Marat; Danenova, Gulmira; Akhmetzhanov, Talgat
2017-11-01
At the present time the maintenance of the welded construction serviceability over normative service life is provided by the maintenance system on the basis of the guiding documents according to the concept of "fail safe". However, technology factors relating to welding such as high residual stresses and significant plastic strains are not considered in the guiding documents. The design procedure of the stressed-strained state of welded constructions is suggested in the paper. The procedure investigates welded constructions during welding and the external load using the program ANSYS. In this paper, the model of influence of the residual stress strain state on the factor of stress intensity is proposed. The calculation method of the residual stressed-strained state (SSS) taking into account the phase transition is developed by the authors. Melting and hardening of a plate material during heating and cooling is considered. The thermomechanical problem of heating a plate by a stationary heat source is solved. The setup of the heating spot center on distance (190 mm) from the crack top in a direction of its propagation leads to the fact that the value of total factor of stress intensity will considerably decrease under action of the specified residual compressing stresses. It can lower the speed of the crack propagation to zero. The suggested method of survivability maintenance can be applied during operation with the purpose of increasing the service life of metal constructions up to running repair of technological machines.
Licht, Heather; Murray, Mark; Vassaur, John; Jupiter, Daniel C; Regner, Justin L; Chaput, Christopher D
2015-11-18
With the rise of obesity in the American population, there has been a proportionate increase of obesity in the trauma population. The purpose of this study was to use a computed tomography-based measurement of adiposity to determine if obesity is associated with an increased burden to the health-care system in patients with orthopaedic polytrauma. A prospective comprehensive trauma database at a level-I trauma center was utilized to identify 301 patients with polytrauma who had orthopaedic injuries and intensive care unit admission from 2006 to 2011. Routine thoracoabdominal computed tomographic scans allowed for measurement of the truncal adiposity volume. The truncal three-dimensional reconstruction body mass index was calculated from the computed tomography-based volumes based on a previously validated algorithm. A truncal three-dimensional reconstruction body mass index of <30 kg/m(2) denoted non-obese patients and ≥ 30 kg/m(2) denoted obese patients. The need for orthopaedic surgical procedure, in-hospital mortality, length of stay, hospital charges, and discharge disposition were compared between the two groups. Of the 301 patients, 21.6% were classified as obese (truncal three-dimensional reconstruction body mass index of ≥ 30 kg/m(2)). Higher truncal three-dimensional reconstruction body mass index was associated with longer hospital length of stay (p = 0.02), more days spent in the intensive care unit (p = 0.03), more frequent discharge to a long-term care facility (p < 0.0002), higher rate of orthopaedic surgical intervention (p < 0.01), and increased total hospital charges (p < 0.001). Computed tomographic scans, routinely obtained at the time of admission, can be utilized to calculate truncal adiposity and to investigate the impact of obesity on patients with polytrauma. Obese patients were found to have higher total hospital charges, longer hospital stays, discharge to a continuing-care facility, and a higher rate of orthopaedic surgical intervention. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
Mobile computing device configured to compute irradiance, glint, and glare of the sun
Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib
2014-03-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
Wunsch, Annabel; Philippot, Pierre; Plaghki, Léon
2003-03-01
The present experiment examined the possibility to change the sensory and/or the affective perception of thermal stimuli by an emotional associative learning procedure known to operate without participants' awareness (evaluative conditioning). In a mixed design, an aversive conditioning procedure was compared between subjects to an appetitive conditioning procedure. Both groups were also compared within-subject to a control condition (neutral conditioning). The aversive conditioning was induced by associating non-painful and painful thermal stimuli - delivered on the right forearm - with unpleasant slides. The appetitive conditioning consisted in an association between thermal stimuli - also delivered on the right forearm - and pleasant slides. The control condition consisted in an association between thermal stimuli - delivered for all participants on the left forearm - and neutral slides. The effects of the conditioning procedures on the sensory and affective dimensions were evaluated with visual analogue scale (VAS)-intensity and VAS-unpleasantness. Startle reflex was used as a physiological index of emotional valence disposition. Results confirmed that no participants were aware of the conditioning procedure. After unpleasant slides (aversive conditioning), non-painful and painful thermal stimuli were judged more intense and more unpleasant than when preceded by neutral slides (control condition) or pleasant slides (appetitive conditioning). Despite a strong correlation between the intensity and the unpleasantness scales, effects were weaker for the affective scale and, became statistically non-significant when VAS-intensity was used as covariate. This experiment shows that it is possible to modify the perception of intensity of thermal stimuli by a non-conscious learning procedure based on the transfer of the valence of the unconditioned stimuli (pleasant or unpleasant slides) towards the conditioned stimuli (non-painful and painful thermal stimuli). These results plead for a conception of pain as a conscious output of complex informational processes all of which are not accessible to participants' awareness. Mechanisms by which affective input may influence sensory experience and clinical implications of the present study are discussed.
Work intensity in sacroiliac joint fusion and lumbar microdiscectomy
Frank, Clay; Kondrashov, Dimitriy; Meyer, S Craig; Dix, Gary; Lorio, Morgan; Kovalsky, Don; Cher, Daniel
2016-01-01
Background The evidence base supporting minimally invasive sacroiliac (SI) joint fusion (SIJF) surgery is increasing. The work relative value units (RVUs) associated with minimally invasive SIJF are seemingly low. To date, only one published study describes the relative work intensity associated with minimally invasive SIJF. No study has compared work intensity vs other commonly performed spine surgery procedures. Methods Charts of 192 patients at five sites who underwent either minimally invasive SIJF (American Medical Association [AMA] CPT® code 27279) or lumbar microdiscectomy (AMA CPT® code 63030) were reviewed. Abstracted were preoperative times associated with diagnosis and patient care, intraoperative parameters including operating room (OR) in/out times and procedure start/stop times, and postoperative care requirements. Additionally, using a visual analog scale, surgeons estimated the intensity of intraoperative care, including mental, temporal, and physical demands and effort and frustration. Work was defined as operative time multiplied by task intensity. Results Patients who underwent minimally invasive SIJF were more likely female. Mean procedure times were lower in SIJF by about 27.8 minutes (P<0.0001) and mean total OR times were lower by 27.9 minutes (P<0.0001), but there was substantial overlap across procedures. Mean preservice and post-service total labor times were longer in minimally invasive SIJF (preservice times longer by 63.5 minutes [P<0.0001] and post-service labor times longer by 20.2 minutes [P<0.0001]). The number of postoperative visits was higher in minimally invasive SIJF. Mean total service time (preoperative + OR time + postoperative) was higher in the minimally invasive SIJF group (261.5 vs 211.9 minutes, P<0.0001). Intraoperative intensity levels were higher for mental, physical, effort, and frustration domains (P<0.0001 each). After taking into account intensity, intraoperative workloads showed substantial overlap. Conclusion Compared to a commonly performed lumbar spine surgical procedure, lumbar microdiscectomy, that currently has a higher work RVU, preoperative, intraoperative, and postoperative workload for minimally invasive SIJF is higher. The work RVU for minimally invasive SIJF should be adjusted upward as the relative amount of work is comparable. PMID:27555790
Williams, Eric
2004-11-15
The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.
The computational structural mechanics testbed procedures manual
NASA Technical Reports Server (NTRS)
Stewart, Caroline B. (Compiler)
1991-01-01
The purpose of this manual is to document the standard high level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure including its function, commands, data interface, and use is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis in the CSM Testbed User's Manual and the CSM Testbed Data Library Description.
ERIC Educational Resources Information Center
Reggini, Horacio C.
The first article, "LOGO and von Neumann Ideas," deals with the creation of new procedures based on procedures defined and stored in memory as LOGO lists of lists. This representation, which enables LOGO procedures to construct, modify, and run other LOGO procedures, is compared with basic computer concepts first formulated by John von…
The rid-redundant procedure in C-Prolog
NASA Technical Reports Server (NTRS)
Chen, Huo-Yan; Wah, Benjamin W.
1987-01-01
C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as similar to many search methods using backward chaining, a large number of redundant computation may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to rid all redundant computations in running multi-recursive procedures. Experimental results obtained for C-Prolog on the Vax 11/780 computer show that there is an order of magnitude improvement in the running time and solvable problem size.
ERIC Educational Resources Information Center
Chapman, Dane M.; And Others
Three critical procedural skills in emergency medicine were evaluated using three assessment modalities--written, computer, and animal model. The effects of computer practice and previous procedure experience on skill competence were also examined in an experimental sequential assessment design. Subjects were six medical students, six residents,…
An Approach to Remove the Systematic Bias from the Storm Surge forecasts in the Venice Lagoon
NASA Astrophysics Data System (ADS)
Canestrelli, A.
2017-12-01
In this work a novel approach is proposed for removing the systematic bias from the storm surge forecast computed by a two-dimensional shallow-water model. The model covers both the Adriatic and Mediterranean seas and provides the forecast at the entrance of the Venice Lagoon. The wind drag coefficient at the water-air interface is treated as a calibration parameter, with a different value for each range of wind velocities and wind directions. This sums up to a total of 16-64 parameters to be calibrated, depending on the chosen resolution. The best set of parameters is determined by means of an optimization procedure, which minimizes the RMS error between measured and modeled water level in Venice for the period 2011-2015. It is shown that a bias is present, for which the peaks of wind velocities provided by the weather forecast are largely underestimated, and that the calibration procedure removes this bias. When the calibrated model is used to reproduce events not included in the calibration dataset, the forecast error is strongly reduced, thus confirming the quality of our procedure. The proposed approach it is not site-specific and could be applied to different situations, such as storm surges caused by intense hurricanes.
Suenaga, Hideyuki; Taniguchi, Asako; Yonenaga, Kazumichi; Hoshi, Kazuto; Takato, Tsuyoshi
2016-01-01
Computer-assisted preoperative simulation surgery is employed to plan and interact with the 3D images during the orthognathic procedure. It is useful for positioning and fixation of maxilla by a plate. We report a case of maxillary retrusion by a bilateral cleft lip and palate, in which a 2-stage orthognathic procedure (maxillary advancement by distraction technique and mandibular setback surgery) was performed following a computer-assisted preoperative simulation planning to achieve the positioning and fixation of the plate. A high accuracy was achieved in the present case. A 21-year-old male patient presented to our department with a complaint of maxillary retrusion following bilateral cleft lip and palate. Computer-assisted preoperative simulation with 2-stage orthognathic procedure using distraction technique and mandibular setback surgery was planned. The preoperative planning of the procedure resulted in good aesthetic outcomes. The error of the maxillary position was less than 1mm. The implementation of the computer-assisted preoperative simulation for the positioning and fixation of plate in 2-stage orthognathic procedure using distraction technique and mandibular setback surgery yielded good results. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343
2008-02-27
between the PHY layer and for example a host PC computer . The PC wants to generate and receive a sequence of data packets. The PC may also want to send...the testbed is quite similar. Given the intense computational requirements of SVD and other matrix mode operations needed to support eigen spreading a...platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute intensive tasks will be
NASA Technical Reports Server (NTRS)
Anderson, O. L.
1974-01-01
A finite-difference procedure for computing the turbulent, swirling, compressible flow in axisymmetric ducts is described. Arbitrary distributions of heat and mass transfer at the boundaries can be treated, and the effects of struts, inlet guide vanes, and flow straightening vanes can be calculated. The calculation procedure is programmed in FORTRAN 4 and has operated successfully on the UNIVAC 1108, IBM 360, and CDC 6600 computers. The analysis which forms the basis of the procedure, a detailed description of the computer program, and the input/output formats are presented. The results of sample calculations performed with the computer program are compared with experimental data.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
Santolaria, Pilar; Pauciullo, Alfredo; Silvestre, Miguel A; Vicente-Fiel, Sandra; Villanova, Leyre; Pinton, Alain; Viruel, Juan; Sales, Ester; Yániz, Jesús L
2016-01-01
This study was designed to determine the ability of computer-assisted sperm morphometry analysis (CASA-Morph) with fluorescence to discriminate between spermatozoa carrying different sex chromosomes from the nuclear morphometrics generated and different statistical procedures in the bovine species. The study was divided into two experiments. The first was to study the morphometric differences between X- and Y-chromosome-bearing spermatozoa (SX and SY, respectively). Spermatozoa from eight bulls were processed to assess simultaneously the sex chromosome by FISH and sperm morphometry by fluorescence-based CASA-Morph. SX cells were larger than SY cells on average (P < 0.001) although with important differences between bulls. A simultaneous evaluation of all the measured features by discriminant analysis revealed that nuclear area and average fluorescence intensity were the variables selected by stepwise discriminant function analysis as the best discriminators between SX and SY. In the second experiment, the sperm nuclear morphometric results from CASA-Morph in nonsexed (mixed SX and SY) and sexed (SX) semen samples from four bulls were compared. FISH allowed a successful classification of spermatozoa according to their sex chromosome content. X-sexed spermatozoa displayed a larger size and fluorescence intensity than nonsexed spermatozoa (P < 0.05). We conclude that the CASA-Morph fluorescence-based method has the potential to find differences between X- and Y-chromosome-bearing spermatozoa in bovine species although more studies are needed to increase the precision of sex determination by this technique.
Safe electrode trajectory planning in SEEG via MIP-based vessel segmentation
NASA Astrophysics Data System (ADS)
Scorza, Davide; Moccia, Sara; De Luca, Giuseppe; Plaino, Lisa; Cardinale, Francesco; Mattos, Leonardo S.; Kabongo, Luis; De Momi, Elena
2017-03-01
Stereo-ElectroEncephaloGraphy (SEEG) is a surgical procedure that allows brain exploration of patients affected by focal epilepsy by placing intra-cerebral multi-lead electrodes. The electrode trajectory planning is challenging and time consuming. Various constraints have to be taken into account simultaneously, such as absence of vessels at the electrode Entry Point (EP), where bleeding is more likely to occur. In this paper, we propose a novel framework to help clinicians in defining a safe trajectory and focus our attention on EP. For each electrode, a Maximum Intensity Projection (MIP) image was obtained from Computer Tomography Angiography (CTA) slices of the brain first centimeter measured along the electrode trajectory. A Gaussian Mixture Model (GMM), modified to include neighborhood prior through Markov Random Fields (GMM-MRF), is used to robustly segment vessels and deal with the noisy nature of MIP images. Results are compared with simple GMM and manual global Thresholding (Th) by computing sensitivity, specificity, accuracy and Dice similarity index against manual segmentation performed under the supervision of an expert surgeon. In this work we present a novel framework which can be easily integrated into manual and automatic planner to help surgeon during the planning phase. GMM-MRF qualitatively showed better performance over GMM in reproducing the connected nature of brain vessels also in presence of noise and image intensity drops typical of MIP images. With respect Th, it is a completely automatic method and it is not influenced by inter-subject variability.
Alternative Computer Access for Young Handicapped Children: A Systematic Selection Procedure.
ERIC Educational Resources Information Center
Morris, Karen J.
The paper describes the type of computer access products appropriate for use by handicapped children and presents a systematic procedure for selection of such input and output devices. Modification of computer input is accomplished by three strategies: modifying the keyboard, adding alternative keyboards, and attaching switches to the keyboard.…
ERIC Educational Resources Information Center
Komsky, Susan
Fiscal Impact Budgeting Systems (FIBS) are sophisticated computer based modeling procedures used in local government organizations, whose results, however, are often overlooked or ignored by decision makers. A study attempted to discover the reasons for this situation by focusing on four factors: potential usefulness, faith in computers,…
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
Time-variant analysis of rotorcraft systems dynamics - An exploitation of vector processors
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.; Shareef, N. H.
1993-01-01
In this paper a generalized algorithmic procedure is presented for handling constraints in mechanical transmissions. The latter are treated as multibody systems of interconnected rigid/flexible bodies. The constraint Jacobian matrices are generated automatically and suitably updated in time, depending on the geometrical and kinematical constraint conditions describing the interconnection between shafts or gears. The type of constraints are classified based on the interconnection of the bodies by assuming that one or more points of contact exist between them. The effects due to elastic deformation of the flexible bodies are included by allowing each body element to undergo small deformations. The procedure is based on recursively formulated Kane's dynamical equations of motion and the finite element method, including the concept of geometrical stiffening effects. The method is implemented on an IBM-3090-600j vector processor with pipe-lining capabilities. A significant increase in the speed of execution is achieved by vectorizing the developed code in computationally intensive areas. An example consisting of two meshing disks rotating at high angular velocity is presented. Applications are intended for the study of the dynamic behavior of helicopter transmissions.
Hierarchical Model for the Analysis of Scattering Data of Complex Materials
Oyedele, Akinola; Mcnutt, Nicholas W.; Rios, Orlando; ...
2016-05-16
Interpreting the results of scattering data for complex materials with a hierarchical structure in which at least one phase is amorphous presents a significant challenge. Often the interpretation relies on the use of large-scale molecular dynamics (MD) simulations, in which a structure is hypothesized and from which a radial distribution function (RDF) can be extracted and directly compared against an experimental RDF. This computationally intensive approach presents a bottleneck in the efficient characterization of the atomic structure of new materials. Here, we propose and demonstrate an approach for a hierarchical decomposition of the RDF in which MD simulations are replacedmore » by a combination of tractable models and theory at the atomic scale and the mesoscale, which when combined yield the RDF. We apply the procedure to a carbon composite, in which graphitic nanocrystallites are distributed in an amorphous domain. We compare the model with the RDF from both MD simulation and neutron scattering data. Ultimately, this procedure is applicable for understanding the fundamental processing-structure-property relationships in complex magnetic materials.« less
The Behavior Pain Assessment Tool for critically ill adults: a validation study in 28 countries.
Gélinas, Céline; Puntillo, Kathleen A; Levin, Pavel; Azoulay, Elie
2017-05-01
Many critically ill adults are unable to communicate their pain through self-report. The study purpose was to validate the use of the 8-item Behavior Pain Assessment Tool (BPAT) in patients hospitalized in 192 intensive care units from 28 countries. A total of 4812 procedures in 3851 patients were included in data analysis. Patients were assessed with the BPAT before and during procedures by 2 different raters (mostly nurses and physicians). Those who were able to self-report were asked to rate their pain intensity and pain distress on 0 to 10 numeric rating scales. Interrater reliability of behavioral observations was supported by moderate (0.43-0.60) to excellent (>0.60) kappa coefficients. Mixed effects multilevel logistic regression models showed that most behaviors were more likely to be present during the procedure than before and in less sedated patients, demonstrating discriminant validation of the tool use. Regarding criterion validation, moderate positive correlations were found during procedures between the mean BPAT scores and the mean pain intensity (r = 0.54) and pain distress (r = 0.49) scores (P < 0.001). Regression models showed that all behaviors were significant predictors of pain intensity and pain distress, accounting for 35% and 29% of their total variance, respectively. A BPAT cut-point score >3.5 could classify patients with or without severe levels (≥8) of pain intensity and distress with sensitivity and specificity findings ranging from 61.8% to 75.1%. The BPAT was found to be reliable and valid. Its feasibility for use in practice and the effect of its clinical implementation on patient pain and intensive care unit outcomes need further research.
Simulation training tools for nonlethal weapons using gaming environments
NASA Astrophysics Data System (ADS)
Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry
2006-05-01
Modern simulation techniques have a growing role for evaluating new technologies and for developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstration of concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real-time. Computer games now allow for multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherent intensive calculations required for complex simulation scenarios. The main components of the leading game-engines have been released for user modifications, enabling game enthusiasts and amateur programmers to advance the state-of-the-art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users: learn what a device does and doesn't do prior to use, understand responses to the device prior to deployment, determine if the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life gameengine, for the purpose of evaluating the latest non-lethal weapon devices, and for developing training tools for such devices.
Kalkan, Erol; Kwong, Neal S.
2010-01-01
The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground-motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case for the central United States), or when high-intensity records are needed (as is the case for San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure recently was developed to determine scale factors for a small number of records, such that the scaled records provide accurate and efficient estimates of 'true' median structural responses. The adjective 'accurate' refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective 'efficient' refers to the record-to-record variability of responses. Herein, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing 'ordinary standard' bridges typical of reinforced-concrete bridge construction in California. These bridges are the single-bent overpass, multi span bridge, curved-bridge, and skew-bridge. As compared to benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the responses. Thus, the MPS procedure is a useful tool for scaling ground motions as input to nonlinear RHAs of 'ordinary standard' bridges.
Service Request Password Help New Users Back to HEP Computing Mail-Migration Procedure on Linux Mail -Migration Procedure on Windows How to Migrate a Folder to GMail using Pine U.S. Department of Energy The
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.
1980-01-01
Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.
Circus: A Replicated Procedure Call Facility
1984-08-01
Computer Science Laboratory, Xerox PARC, July 1082 . [24) Bruce Ja.y Nelson. Remote Procedure Ctdl. Ph.D. dissertation, Computer Science Department...t. Ph.D. dissertation, Computer Science Division, University of California, Berkeley, Xerox PARC report number CSIF 82-7, December 1082 . [30...Tandem Computers Inc. GUARDIAN Opet’ating Sy•tem Programming Mt~nulll, Volumu 1 11nd 2. C upertino, California, 1082 . [31) R. H. Thoma.s. A majority
Parallel computer processing and modeling: applications for the ICU
NASA Astrophysics Data System (ADS)
Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.
2003-07-01
Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software has been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.
Procedures for numerical analysis of circadian rhythms
REFINETTI, ROBERTO; LISSEN, GERMAINE CORNÉ; HALBERG, FRANZ
2010-01-01
This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111
Computer program for calculating the flow field of supersonic ejector nozzles
NASA Technical Reports Server (NTRS)
Anderson, B. H.
1974-01-01
An analytical procedure for computing the performance of supersonic ejector nozzles is presented. This procedure includes real sonic line effects and an interaction analysis for the mixing process between the two streams. The procedure is programmed in FORTRAN 4 and has operated successfully on IBM 7094, IBM 360, CDC 6600, and Univac 1108.
Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars
2014-12-06
Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups neither in PPTs nor pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.
Jiménez-Brenes, F M; López-Granados, F; de Castro, A I; Torres-Sánchez, J; Serrano, N; Peña, J M
2017-01-01
Tree pruning is a costly practice with important implications for crop harvest and nutrition, pest and disease control, soil protection and irrigation strategies. Investigations on tree pruning usually involve tedious on-ground measurements of the primary tree crown dimensions, which also might generate inconsistent results due to the irregular geometry of the trees. As an alternative to intensive field-work, this study shows a innovative procedure based on combining unmanned aerial vehicle (UAV) technology and advanced object-based image analysis (OBIA) methodology for multi-temporal three-dimensional (3D) monitoring of hundreds of olive trees that were pruned with three different strategies (traditional, adapted and mechanical pruning). The UAV images were collected before pruning, after pruning and a year after pruning, and the impacts of each pruning treatment on the projected canopy area, tree height and crown volume of every tree were quantified and analyzed over time. The full procedure described here automatically identified every olive tree on the orchard and computed their primary 3D dimensions on the three study dates with high accuracy in the most cases. Adapted pruning was generally the most aggressive treatment in terms of the area and volume (the trees decreased by 38.95 and 42.05% on average, respectively), followed by trees under traditional pruning (33.02 and 35.72% on average, respectively). Regarding the tree heights, mechanical pruning produced a greater decrease (12.15%), and these values were minimal for the other two treatments. The tree growth over one year was affected by the pruning severity and by the type of pruning treatment, i.e., the adapted-pruning trees experienced higher growth than the trees from the other two treatments when pruning intensity was low (<10%), similar to the traditionally pruned trees at moderate intensity (10-30%), and lower than the other trees when the pruning intensity was higher than 30% of the crown volume. Combining UAV-based images and an OBIA procedure allowed measuring tree dimensions and quantifying the impacts of three different pruning treatments on hundreds of trees with minimal field work. Tree foliage losses and annual canopy growth showed different trends as affected by the type and severity of the pruning treatments. Additionally, this technology offers valuable geo-spatial information for designing site-specific crop management strategies in the context of precision agriculture, with the consequent economic and environmental benefits.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Hierarchical Parallelization of Gene Differential Association Analysis
2011-01-01
Background Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916
Hierarchical parallelization of gene differential association analysis.
Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing
2011-09-21
Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.
NASA Astrophysics Data System (ADS)
Barlow, Steven J.
1986-09-01
The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.
A computational procedure for large rotational motions in multibody dynamics
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principle feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiev, R.Z.; Islamgaliev, R.K.; Kuzmina, N.F.
Intense plastic straining techniques such as torsion straining and equal channel angular (ECA) pressing are processing procedures which may be used to make beneficial changes in the properties of materials through a substantial refinement in the microstructure. Although intense plastic straining procedures have been used for grain refinement in numerous experiments reported over the last decade, there appears to have been no investigations in which these procedures were used with metal matrix composites. The present paper describes a series of experiments in which torsion straining and ECA pressing were applied to an Al-6061 metal matrix composite reinforced with 10 volumemore » % of Al{sub 2}O{sub 3} particulates. As will be demonstrated, intense plastic straining has the potential for both reducing the grain size of the composite to the submicrometer level and increasing the strength at room temperature by a factor in the range of {approximately}2 to {approximately}3.« less
NASA Technical Reports Server (NTRS)
Rogallo, Vernon L; Yaggy, Paul F; Mccloud, John L , III
1956-01-01
A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.
Automatic aortic root segmentation in CTA whole-body dataset
NASA Astrophysics Data System (ADS)
Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.
2016-03-01
Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
Quantitative fluorescence correlation spectroscopy on DNA in living cells
NASA Astrophysics Data System (ADS)
Hodges, Cameron; Kafle, Rudra P.; Meiners, Jens-Christian
2017-02-01
FCS is a fluorescence technique conventionally used to study the kinetics of fluorescent molecules in a dilute solution. Being a non-invasive technique, it is now drawing increasing interest for the study of more complex systems like the dynamics of DNA or proteins in living cells. Unlike an ordinary dye solution, the dynamics of macromolecules like proteins or entangled DNA in crowded environments is often slow and subdiffusive in nature. This in turn leads to longer residence times of the attached fluorophores in the excitation volume of the microscope and artifacts from photobleaching abound that can easily obscure the signature of the molecular dynamics of interest and make quantitative analysis challenging.We discuss methods and procedures to make FCS applicable to quantitative studies of the dynamics of DNA in live prokaryotic and eukaryotic cells. The intensity autocorrelation is computed function from weighted arrival times of the photons on the detector that maximizes the information content while simultaneously correcting for the effect of photobleaching to yield an autocorrelation function that reflects only the underlying dynamics of the sample. This autocorrelation function in turn is used to calculate the mean square displacement of the fluorophores attached to DNA. The displacement data is more amenable to further quantitative analysis than the raw correlation functions. By using a suitable integral transform of the mean square displacement, we can then determine the viscoelastic moduli of the DNA in its cellular environment. The entire analysis procedure is extensively calibrated and validated using model systems and computational simulations.
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the flow physics study gives historical precedents to above research, and speculates on its future course.
The purpose of this SOP is to define the procedures for the initial and periodic verification and validation of computer programs. The programs are used during the Arizona NHEXAS project and Border study at the Illinois Institute of Technology (IIT) site. Keywords: computers; s...
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the "Border" study. Keywords: Computers; Software; QA/QC.
The National Human Exposure Assessment Sur...
Differences in muscle load between computer and non-computer work among office workers.
Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A
2009-12-01
Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.
Kernel and System Procedures in Flex.
1983-08-01
System procedures on which the operating system for the Flex computer is based. These are the low level rOCedures Whbich are used to implement the compilers, file-store* coummand interpreters etc on Flex. 168 ... System procedures on which the operating system for the Flex computer is based. These are the low level procedures which are used to implement the...privileged mode. They form the interface between the user and a particular operating system written on top of the Kernel.
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
Self-organizing maps for learning the edit costs in graph matching.
Neuhaus, Michel; Bunke, Horst
2005-06-01
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.
NASA Astrophysics Data System (ADS)
Cheong, Youjin; Kim, Young Jin; Kang, Heeyoon; Choi, Samjin; Lee, Hee Joo
2017-08-01
Although many methodologies have been developed to identify unknown bacteria, bacterial identification in clinical microbiology remains a complex and time-consuming procedure. To address this problem, we developed a label-free method for rapidly identifying clinically relevant multilocus sequencing typing-verified quinolone-resistant Klebsiella pneumoniae strains. We also applied the method to identify three strains from colony samples, ATCC70063 (control), ST11 and ST15; these are the prevalent quinolone-resistant K. pneumoniae strains in East Asia. The colonies were identified using a drop-coating deposition surface-enhanced Raman scattering (DCD-SERS) procedure coupled with a multivariate statistical method. Our workflow exhibited an enhancement factor of 11.3 × 106 to Raman intensities, high reproducibility (relative standard deviation of 7.4%), and a sensitive limit of detection (100 pM rhodamine 6G), with a correlation coefficient of 0.98. All quinolone-resistant K. pneumoniae strains showed similar spectral Raman shifts (high correlations) regardless of bacterial type, as well as different Raman vibrational modes compared to Escherichia coli strains. Our proposed DCD-SERS procedure coupled with the multivariate statistics-based identification method achieved excellent performance in discriminating similar microbes from one another and also in subtyping of K. pneumoniae strains. Therefore, our label-free DCD-SERS procedure coupled with the computational decision supporting method is a potentially useful method for the rapid identification of clinically relevant K. pneumoniae strains.
Inverse boundary-layer theory and comparison with experiment
NASA Technical Reports Server (NTRS)
Carter, J. E.
1978-01-01
Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.
Motion magnification for endoscopic surgery
NASA Astrophysics Data System (ADS)
McLeod, A. Jonathan; Baxter, John S. H.; de Ribaupierre, Sandrine; Peters, Terry M.
2014-03-01
Endoscopic and laparoscopic surgeries are used for many minimally invasive procedures but limit the visual and haptic feedback available to the surgeon. This can make vessel sparing procedures particularly challenging to perform. Previous approaches have focused on hardware intensive intraoperative imaging or augmented reality systems that are difficult to integrate into the operating room. This paper presents a simple approach in which motion is visually enhanced in the endoscopic video to reveal pulsating arteries. This is accomplished by amplifying subtle, periodic changes in intensity coinciding with the patient's pulse. This method is then applied to two procedures to illustrate its potential. The first, endoscopic third ventriculostomy, is a neurosurgical procedure where the floor of the third ventricle must be fenestrated without injury to the basilar artery. The second, nerve-sparing robotic prostatectomy, involves removing the prostate while limiting damage to the neurovascular bundles. In both procedures, motion magnification can enhance subtle pulsation in these structures to aid in identifying and avoiding them.
A feasibility study on porting the community land model onto accelerators using OpenACC
Wang, Dali; Wu, Wei; Winkler, Frank; ...
2014-01-01
As environmental models (such as Accelerated Climate Model for Energy (ACME), Parallel Reactive Flow and Transport Model (PFLOTRAN), Arctic Terrestrial Simulator (ATS), etc.) became more and more complicated, we are facing enormous challenges regarding to porting those applications onto hybrid computing architecture. OpenACC appears as a very promising technology, therefore, we have conducted a feasibility analysis on porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Models (CESM)). Specifically, we used automatic function testing platform to extract a small computing kernel out of CLM, then we apply this kernel into the actually CLM dataflowmore » procedure, and investigate the strategy of data parallelization and the benefit of data movement provided by current implementation of OpenACC. Even it is a non-intensive kernel, on a single 16-core computing node, the performance (based on the actual computation time using one GPU) of OpenACC implementation is 2.3 time faster than that of OpenMP implementation using single OpenMP thread, but it is 2.8 times slower than the performance of OpenMP implementation using 16 threads. On multiple nodes, MPI_OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for us to look into the potential benefits of “deep copy” capability and “routine” feature of OpenACC standards. In conclusion, we believe that our experience on the environmental model, CLM, can be beneficial to many other scientific research programs who are interested to porting their large scale scientific code using OpenACC onto high-end computers, empowered by hybrid computing architecture.« less
Computer-based System for the Virtual-Endoscopic Guidance of Bronchoscopy.
Helferty, J P; Sherbondy, A J; Kiraly, A P; Higgins, W E
2007-11-01
The standard procedure for diagnosing lung cancer involves two stages: three-dimensional (3D) computed-tomography (CT) image assessment, followed by interventional bronchoscopy. In general, the physician has no link between the 3D CT image assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure, through a live registration and fusion of the 3D CT data and bronchoscopic video. During a procedure, the system provides many visual tools, fused CT-video data, and quantitative distance measures; this gives the physician considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. Central to the system is a CT-video registration technique, based on normalized mutual information. Several sets of results verify the efficacy of the registration technique. In addition, we present a series of test results for the complete system for phantoms, animals, and human lung-cancer patients. The results indicate that not only is the variation in skill level between different physicians greatly reduced by the system over the standard procedure, but that biopsy effectiveness increases.
Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon
2015-11-03
Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for 14-3-3β dynamic interaction network and prostate cancer glycoproteome. The software was written in C++ language and the source code is available for free through SourceForge website http://sourceforge.net/projects/mapdia/.This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
Rapid prototyping--when virtual meets reality.
Beguma, Zubeda; Chhedat, Pratik
2014-01-01
Rapid prototyping (RP) describes the customized production of solid models using 3D computer data. Over the past decade, advances in RP have continued to evolve, resulting in the development of new techniques that have been applied to the fabrication of various prostheses. RP fabrication technologies include stereolithography (SLA), fused deposition modeling (FDM), computer numerical controlled (CNC) milling, and, more recently, selective laser sintering (SLS). The applications of RP techniques for dentistry include wax pattern fabrication for dental prostheses, dental (facial) prostheses mold (shell) fabrication, and removable dental prostheses framework fabrication. In the past, a physical plastic shape of the removable partial denture (RPD) framework was produced using an RP machine, and then used as a sacrificial pattern. Yet with the advent of the selective laser melting (SLM) technique, RPD metal frameworks can be directly fabricated, thereby omitting the casting stage. This new approach can also generate the wax pattern for facial prostheses directly, thereby reducing labor-intensive laboratory procedures. Many people stand to benefit from these new RP techniques for producing various forms of dental prostheses, which in the near future could transform traditional prosthodontic practices.
A test procedure for determining the influence of stress ratio on fatigue crack growth
NASA Technical Reports Server (NTRS)
Fitzgerald, J. H.; Wei, R. P.
1974-01-01
A test procedure is outlined by which the rate of fatigue crack growth over a range of stress ratios and stress intensities can be determined expeditiously using a small number of specimens. This procedure was developed to avoid or circumvent the effects of load interactions on fatigue crack growth, and was used to develop data on a mill annealed Ti-6Al-4V alloy plate. Experimental data suggest that the rates of fatigue crack growth among the various stress ratios may be correlated in terms of an effective stress intensity range at given values of K max. This procedure is not to be used, however, for determining the corrosion fatigue crack growth characteristics of alloys when nonsteady-state effects are significant.
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
Intensive Outpatient Behavioral Treatment of Primary Urinary Incontinence of Children With Autism
ERIC Educational Resources Information Center
LeBlanc, Linda A.; Carr, James E.; Crossett, Sarah E.; Bennett, Christine M.; Detweiler, Dawn D.
2005-01-01
Three children with autism who were previously nonresponsive to low-intensity toilet training interventions were toilet trained using a modified Azrin and Foxx (1971) intensive toilet training procedure. Effects were demonstrated using a nonconcurrent multiple baseline design across participants. The training was conducted across home and school…
The large-scale structure of software-intensive systems
Booch, Grady
2012-01-01
The computer metaphor is dominant in most discussions of neuroscience, but the semantics attached to that metaphor are often quite naive. Herein, we examine the ontology of software-intensive systems, the nature of their structure and the application of the computer metaphor to the metaphysical questions of self and causation. PMID:23386964
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the security and efficacy of thermal therapies. These points would lead to a broader acceptance of this kind of therapies. Several approaches based on medical imaging technologies were proposed, magnetic resonance imaging (MRI) being appointed as the only one to achieve the acceptable temperature resolutions for hyperthermia purposes. However, MRI intrinsic characteristics (e.g., high instrumentation cost) lead us to use backscattered ultrasound (BSU). Among the different BSU features, temporal echo-shifts have received a major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods) and the application to temperature estimation in a three-layer phantom is also not reported in literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder in the amount of 2% of the water weight to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimators inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis functions neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0W/cm(2). Models were trained and selected to estimate temperature at only four intensities, then during the validation phase, the best-fitted models were analyzed in data collected at the eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were achieved. The majority of them present an average maximum absolute error (MAE) inferior to 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C for both the 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) pointed for hyperthermia was fulfilled independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. As the majority of the preferable estimator models, the best one presents 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models present also a reduced computational complexity and then the possibility to be applied in real-time. A non-invasive temperature estimation model, based on soft-computing technique, was proposed for a three-layered phantom. The best-achieved estimator models presented an appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far, estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the necessity of a-priory knowledge of the temperature behavior. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
NASA Astrophysics Data System (ADS)
Perrin, A.; Ndao, M.; Manceron, L.
2017-10-01
A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the line positions and intensities calculation were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensities calculation in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here, that the NDSD-1000 formulation is incorrect since the computed intensities do not account properly for the (Int(+)/Int(-)) intensity ratio between the (+) (J = N+ 1/2) and (-) (J = N-1/2) electron - spin rotation subcomponents of the computed vibration rotation transitions. On the other hand, in the HITRAN or GEISA spectroscopic databases, the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to a significant better agreement between the observed and calculated spectra.
Automated reconstruction of rainfall events responsible for shallow landslides
NASA Astrophysics Data System (ADS)
Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.
2014-04-01
Over the last 40 years, many contributions have been devoted to identifying the empirical rainfall thresholds (e.g. intensity vs. duration ID, cumulated rainfall vs. duration ED, cumulated rainfall vs. intensity EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in literature, a systematic study to develop an automated procedure to select the rainfall event responsible for the landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role on the threshold values. In this paper, two criteria for the identification of the effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in the time function of the cumulated mean intensity series calculated from the rainfall records measured through rain gauges. The two criteria have been implemented in an automated procedure written in R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of rainfall events that triggered the documented landslides are calculated through the new procedure and are fitted with power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure and the ones by the expert method.
Remote observing with NASA's Deep Space Network
NASA Astrophysics Data System (ADS)
Kuiper, T. B. H.; Majid, W. A.; Martinez, S.; Garcia-Miro, C.; Rizzo, J. R.
2012-09-01
The Deep Space Network (DSN) communicates with spacecraft as far away as the boundary between the Solar System and the interstellar medium. To make this possible, large sensitive antennas at Canberra, Australia, Goldstone, California, and Madrid, Spain, provide for constant communication with interplanetary missions. We describe the procedures for radioastronomical observations using this network. Remote access to science monitor and control computers by authorized observers is provided by two-factor authentication through a gateway at the Jet Propulsion Laboratory (JPL) in Pasadena. To make such observations practical, we have devised schemes based on SSH tunnels and distributed computing. At the very minimum, one can use SSH tunnels and VNC (Virtual Network Computing, a remote desktop software suite) to control the science hosts within the DSN Flight Operations network. In this way we have controlled up to three telescopes simultaneously. However, X-window updates can be slow and there are issues involving incompatible screen sizes and multi-screen displays. Consequently, we are now developing SSH tunnel-based schemes in which instrument control and monitoring, and intense data processing, are done on-site by the remote DSN hosts while data manipulation and graphical display are done at the observer's host. We describe our approaches to various challenges, our experience with what worked well and lessons learned, and directions for future development.
Quality assurance for respiratory care services: a computer-assisted program.
Elliott, C G
1993-01-01
At present, the principal advantage of computer-assisted quality assurance is the acquisition of quality assurance date without resource-consuming chart reviews. A surveillance program like the medical director's alert may reduce morbidity and mortality. Previous research suggests that inadequate oxygen therapy or failures in airway management are important causes of preventable deaths in hospitals. Furthermore, preventable deaths tend to occur among patients who have lower severity-of-illness scores and who are not in ICUs. Thus, surveillance of the entire hospital, as performed by the HIS medical director's alert, may significantly impact hospital mortality related to respiratory care. Future research should critically examine the potential of such computerized systems to favorably change the morbidity and mortality of hospitalized patients. The departments of respiratory care and medical informatics at LDS Hospital have developed a computer-assisted approach to quality assurance monitoring of respiratory care services. This system provides frequent and consistent samples of a variety of respiratory care data. The immediate needs of patients are addressed through a daily surveillance system (medical director's alert). The departmental quality assurance program utilizes a separate program that monitors clinical indicators of staff performance in terms of stated departmental policies and procedures (rate-based clinical indicators). The availability of an integrated patient database allows these functions to be performed without labor-intensive chart audits.
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFT's) for both P- and S- waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P- wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation, but rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
Decision making and preferences for acoustic signals in choice situations by female crickets.
Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias
2015-08-01
Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.
Computer Use in Research Exercises: Some Suggested Procedures for Undergraduate Political Science.
ERIC Educational Resources Information Center
Comer, John
1979-01-01
Describes some procedures designed to assist instructors in developing a research component using the computer. Benefits include development of research skills, kindling student interest in the field of political science, and recruitment potential. (Author/CK)
Software For Computer-Aided Design Of Control Systems
NASA Technical Reports Server (NTRS)
Wette, Matthew
1994-01-01
Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the Border study. Keywords: Computers; Software; QA/QC.
The U.S.-Mexico Border Program is sponsored ...
Intensive educational course in allergy and immunology.
Elizalde, A; Perez, E E; Sriaroon, P; Nguyen, D; Lockey, R F; Dorsey, M J
2012-09-01
A one-day intensive educational course on allergy and immunology theory and diagnostic procedure significantly increased the competency of allergy and immunology fellows-in-training. © 2012 John Wiley & Sons A/S.
CATE 2016 Indonesia: Image Calibration, Intensity Calibration, and Drift Scan
NASA Astrophysics Data System (ADS)
Hare, H. S.; Kovac, S. A.; Jensen, L.; McKay, M. A.; Bosh, R.; Watson, Z.; Mitchell, A. M.; Penn, M. J.
2016-12-01
The citizen Continental America Telescopic Eclipse (CATE) experiment aims to provide equipment for 60 sites across the path of totality for the United States August 21st, 2017 total solar eclipse. The opportunity to gather ninety minutes of continuous images of the solar corona is unmatched by any other previous eclipse event. In March of 2016, 5 teams were sent to Indonesia to test CATE equipment and procedures on the March 9th, 2016 total solar eclipse. Also, a goal of the trip was practice and gathering data to use in testing data reduction methods. Of the five teams, four collected data. While in Indonesia, each group participated in community outreach in the location of their site. The 2016 eclipse allowed CATE to test the calibration techniques for the 2017 eclipse. Calibration dark current and flat field images were collected to remove variation across the cameras. Drift scan observations provided information to rotationally align the images from each site. These image's intensity values allowed for intensity calibration for each of the sites. A GPS at each site corrected for major computer errors in time measurement of images. Further refinement of these processes is required before the 2017 eclipse. This work was made possible through the NSO Training for the 2017 Citizen CATE Experiment funded by NASA (NASA NNX16AB92A).
Menezes, Pedro Monteiro; Cook, Timothy Wayne; Cavalini, Luciana Tricai
2016-01-01
To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. This study demonstrated the semantic enrichment of HL7v2 messages for intensive-software telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard in Semantic Web technologies.
Implicit method for the computation of unsteady flows on unstructured grids
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, D. J.
1995-01-01
An implicit method for the computation of unsteady flows on unstructured grids is presented. Following a finite difference approximation for the time derivative, the resulting nonlinear system of equations is solved at each time step by using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Inviscid and viscous unsteady flows are computed to validate the procedure. The issue of the mass matrix which arises with vertex-centered finite volume schemes is addressed. The present formulation allows the mass matrix to be inverted indirectly. A mesh point movement and reconnection procedure is described that allows the grids to evolve with the motion of bodies. As an example of flow over bodies in relative motion, flow over a multi-element airfoil system undergoing deployment is computed.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. This method requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed, because calibration curve relating water content and reflected light intensities is established during the main monitoring phase of each experiment and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank and water content maps produced by the photographic measurement technique and the numerical simulations demonstrate the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly for its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments which will serve as benchmark for numerical codes validation.
Direct Numerical Simulation of Automobile Cavity Tones
NASA Technical Reports Server (NTRS)
Kurbatskii, Konstantin; Tam, Christopher K. W.
2000-01-01
The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.
Increased Memory Load during Task Completion when Procedures Are Presented on Mobile Screens
ERIC Educational Resources Information Center
Byrd, Keena S.; Caldwell, Barrett S.
2011-01-01
The primary objective of this research was to compare procedure-based task performance using three common mobile screen sizes: ultra mobile personal computer (7 in./17.8 cm), personal data assistant (3.5 in./8.9 cm), and SmartPhone (2.8 in./7.1 cm). Subjects used these three screen sizes to view and execute a computer maintenance procedure.…
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
Ali, Syed Mashhood; Shamim, Shazia
2015-07-01
Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine atom-accurate structure of the inclusion complexes. (1) H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. ROESY spectrum confirmed the presence of aromatic ring in the β-CD cavity but whether one of the two or both rings was not clear. Molecular mechanics and molecular dynamic calculations showed the entry of fluoro-ring from wider side of β-CD cavity as the most favored mode of inclusion. Minimum energy computational models were analyzed for their accuracy in atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found in agreement. Several least energy computational models were refined and analyzed till calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom-accuracy and quantitative ROESY analysis is a promising method. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
Simulating Quantile Models with Applications to Economics and Management
NASA Astrophysics Data System (ADS)
Machado, José A. F.
2010-05-01
The massive increase in the speed of computers over the past forty years changed the way that social scientists, applied economists and statisticians approach their trades and also the very nature of the problems that they could feasibly tackle. The new methods that use intensively computer power go by the names of "computer-intensive" or "simulation". My lecture will start with bird's eye view of the uses of simulation in Economics and Statistics. Then I will turn out to my own research on uses of computer- intensive methods. From a methodological point of view the question I address is how to infer marginal distributions having estimated a conditional quantile process, (Counterfactual Decomposition of Changes in Wage Distributions using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars
2013-08-01
Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD).The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958...
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958...
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958...
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.
NASA Astrophysics Data System (ADS)
Stallard, A. P.; Smith, S. R.; Elya, J. L.
2016-12-01
The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS which are modeled into a time tree and r-tree using Graph Aware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within CYPHER (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via CYPHER statements. Neo4j excels at performing relationship and path-based queries, which challenge relational-SQL databases because they require memory intensive joins due to the limitation of their design. Consider a user who wants to find records over several years, but only for specific months. If a traditional database only stores timestamps, this type of query would be complex and likely prohibitively slow. Using the time tree model, one can specify a path from the root to the data which restricts resolutions to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage to the SQL database alternative.
Guedj, Romain; Danan, Claude; Daoud, Patrick; Zupan, Véronique; Renolleau, Sylvain; Zana, Elodie; Aizenfisz, Sophie; Lapillonne, Alexandre; de Saint Blanquat, Laure; Granier, Michèle; Durand, Philippe; Castela, Florence; Coursol, Anne; Hubert, Philippe; Cimerman, Patricia; Anand, K J S; Khoshnood, Babak; Carbajal, Ricardo
2014-02-20
To determine whether analgesic use for painful procedures performed in neonates in the neonatal intensive care unit (NICU) differs during nights and days and during each of the 6 h period of the day. Conducted as part of the prospective observational Epidemiology of Painful Procedures in Neonates study which was designed to collect in real time and around-the-clock bedside data on all painful or stressful procedures. 13 NICUs and paediatric intensive care units in the Paris Region, France. All 430 neonates admitted to the participating units during a 6-week period between September 2005 and January 2006. During the first 14 days of admission, data were collected on all painful procedures and analgesic therapy. The five most frequent procedures representing 38 012 of all 42 413 (90%) painful procedures were analysed. Observational study. We compared the use of specific analgesic for procedures performed during each of the 6 h period of a day: morning (7:00 to 12:59), afternoon, early night and late night and during daytime (morning+afternoon) and night-time (early night+late night). 7724 of 38 012 (20.3%) painful procedures were carried out with a specific analgesic treatment. For morning, afternoon, early night and late night, respectively, the use of analgesic was 25.8%, 18.9%, 18.3% and 18%. The relative reduction of analgesia was 18.3%, p<0.01, between daytime and night-time and 28.8%, p<0.001, between morning and the rest of the day. Parental presence, nurses on 8 h shifts and written protocols for analgesia were associated with a decrease in this difference. The substantial differences in the use of analgesics around-the-clock may be questioned on quality of care grounds.
Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception
2017-12-01
computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650’s Intel...Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead
PNNL Data-Intensive Computing for a Smarter Energy Grid
Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria
2017-12-09
The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with goal of near real-time analysis to aid operators in preventing and mitigating grid failures.
NASA Astrophysics Data System (ADS)
Brodyn, M. S.; Starkov, V. N.
2007-07-01
It is shown that in laser experiments performed by using an 'imperfect' setup when instrumental distortions are considerable, sufficiently accurate results can be obtained by the modern methods of computational physics. It is found for the first time that a new instrumental function — the 'cap' function — a 'sister' of a Gaussian curve proved to be demanded namely in laser experiments. A new mathematical model of a measurement path and carefully performed computational experiment show that a light beam transmitted through a mesoporous film has actually a narrower intensity distribution than the detected beam, and the amplitude of the real intensity distribution is twice as large as that for measured intensity distributions.
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sullivan, T. L.
1974-01-01
An approximate computational procedure is described for the analysis of angleplied laminates with residual nonlinear strains. The procedure consists of a combination of linear composite mechanics and incremental linear laminate theory. The procedure accounts for initial nonlinear strains, unloading, and in-situ matrix orthotropic nonlinear behavior. The results obtained in applying the procedure to boron/aluminum angleplied laminates show that this is a convenient means to accurately predict the initial tangent properties of angleplied laminates in which the matrix has been strained nonlinearly by the lamination residual stresses. The procedure predicted initial tangent properties results which were in good agreement with measured data obtained from boron/aluminum angleplied laminates.
NASA Astrophysics Data System (ADS)
Rambaldi, Marcello; Filimonov, Vladimir; Lillo, Fabrizio
2018-03-01
Given a stationary point process, an intensity burst is defined as a short time period during which the number of counts is larger than the typical count rate. It might signal a local nonstationarity or the presence of an external perturbation to the system. In this paper we propose a procedure for the detection of intensity bursts within the Hawkes process framework. By using a model selection scheme we show that our procedure can be used to detect intensity bursts when both their occurrence time and their total number is unknown. Moreover, the initial time of the burst can be determined with a precision given by the typical interevent time. We apply our methodology to the midprice change in foreign exchange (FX) markets showing that these bursts are frequent and that only a relatively small fraction is associated with news arrival. We show lead-lag relations in intensity burst occurrence across different FX rates and we discuss their relation with price jumps.
NASA Technical Reports Server (NTRS)
Koppen, Sandra V.; Nguyen, Truong X.; Mielnik, John J.
2010-01-01
The NASA Langley Research Center's High Intensity Radiated Fields Laboratory has developed a capability based on the RTCA/DO-160F Section 20 guidelines for radiated electromagnetic susceptibility testing in reverberation chambers. Phase 1 of the test procedure utilizes mode-tuned stirrer techniques and E-field probe measurements to validate chamber uniformity, determines chamber loading effects, and defines a radiated susceptibility test process. The test procedure is segmented into numbered operations that are largely software controlled. This document is intended as a laboratory test reference and includes diagrams of test setups, equipment lists, as well as test results and analysis. Phase 2 of development is discussed.
Percutaneous dilatational versus conventional surgical tracheostomy in intensive care patients
Youssef, Tarek F.; Ahmed, Mohamed Rifaat; Saber, Aly
2011-01-01
Background: Tracheostomy is usually performed in patients with difficult weaning from mechanical ventilation or some catastrophic neurologic insult. Conventional tracheostomy involves dissection of the pretracheal tissues and insertion of the tracheostomy tube into the trachea under direct vision. Percutaneous dilatational tracheostomy is increasingly popular and has gained widespread acceptance in many intensive care unit and trauma centers. Aim: Aim of the study was to compare percutaneous dilatational tracheostomy versus conventional tracheostomy in intensive care patients. Patients and Methods: 64 critically ill patients admitted to intensive care unit subjected to tracheostomy and randomly divided into two groups; percutaneous dilatational tracheostomy and conventional tracheostomy. Results: Mean duration of the procedure was similar between the two procedures while the mean size of tracheostomy tube was smaller in percutaneous technique. In addition, the Lowest SpO2 during procedure, PaCO2 after operation and intra-operative bleeding for both groups were nearly similar without any statistically difference. Postoperative infection after 7 days seen to be statistically lowered and the length of scar tend to be smaller among PDT patients. Conclusion: PDT technique is effective and safe as CST with low incidence of post operative complication. PMID:22361497
Kay-Lambkin, Frances J; Baker, Amanda L; Lewin, Terry J; Carr, Vaughan J
2009-03-01
To evaluate computer- versus therapist-delivered psychological treatment for people with comorbid depression and alcohol/cannabis use problems. Randomized controlled trial. Community-based participants in the Hunter Region of New South Wales, Australia. Ninety-seven people with comorbid major depression and alcohol/cannabis misuse. All participants received a brief intervention (BI) for depressive symptoms and substance misuse, followed by random assignment to: no further treatment (BI alone); or nine sessions of motivational interviewing and cognitive behaviour therapy (intensive MI/CBT). Participants allocated to the intensive MI/CBT condition were selected at random to receive their treatment 'live' (i.e. delivered by a psychologist) or via a computer-based program (with brief weekly input from a psychologist). Depression, alcohol/cannabis use and hazardous substance use index scores measured at baseline, and 3, 6 and 12 months post-baseline assessment. (i) Depression responded better to intensive MI/CBT compared to BI alone, with 'live' treatment demonstrating a strong short-term beneficial effect which was matched by computer-based treatment at 12-month follow-up; (ii) problematic alcohol use responded well to BI alone and even better to the intensive MI/CBT intervention; (iii) intensive MI/CBT was significantly better than BI alone in reducing cannabis use and hazardous substance use, with computer-based therapy showing the largest treatment effect. Computer-based treatment, targeting both depression and substance use simultaneously, results in at least equivalent 12-month outcomes relative to a 'live' intervention. For clinicians treating people with comorbid depression and alcohol problems, BIs addressing both issues appear to be an appropriate and efficacious treatment option. Primary care of those with comorbid depression and cannabis use problems could involve computer-based integrated interventions for depression and cannabis use, with brief regular contact with the clinician to check on progress.
Shih, Pei-Cheng; Yang, Yea-Ru; Wang, Ray-Yau
2013-01-01
Memory impairment is commonly noted in stroke survivors, and can lead to delay of functional recovery. Exercise has been proved to improve memory in adult healthy subjects. Such beneficial effects are often suggested to relate to hippocampal synaptic plasticity, which is important for memory processing. Previous evidence showed that in normal rats, low intensity exercise can improve synaptic plasticity better than high intensity exercise. However, the effects of exercise intensities on hippocampal synaptic plasticity and spatial memory after brain ischemia remain unclear. In this study, we investigated such effects in brain ischemic rats. The middle cerebral artery occlusion (MCAO) procedure was used to induce brain ischemia. After the MCAO procedure, rats were randomly assigned to sedentary (Sed), low-intensity exercise (Low-Ex), or high-intensity exercise (High-Ex) group. Treadmill training began from the second day post MCAO procedure, 30 min/day for 14 consecutive days for the exercise groups. The Low-Ex group was trained at the speed of 8 m/min, while the High-Ex group at the speed of 20 m/min. The spatial memory, hippocampal brain-derived neurotrophic factor (BDNF), synapsin-I, postsynaptic density protein 95 (PSD-95), and dendritic structures were examined to document the effects. Serum corticosterone level was also quantified as stress marker. Our results showed the Low-Ex group, but not the High-Ex group, demonstrated better spatial memory performance than the Sed group. Dendritic complexity and the levels of BDNF and PSD-95 increased significantly only in the Low-Ex group as compared with the Sed group in bilateral hippocampus. Notably, increased level of corticosterone was found in the High-Ex group, implicating higher stress response. In conclusion, after brain ischemia, low intensity exercise may result in better synaptic plasticity and spatial memory performance than high intensity exercise; therefore, the intensity is suggested to be considered during exercise training.
Computations of total sediment discharge, Niobrara River near Cody, Nebraska
Colby, Bruce R.; Hembree, C.H.
1955-01-01
A natural chute in the Niobrara River near Cody, Nebr., constricts the flow of the river except at high stages to a narrow channel in which the turbulence is sufficient to suspend nearly the total sediment discharge. Because much of the flow originates in the sandhills area of Nebraska, the water discharge and sediment discharge are relatively uniform. Sediment discharges based on depth-integrated samples at a contracted section in the chute and on streamflow records at a recording gage about 1,900 feet upstream are available for the period from April 1948 to September 1953 but are not given directly as continuous records in this report. Sediment measurements have been made periodically near the gage and at other nearby relatively unconfined sections of the stream for comparison with measurements at the contracted section. Sediment discharge at these relatively unconfined sections was computed from formulas for comparison with measured sediment discharges at the contracted section. A form of the Du Boys formula gave computed tonnages of sediment that were unsatisfactory. Sediment discharges as computed from the Schoklitsch formula agreed well with measured sediment discharges that were low, but they were much too low at measured sediment discharges that were higher. The Straub formula gave computed discharges, presumably of bed material, that were several times larger than measured discharges of sediment coarser than 0.125 millimeter. All three of these formulas gave computed sediment discharges that increased with water discharges much less rapidly than the measured discharges of sediment coarser than 0.125 millimeter. The Einstein procedure when applied to a reach that included 10 defined cross sections gave much better agreement between computed sediment discharge and measured sediment discharge than did anyone of the three other formulas that were used. This procedure does not compute the discharge of sediment that is too small to be found in the stream bed in appreciable quantities. Hence, total sediment discharges were obtained by adding computed discharges of sediment larger than 0.125 millimeter to measured discharges of sediment smaller than 0.125 millimeter. The size distributions of the computed sediment discharge compared poorly with the size distributions of sediment discharge at the contracted section. Ten sediment discharges computed from the Einstein procedure as applied to a single section averaged several times the measured sediment discharge for the contracted section and gave size distributions that were unsatisfactory. The Einstein procedure was modified to compute total sediment discharge at an alluvial section from readily measurable field data. The modified procedure uses measurements of bed-material particle sizes, suspended-sediment concentrations and particle sizes from depth-integrated samples, streamflow, and water temperatures. Computations of total sediment discharge were made by using this modified procedure, some for the section at the gaging station and some for each of two other relatively unconfined sections. The size distributions of the computed and the measured sediment discharges agreed reasonably well. Major advantages of this modified procedure include applicability to a single section rather than to a reach of channel, use of measured velocity instead of water-surface slope, use of depth-integrated samples, and apparently fair accuracy for computing both total sediment discharge and approximate size distribution of the sediment. Because of these advantages this modified procedure is being further studied to increase its accuracy, to simplify the required computations, and to define its limitations. In the development of the modified procedure, some relationships concerning theories of sediment transport were reviewed and checked against field data. Vertical distributions of suspended sediment at relatively unconfined sections did not agree well with theoretical dist
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code for the momentum and energy equation. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.
Computed intraoperative navigation guidance--a preliminary report on a new technique.
Enislidis, G; Wagner, A; Ploder, O; Ewers, R
1997-08-01
To assess the value of a computer-assisted three-dimensional guidance system (Virtual Patient System) in maxillofacial operations. Laboratory and open clinical study. Teaching Hospital, Austria. 6 patients undergoing various procedures including removal of foreign body (n=3) and biopsy, maxillary advancement, and insertion of implants (n=1 each). Storage of computed tomographic (CT) pictures on an optical disc, and imposition of intraoperative video images on to these. The resulting display is shown to the surgeon on a micromonitor in his head-up display for guidance during the operations. To improve orientation during complex or minimally invasive maxillofacial procedures and to make such operations easier and less traumatic. Successful transferral of computed navigation technology into an operation room environment and positive evaluation of the method by the surgeons involved. Computer-assisted three-dimensional guidance systems have the potential for making complex or minimally invasive procedures easier to do, thereby reducing postoperative morbidity.
[The history and development of computer assisted orthopaedic surgery].
Jenny, J-Y
2006-10-01
Computer assisted orthopaedic surgery (CAOS) was developed to improve the accuracy of surgical procedures. It has improved dramatically over the last years, being transformed from an experimental, laboratory procedure into a routine procedure theoretically available to every orthopaedic surgeon. The first field of application of computer assistance was neurosurgery. After the application of computer guided spinal surgery, the navigation of total hip and knee joints became available. Currently, several applications for computer assisted surgery are available. At the beginning of navigation, a preoperative CT-scan or several fluoroscopic images were necessary. The imageless systems allow the surgeon to digitize patient anatomy at the beginning of surgery without any preoperative imaging. The future of CAOS remains unknown, but there is no doubt that its importance will grow in the next 10 years, and that this technology will probably modify the conventional practice of orthopaedic surgery.
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine are considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analysis. The inviscid, quasi three dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous three dimensional internal flow cade for the momentum and energy equation. These boundary conditions are input to a three dimensional heat conduction code for the calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results are given.
Lee, M-Y; Chang, C-C; Ku, Y C
2008-01-01
Fixed dental restoration by conventional methods greatly relies on the skill and experience of the dental technician. The quality and accuracy of the final product depends mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures, and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT) was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.
NASA Astrophysics Data System (ADS)
Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.
2014-12-01
A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
Design and results of the pretest of the IDEFICS study.
Suling, M; Hebestreit, A; Peplies, J; Bammann, K; Nappo, A; Eiben, G; Alvira, J M Fernández; Verbestel, V; Kovács, E; Pitsiladis, Y P; Veidebaum, T; Hadjigeorgiou, C; Knof, K; Ahrens, W
2011-04-01
During the preparatory phase of the baseline survey of the IDEFICS (Identification and prevention of dietary- and lifestyle-induced health effects in children and infants) study, standardised survey procedures including instruments, examinations, methods, biological sampling and software tools were developed and pretested for their feasibility, robustness and acceptability. A pretest was conducted of full survey procedures in 119 children aged 2-9 years in nine European survey centres (N(per centre)=4-27, mean 13.22). Novel techniques such as ultrasound measurements to assess subcutaneous fat and bone health, heart rate monitors combined with accelerometers and sensory taste perception tests were used. Biological sampling, physical examinations, sensory taste perception tests, parental questionnaire and medical interview required only minor amendments, whereas physical fitness tests required major adaptations. Callipers for skinfold measurements were favoured over ultrasonography, as the latter showed only a low-to-modest agreement with calliper measurements (correlation coefficients of r=-0.22 and r=0.67 for all children). The combination of accelerometers with heart rate monitors was feasible in school children only. Implementation of the computer-based 24-h dietary recall required a complex and intensive developmental stage. It was combined with the assessment of school meals, which was changed after the pretest from portion weighing to the more feasible observation of the consumed portion size per child. The inclusion of heel ultrasonometry as an indicator of bone stiffness was the most important amendment after the pretest. Feasibility and acceptability of all procedures had to be balanced against their scientific value. Extensive pretesting, training and subsequent refinement of the methods were necessary to assess the feasibility of all instruments and procedures in routine fieldwork and to exchange or modify procedures that would otherwise give invalid or misleading results.
Kalkan, E.; Kwong, N.
2012-01-01
The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case in the central United States) or when high-intensity records are needed (as is the case in San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records such that the scaled records provide accurate and efficient estimates of “true” median structural responses. The adjective “accurate” refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective “efficient” refers to the record-to-record variability of responses. In this paper, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing Ordinary Standard bridges typical of reinforced concrete bridge construction in California. These bridges are the single-bent overpass, multi-span bridge, curved bridge, and skew bridge. As compared with benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the EDPs. Thus, it is a useful tool for scaling ground motions as input to nonlinear RHAs of Ordinary Standard bridges.
Garber, Sarah T; Karsy, Michael; Kestle, John R W; Siddiqi, Faizi; Spanos, Stephen P; Riva-Cambrin, Jay
2017-10-01
Neurosurgical techniques for repair of sagittal synostosis include total cranial vault (TCV) reconstruction, open sagittal strip (OSS) craniectomy, and endoscopic strip (ES) craniectomy. To evaluate outcomes and cost associated with these 3 techniques. Via retrospective chart review with waiver of informed consent, the last consecutive 100 patients with sagittal synostosis who underwent each of the 3 surgical correction techniques before June 30, 2013, were identified. Clinical, operative, and process of care variables and their associated specific charges were analyzed along with overall charge. The study included 300 total patients. ES patients had fewer transfusion requirements (13% vs 83%, P < .001) than TCV patients, fewer days in intensive care (0.3 vs 1.3, P < .001), and a shorter overall hospital stay (1.8 vs 4.2 d, P < .001), and they required fewer revisions (1% vs 6%, P = .05). The mean charge for the endoscopic procedure was $21 203, whereas the mean charge for the TCV reconstruction was $45 078 (P < .001). ES patients had more preoperative computed tomography scans (66% vs 44%, P = .003) than OSS patients, shorter operative times (68 vs 111 min, P < .001), and required fewer revision procedures (1% vs 8%, P < .001). The mean charge for the endoscopic procedure was $21 203 vs $20 535 for the OSS procedure (P = .62). The ES craniectomy for sagittal synostosis appeared to have less morbidity and a potential cost savings compared with the TCV reconstruction. The charges were similar to those incurred with OSS craniectomy, but patients had a shorter length of stay and fewer revisions. Copyright © 2017 by the Congress of Neurological Surgeons
Fiber Optic Sensors for Temperature Monitoring during Thermal Treatments: An Overview
Schena, Emiliano; Tosi, Daniele; Saccomandi, Paola; Lewis, Elfed; Kim, Taesung
2016-01-01
During recent decades, minimally invasive thermal treatments (i.e., Radiofrequency ablation, Laser ablation, Microwave ablation, High Intensity Focused Ultrasound ablation, and Cryo-ablation) have gained widespread recognition in the field of tumor removal. These techniques induce a localized temperature increase or decrease to remove the tumor while the surrounding healthy tissue remains intact. An accurate measurement of tissue temperature may be particularly beneficial to improve treatment outcomes, because it can be used as a clear end-point to achieve complete tumor ablation and minimize recurrence. Among the several thermometric techniques used in this field, fiber optic sensors (FOSs) have several attractive features: high flexibility and small size of both sensor and cabling, allowing insertion of FOSs within deep-seated tissue; metrological characteristics, such as accuracy (better than 1 °C), sensitivity (e.g., 10 pm·°C−1 for Fiber Bragg Gratings), and frequency response (hundreds of kHz), are adequate for this application; immunity to electromagnetic interference allows the use of FOSs during Magnetic Resonance- or Computed Tomography-guided thermal procedures. In this review the current status of the most used FOSs for temperature monitoring during thermal procedure (e.g., fiber Bragg Grating sensors; fluoroptic sensors) is presented, with emphasis placed on their working principles and metrological characteristics. The essential physics of the common ablation techniques are included to explain the advantages of using FOSs during these procedures. PMID:27455273
Zidan, Ihab; Fayed, Ahmed Abdelaziz; Elwany, Amr
2018-06-26
Percutaneous vertebroplasty (PV) is a minimally invasive procedure designed to treat various spinal pathologies. The maximum number of levels to be injected at one setting is still debatable. This study was done to evaluate the usefulness and safety of multilevel PV (more than three vertebrae) in management of osteoporotic fractures. This prospective study was carried out on consecutive 40 patients with osteoporotic fractures who had been operated for multilevel PV (more than three levels). There were 28 females and 12 males and their ages ranged from 60 to 85 years with mean age of 72.5 years. We had injected 194 vertebrae in those 40 patients (four levels in 16 patients, five levels in 14 patients, and six levels in 10 patients). Visual analogue scale (VAS) was used for pain intensity measurement and plain X-ray films and computed tomography scan were used for radiological assessment. The mean follow-up period was 21.7 months (range, 12-40). Asymptomatic bone cement leakage has occurred in 12 patients (30%) in the present study. Symptomatic pulmonary embolism was observed in one patient. Significant improvement of pain was recorded immediate postoperative in 36 patients (90%). Multilevel PV for the treatment of osteoporotic fractures is a safe and successful procedure that can significantly reduce pain and improve patient's condition without a significant morbidity. It is considered a cost effective procedure allowing a rapid restoration of patient mobility.
Analytic H I-to-H2 Photodissociation Transition Profiles
NASA Astrophysics Data System (ADS)
Bialy, Shmuel; Sternberg, Amiel
2016-05-01
We present a simple analytic procedure for generating atomic (H I) to molecular ({{{H}}}2) density profiles for optically thick hydrogen gas clouds illuminated by far-ultraviolet radiation fields. Our procedure is based on the analytic theory for the structure of one-dimensional H I/{{{H}}}2 photon-dominated regions, presented by Sternberg et al. Depth-dependent atomic and molecular density fractions may be computed for arbitrary gas density, far-ultraviolet field intensity, and the metallicity-dependent H2 formation rate coefficient, and dust absorption cross section in the Lyman-Werner photodissociation band. We use our procedure to generate a set of {{H}} {{I}}{-}{to}{-}{{{H}}}2 transition profiles for a wide range of conditions, from the weak- to strong-field limits, and from super-solar down to low metallicities. We show that if presented as functions of dust optical depth, the {{H}} {{I}} and {{{H}}}2 density profiles depend primarily on the Sternberg “α G parameter” (dimensionless) that determines the dust optical depth associated with the total photodissociated {{H}} {{I}} column. We derive a universal analytic formula for the {{H}} {{I}}{-}{to}{-}{{{H}}}2 transition points as a function of just α G. Our formula will be useful for interpreting emission-line observations of H I/{{{H}}}2 interfaces, for estimating star formation thresholds, and for sub-grid components in hydrodynamics simulations.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262
Addressing the computational cost of large EIT solutions.
Boyle, Alistair; Borsic, Andrea; Adler, Andy
2012-05-01
Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
Monitoring techniques and alarm procedures for CMS services and sites in WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.
2012-01-01
The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS, the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operatingmore » worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.« less
A python module to normalize microarray data by the quantile adjustment method.
Baber, Ibrahima; Tamby, Jean Philippe; Manoukis, Nicholas C; Sangaré, Djibril; Doumbia, Seydou; Traoré, Sekou F; Maiga, Mohamed S; Dembélé, Doulaye
2011-06-01
Microarray technology is widely used for gene expression research targeting the development of new drug treatments. In the case of a two-color microarray, the process starts with labeling DNA samples with fluorescent markers (cyanine 635 or Cy5 and cyanine 532 or Cy3), then mixing and hybridizing them on a chemically treated glass printed with probes, or fragments of genes. The level of hybridization between a strand of labeled DNA and a probe present on the array is measured by scanning the fluorescence of spots in order to quantify the expression based on the quality and number of pixels for each spot. The intensity data generated from these scans are subject to errors due to differences in fluorescence efficiency between Cy5 and Cy3, as well as variation in human handling and quality of the sample. Consequently, data have to be normalized to correct for variations which are not related to the biological phenomena under investigation. Among many existing normalization procedures, we have implemented the quantile adjustment method using the python computer language, and produced a module which can be run via an HTML dynamic form. This module is composed of different functions for data files reading, intensity and ratio computations and visualization. The current version of the HTML form allows the user to visualize the data before and after normalization. It also gives the option to subtract background noise before normalizing the data. The output results of this module are in agreement with the results of other normalization tools. Published by Elsevier B.V.
Resampling probability values for weighted kappa with multiple raters.
Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E
2008-04-01
A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
10 CFR Appendix II to Part 504 - Fuel Price Computation
Code of Federal Regulations, 2010 CFR
2010-01-01
... 504—Fuel Price Computation (a) Introduction. This appendix provides the equations and parameters... inflation indices must follow standard statistical procedures and must be fully documented within the... the weighted average fuel price must follow standard statistical procedures and be fully documented...
Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.
Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian
2009-04-01
Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic.
de Lima, Erimara Dall'Agnol; Fleck, Caren Schlottefeld; Borges, Januário José Vieira; Condessa, Robledo Leal; Vieira, Sílvia Regina Rios
2013-01-01
Objective To evaluate the effectiveness of an educational intervention on healthcare professionals' adherence to the technical recommendations for tracheobronchial aspiration in intensive care unit patients. Methods A quasi-experimental study was performed to evaluate intensive care unit professionals' adherence to the tracheobronchial aspiration technical recommendations in intensive care unit patients both before and after a theoretical and practical educational intervention. Comparisons were performed using the chi-square test, and the significance level was set to p<0.05. Results A total of 124 procedures, pre- and post-intervention, were observed. Increased adherence was observed in the following actions: the use of personal protective equipment (p=0.01); precaution when opening the catheter package (p<0.001); the use of a sterile glove on the dominant hand to remove the catheter (p=0.003); the contact of the sterile glove with the catheter only (p<0.001); the execution of circular movements during the catheter removal (p<0.001); wrapping the catheter in the sterile glove at the end of the procedure (p=0.003); the use of distilled water, opened at the start of the procedure, to wash the connection latex (p=0.002); the disposal of the leftover distilled water at the end of the procedure (p<0.001); and the performance of the aspiration technique procedures (p<0.001). Conclusion There was a low adherence by health professionals to the preventive measures against hospital infection, indicating the need to implement educational strategies. The educational intervention used was shown to be effective in increasing adherence to the technical recommendations for tracheobronchial aspiration. PMID:23917976
Benefits of computer screen-based simulation in learning cardiac arrest procedures.
Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc
2010-07-01
What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali
2011-04-15
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (''intensity''). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specificmore » intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5{+-}2.8) mm compared to (3.5{+-}3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.« less
How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing
NASA Astrophysics Data System (ADS)
Decyk, V. K.; Dauger, D. E.
We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-incell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the main stream of computing.
ERIC Educational Resources Information Center
Purrazzella, Kimberly; Mechling, Linda C.
2013-01-01
The study employed a multiple probe design to investigate the effects of computer-based instruction (CBI) and a forward chaining procedure to teach manual spelling of words to three young adults with moderate intellectual disability in a small group arrangement. The computer-based program included a tablet PC whereby students wrote words directly…
Computer-Aided Design Of Turbine Blades And Vanes
NASA Technical Reports Server (NTRS)
Hsu, Wayne Q.
1988-01-01
Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.
NASA Technical Reports Server (NTRS)
Huffman, S.
1977-01-01
Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.
User's manual for a computer program for simulating intensively managed allowable cut.
Robert W. Sassaman; Ed Holt; Karl Bergsvik
1972-01-01
Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....
PNNLs Data Intensive Computing research battles Homeland Security threats
David Thurman; Joe Kielman; Katherine Wolf; David Atkinson
2018-05-11
The Pacific Northwest National Laboratorys (PNNL's) approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architecture, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.
PNNL pushing scientific discovery through data intensive computing breakthroughs
Deborah Gracio; David Koppenaal; Ruby Leung
2018-05-18
The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.
ERIC Educational Resources Information Center
Ramsberger, Gail; Marie, Basem
2007-01-01
Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Dudu; Yang, Sichun; Lu, Lanyuan
2016-06-20
Structure modellingviasmall-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested againstmore » other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by theqrange of the SAXS data) become necessary for effective structure modelling.« less
Procedural Quantum Programming
NASA Astrophysics Data System (ADS)
Ömer, Bernhard
2002-09-01
While classical computing science has developed a variety of methods and programming languages around the concept of the universal computer, the typical description of quantum algorithms still uses a purely mathematical, non-constructive formalism which makes no difference between a hydrogen atom and a quantum computer. This paper investigates, how the concept of procedural programming languages, the most widely used classical formalism for describing and implementing algorithms, can be adopted to the field of quantum computing, and how non-classical features like the reversibility of unitary transformations, the non-observability of quantum states or the lack of copy and erase operations can be reflected semantically. It introduces the key concepts of procedural quantum programming (hybrid target architecture, operator hierarchy, quantum data types, memory management, etc.) and presents the experimental language QCL, which implements these principles.
Establishing intensively cultured hybrid poplar plantations for fuel and fiber.
Edward Hansen; Lincoln Moore; Daniel Netzer; Michael Ostry; Howard Phipps; Jaroslav Zavitkovski
1983-01-01
This paper describes a step-by-step procedure for establishing commercial size intensively cultured plantations of hybrid poplar and summarizes the state-of-knowledge as developed during 10 years of field research at Rhinelander, Wisconsin.
The cost of open heart surgery in Nigeria.
Falase, Bode; Sanusi, Michael; Majekodunmi, Adetinuwe; Ajose, Ifeoluwa; Idowu, Ariyo; Oke, David
2013-01-01
Open Heart Surgery (OHS) is not commonly practiced in Nigeria and most patients who require OHS are referred abroad. There has recently been a resurgence of interest in establishing OHS services in Nigeria but the cost is unknown. The aim of this study was to determine the direct cost of OHS procedures in Nigeria. The study was performed prospectively from November to December 2011. Three concurrent operations were selected as being representative of the scope of surgery offered at our institution. These procedures were Atrial Septal Defect (ASD) Repair, Off Pump Coronary Artery Bypass Grafting (OPCAB) and Mitral Valve Replacement (MVR). Cost categories contributing to direct costs of OHS (Investigations, Drugs, Perfusion, Theatre, Intensive Care, Honorarium and Hospital Stay) were tracked to determine the total direct cost for the 3 selected OHS procedures. ASD repair cost $ 6,230 (Drugs $600, Intensive Care $410, Investigations $955, Perfusion $1080, Theatre $1360, Honorarium $925, Hospital Stay $900). OPCAB cost $8,430 (Drugs $740, Intensive Care $625, Investigations $3,020, Perfusion $915, Theatre $1305, Honorarium $925, Hospital Stay $900). MVR with a bioprosthetic valve cost $11,200 (Drugs $1200, Intensive Care $500, Investigations $3040, Perfusion $1100, Theatre $3,535, Honorarium $925, Hospital Stay $900). The direct cost of OHS in Nigeria currently ranges between $6,230 and $11,200. These costs compare favorably with the cost of OHS abroad and can serve as a financial incentive to patients, sponsors and stakeholders to have OHS procedures done in Nigeria.
Nonlinear system guidance in the presence of transmission zero dynamics
NASA Technical Reports Server (NTRS)
Meyer, G.; Hunt, L. R.; Su, R.
1995-01-01
An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.
Federico, Alejandro; Kaufmann, Guillermo H
2005-05-10
We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.
Correction factors for on-line microprobe analysis of multielement alloy systems
NASA Technical Reports Server (NTRS)
Unnam, J.; Tenney, D. R.; Brewer, W. D.
1977-01-01
An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
14 CFR 13.85 - Filing, service and computation of time.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...
14 CFR 13.85 - Filing, service and computation of time.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...
14 CFR 13.85 - Filing, service and computation of time.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...
14 CFR 13.85 - Filing, service and computation of time.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...
14 CFR 13.85 - Filing, service and computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...
Critical care procedure logging using handheld computers
Carlos Martinez-Motta, J; Walker, Robin; Stewart, Thomas E; Granton, John; Abrahamson, Simon; Lapinsky, Stephen E
2004-01-01
Introduction We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program. Methods Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys. Results All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals. Conclusion A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation. PMID:15469577
Frickmann, H; Bachert, S; Warnke, P; Podbielski, A
2018-03-01
Preanalytic aspects can make results of hygiene studies difficult to compare. Efficacy of surface disinfection was assessed with an evaluated swabbing procedure. A validated microbial screening of surfaces was performed in the patients' environment and from hands of healthcare workers on two intensive care units (ICUs) prior to and after a standardized disinfection procedure. From a pure culture, the recovery rate of the swabs for Staphylococcus aureus was 35%-64% and dropped to 0%-22% from a mixed culture with 10-times more Staphylococcus epidermidis than S. aureus. Microbial surface loads 30 min before and after the cleaning procedures were indistinguishable. The quality-ensured screening procedure proved that adequate hygiene procedures are associated with a low overall colonization of surfaces and skin of healthcare workers. Unchanged microbial loads before and after surface disinfection demonstrated the low additional impact of this procedure in the endemic situation when the pathogen load prior to surface disinfection is already low. Based on a validated screening system ensuring the interpretability and reliability of the results, the study confirms the efficiency of combined hand and surface hygiene procedures to guarantee low rates of bacterial colonization. © 2017 The Society for Applied Microbiology.
Impedance computations and beam-based measurements: A problem of discrepancy
NASA Astrophysics Data System (ADS)
Smaluk, Victor
2018-04-01
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
Unsteady thermal blooming of intense laser beams
NASA Astrophysics Data System (ADS)
Ulrich, J. T.; Ulrich, P. B.
1980-01-01
A four dimensional (three space plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases are capable of solution and no assumption of a steady state need be made. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximation used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.
MSFC crack growth analysis computer program, version 2 (users manual)
NASA Technical Reports Server (NTRS)
Creager, M.
1976-01-01
An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.
Computation of glint, glare, and solar irradiance distribution
Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh
2017-08-01
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
Computation of glint, glare, and solar irradiance distribution
Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh
2015-08-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
Nonstimulated rabbit phonation model: Cricothyroid approximation.
Novaleski, Carolyn K; Kojima, Tsuyoshi; Chang, Siyuan; Luo, Haoxiang; Valenzuela, Carla V; Rousseau, Bernard
2016-07-01
To describe a nonstimulated in vivo rabbit phonation model using an Isshiki type IV thyroplasty and uninterrupted humidified glottal airflow to produce sustained audible phonation. Prospective animal study. Six New Zealand white breeder rabbits underwent a surgical procedure involving an Isshiki type IV thyroplasty and continuous airflow delivered to the glottis. Phonatory parameters were examined using high-speed laryngeal imaging and acoustic and aerodynamic analysis. Following the procedure, airflow was discontinued, and sutures remained in place to maintain the phonatory glottal configuration for microimaging using a 9.4 Tesla imaging system. High-speed laryngeal imaging revealed sustained vocal fold oscillation throughout the experimental procedure. Analysis of acoustic signals revealed a mean vocal intensity of 61 dB and fundamental frequency of 590 Hz. Aerodynamic analysis revealed a mean airflow rate of 85.91 mL/s and subglottal pressure of 9 cm H2 O. Following the procedure, microimaging revealed that the in vivo phonatory glottal configuration was maintained, providing consistency between the experimental and postexperimental laryngeal geometry. The latter provides a significant milestone that is necessary for geometric reconstruction and to allow for validation of computational simulations against the in vivo rabbit preparation. We demonstrate a nonstimulated in vivo phonation preparation using an Isshiki type IV thyroplasty and continuous humidified glottal airflow in a rabbit animal model. This preparation elicits sustained vocal fold vibration and phonatory measures that are consistent with our laboratory's prior work using direct neuromuscular stimulation for evoked phonation. N/A. Laryngoscope, 126:1589-1594, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Late complications of percutaneous tracheostomy using the balloon dilation technique.
Araujo, J B; Añón, J M; García de Lorenzo, A; García-Fernandez, A M; Esparcia, M; Adán, J; Relanzon, S; Quiles, D; de Paz, V; Molina, A
2018-04-01
The purpose of this study was to determine the late complications in critically ill patients requiring percutaneous tracheostomy (PT) using the balloon dilation technique. A prospective, observational cohort study was carried out. Two medical-surgical intensive care units (ICU). All mechanically ventilated adult patients consecutively admitted to the ICU with an indication of tracheostomy. All patients underwent PT according to the Ciaglia Blue Dolphin ® method, with endoscopic guidance. Survivors were interviewed and evaluated by fiberoptic laryngotracheoscopy and tracheal computed tomography at least 6 months after decannulation. Intraoperative, postoperative and long-term complications and mortality (in-ICU, in-hospital) were recorded. A total of 114 patients were included. The most frequent perioperative complication was minor bleeding (n=20) and difficult cannula insertion (n=19). Two patients had severe perioperative complications (1.7%) (major bleeding and inability to complete de procedure in one case and false passage and desaturation in the other). All survivors (n=52) were evaluated 211±28 days after decannulation. None of the patients had symptoms. Fiberoptic laryngotracheoscopy and computed tomography showed severe tracheal stenosis (>50%) in 2patients (3.7%), both with a cannulation period of over 100 days. Percutaneous tracheostomy using the Ciaglia Blue Dolphin ® technique with an endoscopic guide is a safe procedure. Severe tracheal stenosis is a late complication which although infrequent, must be taken into account due to its lack of clinical expressiveness. Evaluation should be considered in those tracheostomized critical patients who have been cannulated for a long time. Copyright © 2017 Elsevier España, S.L.U. y SEMNIM. All rights reserved.
Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan
2016-08-01
A simple noninterferometric optical probe is developed to estimate wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical optics, eikonal approximation to the distorted wavefront, a bilinear approximation to it is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, either with a windowed or global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme, for a full quantitative recovery of density distribution in the shock around the model, through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with small space-bandwidth product (SBP) the FT inversion gave accurate results with computational efficiency; computation-intensive WFT was needed for similar results when dealing with larger SBP wavefronts.
NASA Astrophysics Data System (ADS)
Gillies, Derek J.; Gardi, Lori; Zhao, Ren; Fenster, Aaron
2017-03-01
During image-guided prostate biopsy, needles are targeted at suspicious tissues to obtain specimens that are later examined histologically for cancer. Patient motion causes inaccuracies when using MR-transrectal ultrasound (TRUS) image fusion approaches used to augment the conventional biopsy procedure. Motion compensation using a single, user initiated correction can be performed to temporarily compensate for prostate motion, but a real-time continuous registration offers an improvement to clinical workflow by reducing user interaction and procedure time. An automatic motion compensation method, approaching the frame rate of a TRUS-guided system, has been developed for use during fusion-based prostate biopsy to improve image guidance. 2D and 3D TRUS images of a prostate phantom were registered using an intensity based algorithm utilizing normalized cross-correlation and Powell's method for optimization with user initiated and continuous registration techniques. The user initiated correction performed with observed computation times of 78 ± 35 ms, 74 ± 28 ms, and 113 ± 49 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.5 ± 0.5 mm, 1.5 ± 1.4 mm, and 1.5 ± 1.6°. The continuous correction performed significantly faster (p < 0.05) than the user initiated method, with observed computation times of 31 ± 4 ms, 32 ± 4 ms, and 31 ± 6 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.2 ± 0.2 mm, 0.6 ± 0.5 mm, and 0.8 ± 0.4°.
Continuing quality improvement procedures for a clinical PACS.
Andriole, K P; Gould, R G; Avrin, D E; Bazzill, T M; Yin, L; Arenson, R L
1998-08-01
The University of California at San Francisco (USCF) Department of Radiology currently has a clinically operational picture archiving and communication system (PACS) that is thirty-five percent filmless, with the goal of becoming seventy-five percent filmless within the year. The design and implementation of the clinical PACS has been a collaborative effort between an academic research laboratory and a commercial vendor partner. Images are digitally acquired from three computed radiography (CR) scanners, five computed tomography (CT) scanners, five magnetic resonance (MR) imagers, three digital fluoroscopic rooms, an ultrasound mini-PACS and a nuclear medicine mini-PACS. The DICOM (Digital Imaging and Communications in Medicine) standard communications protocol and image format is adhered to throughout the PACS. Images are archived in hierarchical staged fashion, on a RAID (redundant array of inexpensive disks) and on magneto-optical disk jukeboxes. The clinical PACS uses an object-oriented Oracle SQL (systems query language) database, and interfaces to the Radiology Information System using the HL7 (Health Languages 7) standard. Components are networked using a combination of switched and fast ethernet, and ATM (asynchronous transfer mode), all over fiber optics. The wide area network links six UCSF sites in San Francisco. A combination of high and medium resolution dual-monitor display stations have been placed throughout the Department of Radiology, the Emergency Department (ED) and Intensive Care Units (ICU). A continuing quality improvement (CQI) committee has been formed to facilitate the PACS installation and training, workflow modifications, quality assurance and clinical acceptance. This committee includes radiologists at all levels (resident, fellow, attending), radiology technologists, film library personnel, ED and ICU clinician end-users, and PACS team members. The CQI committee has proved vital in the creation of new management procedures, providing a means for user feedback and education, and contributing to the overall acceptance of, and user satisfaction with the system. Well developed CQI procedures have been essential to the successful clinical operation of the PACS as UCSF Radiology moves toward a filmless department.
An updated climatology of explosive cyclones using alternative measures of cyclone intensity
NASA Astrophysics Data System (ADS)
Hanley, J.; Caballero, R.
2009-04-01
Using a novel cyclone tracking and identification method, we compute a climatology of explosively intensifying cyclones or ‘bombs' using the ERA-40 and ERA-Interim datasets. Traditionally, ‘bombs' have been identified using a central pressure deepening rate criterion (Sanders and Gyakum, 1980). We investigate alternative methods of capturing such extreme cyclones. These methods include using the maximum wind contained within the cyclone, and using a potential vorticity column measure within such systems, as a measure of intensity. Using the different measures of cyclone intensity, we construct and intercompare maps of peak cyclone intensity. We also compute peak intensity probability distributions, and assess the evidence for the bi-modal distribution found by Roebber (1984). Finally, we address the question of the relationship between storm intensification rate and storm destructiveness: are ‘bombs' the most destructive storms?
Determination of vehicle density from traffic images at day and nighttime
NASA Astrophysics Data System (ADS)
Mehrübeoğlu, Mehrübe; McLauchlan, Lifford
2007-02-01
In this paper we extend our previous work to address vehicle differentiation in traffic density computations1. The main goal of this work is to create vehicle density history for given roads under different weather or light conditions and at different times of the day. Vehicle differentiation is important to account for connected or otherwise long vehicles, such as trucks or tankers, which lead to over-counting with the original algorithm. Average vehicle size in pixels, given the magnification within the field of view for a particular camera, is used to separate regular cars and long vehicles. A separate algorithm and procedure have been developed to determine traffic density after dark when the vehicle headlights are turned on. Nighttime vehicle recognition utilizes blob analysis based on head/taillight images. The high intensity of vehicle lights are identified in binary images for nighttime vehicle detection. The stationary traffic image frames are downloaded from the internet as they are updated. The procedures are implemented in MATLAB. The results of both nighttime traffic density and daytime long vehicle identification algorithms are described in this paper. The determination of nighttime traffic density, and identification of long vehicles at daytime are improvements over the original work1.
NASA Astrophysics Data System (ADS)
Shcherbakov, V. P.; Sycheva, N. K.; Gribov, S. K.
2017-09-01
The results of the Thellier-Coe experiments on paleointensity determination on the samples which contain chemical remanent magnetization (CRM) created by thermal annealing of titanomagnetites are reported. The results of the experiments are compared with the theoretical notions. For this purpose, Monte Carlo simulation of the process of CRM acquisition in the system of single-domain interacting particles was carried out; the paleointensity determination method based on the Thellier-Coe procedure was modeled; and the degree of paleointensity underestimation was quantitatively estimated based on the experimental data and on the numerical results. Both the experimental investigations and computer modeling suggest the following main conclusion: all the Arai-Nagata diagrams for CRM in the high-temperature area (in some cases up to the Curie temperature T c) contain a relatively long quasi-linear interval on which it is possible to estimate the slope coefficient k and, therefore, the paleointensity. Hence, if chemical magnetization (or remagnetization) took place in the course of the magnetomineralogical transformations of titanomagnetite- bearing igneous rocks during long-lasting cooling or during repeated heatings, it can lead to incorrect results in determining the intensity of the geomagnetic field in the geological past.
Klar, Fabian; Urbanetz, Nora Anne
2016-10-01
Solubility parameters of HPMCAS have not yet been investigated intensively. On this account, total and three-dimensional solubility parameters of HPMCAS were determined by using different experimental as well as computational methods. In addition, solubility properties of HPMCAS in a huge number of solvents were tested and a Teas plot for HPMCAS was created. The total solubility parameter of about 24 MPa(0.5) was confirmed by various procedures and compared with values of plasticizers. Twenty common pharmaceutical plasticizers were evaluated in terms of their suitability for supporting film formation of HPMCAS under dry coating conditions. Therefore, glass transition temperatures of mixtures of polymer and plasticizers were inspected and film formation of potential ones was further investigated in dry coating of pellets. Contact angles of plasticizers on HPMCAS were determined in order to give a hint of achievable coating efficiencies in dry coating, but none was found to spread on HPMCAS. A few common substances, e.g. dimethyl phthalate, glycerol monocaprylate, and polyethylene glycol 400, enabled plasticization of HPMCAS; however, only triethyl citrate and triacetin were found to be suitable for use in dry coating. Addition of acetylated monoglycerides to triacetin increased coating efficiency, which was likewise previously demonstrated for triethyl citrate.
Toward a standard line for use in multibeam echo sounder calibration
NASA Astrophysics Data System (ADS)
Weber, Thomas C.; Rice, Glen; Smith, Michael
2018-06-01
A procedure is suggested in which a relative calibration for the intensity output of a multibeam echo sounder (MBES) can be performed. This procedure identifies a common survey line (i.e., a standard line), over which acoustic backscatter from the seafloor is collected with multiple MBES systems or by the same system multiple times. A location on the standard line which exhibits temporal stability in its seafloor backscatter response is used to bring the intensity output of the multiple MBES systems to a common reference. This relative calibration procedure has utility for MBES users wishing to generate an aggregate seafloor backscatter mosaic using multiple systems, revisiting an area to detect changes in substrate type, and comparing substrate types in the same general area but with different systems or different system settings. The calibration procedure is demonstrated using three different MBES systems over 3 different years in New Castle, NH, USA.
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and∕or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas
2015-09-01
Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, togethermore » with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.« less
Nurturing Students' Problem-Solving Skills and Engagement in Computer-Mediated Communications (CMC)
ERIC Educational Resources Information Center
Chen, Ching-Huei
2014-01-01
The present study sought to investigate how to enhance students' well- and ill-structured problem-solving skills and increase productive engagement in computer-mediated communication with the assistance of external prompts, namely procedural and reflection. Thirty-three graduate students were randomly assigned to two conditions: procedural and…
Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses
ERIC Educational Resources Information Center
Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2012 CFR
2012-01-01
... paragraph of this section shall be applied as indicated. The procedure for computing Shuttle load factor, charge factor, and flight price for Spacelab payloads replaces the procedure contained in the Shuttle policy. (2) Shuttle charge factors as derived herein apply to the standard mission destination of 160 nmi...
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2013 CFR
2013-01-01
... paragraph of this section shall be applied as indicated. The procedure for computing Shuttle load factor, charge factor, and flight price for Spacelab payloads replaces the procedure contained in the Shuttle policy. (2) Shuttle charge factors as derived herein apply to the standard mission destination of 160 nmi...
14 CFR § 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2014 CFR
2014-01-01
... paragraph of this section shall be applied as indicated. The procedure for computing Shuttle load factor, charge factor, and flight price for Spacelab payloads replaces the procedure contained in the Shuttle policy. (2) Shuttle charge factors as derived herein apply to the standard mission destination of 160 nmi...
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2011 CFR
2011-01-01
... paragraph of this section shall be applied as indicated. The procedure for computing Shuttle load factor, charge factor, and flight price for Spacelab payloads replaces the procedure contained in the Shuttle policy. (2) Shuttle charge factors as derived herein apply to the standard mission destination of 160 nmi...
DOT National Transportation Integrated Search
2007-08-01
This research was conducted to develop and test a personal computer-based study procedure (PCSP) with secondary task loading for use in human factors laboratory experiments in lieu of a driving simulator to test reading time and understanding of traf...
78 FR 5075 - Energy Conservation Program: Test Procedure for Set-Top Boxes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
... necessary procedures. Please also note that those wishing to bring laptops into the Forrestal Building will be required to obtain a property pass. Visitors should avoid bringing laptops, or allow an extra 45... (MVPD) could inadvertently bring tablet computers, computers, gaming consoles, and smartphones under...
NASA Technical Reports Server (NTRS)
Denn, F. M.
1978-01-01
Geometric input plotting to the VORLAX computer program by means of an interactive remote terminal is reported. The software consists of a procedure file and two programs. The programs and procedure file are described and a sample execution is presented.
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes in the CDC 6600 computer.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in a high gradient region or the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational costs. A detailed description is given of the enrichment and coarsening procedures and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern-recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino sequence.
Wagner, Richard J.; Mattraw, Harold C.; Ritz, George F.; Smith, Brett A.
2000-01-01
The U.S. Geological Survey uses continuous water-quality monitors to assess variations in the quality of the Nation's surface water. A common system configuration for data collection is the four-parameter water-quality monitoring system, which collects temperature, specific conductance, dissolved oxygen, and pH data, although systems can be configured to measure other properties such as turbidity or chlorophyll. The sensors that are used to measure these water properties require careful field observation, cleaning, and calibration procedures, as well as thorough procedures for the computation and publication of final records. Data from sensors can be used in conjunction with collected samples and chemical analyses to estimate chemical loads. This report provides guidelines for site-selection considerations, sensor test methods, field procedures, error correction, data computation, and review and publication processes. These procedures have evolved over the past three decades, and the process continues to evolve with newer technologies.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
Stability assessment of structures under earthquake hazard through GRID technology
NASA Astrophysics Data System (ADS)
Prieto Castrillo, F.; Boton Fernandez, M.
2009-04-01
This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis; preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaboration analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database etc.) The dynamical model is described by a set of ordinary differential equations (ODE's) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The Metadata of these records is also stored in the GRID federated database. This Metadata contains both relevant information about the earthquake (as it is usual in a seismic repository) and also the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODE's over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding Metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically-based algorithm from the available Metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the on-going initiatives to build a European earthquake record database. In this context, Grid enables collaboration analysis over shared seismic data and results among different institutions.
Computer-oriented emissions inventory procedure for urban and industrial sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Runca, E.; Zannetti, P.; Melli, P.
1978-06-01
A knowledge of the rate of emission of atmospheric pollutants is essential for the enforcement of air quality control policies. A computer-oriented emission inventory procedure has been developed and applied to Venice, Italy. By using optically readable forms this procedure avoids many of the errors inherent in the transcription and punching steps typical of approaches applied so far. Moreover, this procedure allows an easy updating of the inventory. Emission patterns of SO/sub 2/ in the area of Venice showed that the total urban emissions were about 6% of those emitted by industrial sources.
NASA Astrophysics Data System (ADS)
Trani, L.; Spinuso, A.; Galea, M.; Atkinson, M.; Van Eck, T.; Vilotte, J.
2011-12-01
The data bonanza generated by today's digital revolution is forcing scientists to rethink their methodologies and working practices. Traditional approaches to knowledge discovery are pushed to their limit and struggle to keep apace with the data flows produced by modern systems. This work shows how the ADMIRE data-intensive architecture supports seismologists by enabling them to focus on their scientific goals and questions, abstracting away the underlying technology platform that enacts their data integration and analysis tasks. ADMIRE accomplishes this partly by recognizing 3 different types of experts that require clearly defined interfaces between their interaction: the domain expert who is the application specialist, the data-analysis expert who is a specialist in extracting information from data, and the data-intensive engineer who develops the infrastructure for data-intensive computation. In order to provide a context in which each category of expert may flourish, ADMIRE uses a 3-level architecture: the upper - tool - level supports the work of both domain and data-analysis experts, housing an extensive and evolving set of portals, tools and development environments; the lower - enactment - level houses a large and dynamic community of providers delivering data and data-intensive enactment environments as an evolving infrastructure that supports all of the work underway in the upper layer. Most data-intensive engineers work here; the crucial innovation lies in the middle level, a gateway that is a tightly defined and stable interface through which the two diverse and dynamic upper and lower layers communicate. This is a minimal and simple protocol and language (DISPEL), ultimately to be controlled by standards, so that the upper and lower communities may invest, secure in the knowledge that changes in this interface will be carefully managed. We implemented a well-established procedure for processing seismic ambient noise on the prototype architecture. The primary goal was to evaluate its capabilities for large-scale integration and analysis of distributed data. A secondary goal was to gauge its potential and the added value that it might bring to the seismological community. Though still in its infant state, the architecture met the demands of our use case and promises to cater for our future requirements. We shall continue to develop its capabilities as part of an EU funded project VERCE - Virtual Earthquake and Seismology Research Community for Europe. VERCE aims to significantly advance our understanding of the Earth in order to aid society in its management of natural resources and hazards. Its strategy is to enable seismologists to fully exploit the under-utilized wealth of seismic data, and key to this is a data-intensive computation framework adapted to the scale and diversity of the community. This is a first step in building a data-intensive highway for geoscientists, smoothing their travel from the primary sources of data to new insights and rapid delivery of actionable information.
Campion, Thomas R.; Waitman, Lemuel R.; May, Addison K.; Ozdas, Asli; Lorenzi, Nancy M.; Gadd, Cynthia S.
2009-01-01
Introduction: Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. Results: This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Discussion: Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. Conclusion: This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. PMID:19815452
Using GOMS models and hypertext to create representations of medical procedures for online display
NASA Technical Reports Server (NTRS)
Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne
1991-01-01
This study investigated two methods to improve organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selecton rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS model are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between subjects design, including the following independent variables: procedure organization - GOMS model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studies the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect'. That is, when asked to do a free recall of a procedure, subjects who had studies a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.
Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; García, Sebastián Gimeno
2013-05-01
Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
Recursive Newton-Euler formulation of manipulator dynamics
NASA Technical Reports Server (NTRS)
Nasser, M. G.
1989-01-01
A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.
Solution of quadratic matrix equations for free vibration analysis of structures.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.
Efficient and robust relaxation procedures for multi-component mixtures including phase transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de
We consider a thermodynamic consistent multi-component model in multi-dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from efficiency and robustness resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further we introduce a novelmore » iterative method to treat the mass transfer for a three component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation procedures avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three component mixture providing a unique and admissible equilibrium state.« less
Johnson, Steven M.; Swanson, Robert B.
1994-01-01
Prototype stream-monitoring sites were operated during part of 1992 in the Central Nebraska Basins (CNBR) and three other study areas of the National Water-Quality Assessment (NAWQ) Program of the U.S. Geological Survey. Results from the prototype project provide information needed to operate a net- work of intensive fixed station stream-monitoring sites. This report evaluates operating procedures for two NAWQA prototype sites at Maple Creek near Nickerson and the Platte River at Louisville, eastern Nebraska. Each site was sampled intensively in the spring and late summer 1992, with less intensive sampling in midsummer. In addition, multiple samples were collected during two high- flow periods at the Maple Creek site--one early and the other late in the growing season. Water-samples analyses included determination of pesticides, nutrients, major ions, suspended sediment, and measurements of physical properties. Equipment and protocols for the water-quality sampling procedures were evaluated. Operation of the prototype stream- monitoring sites included development and comparison of onsite and laboratory sample-processing proce- dures. Onsite processing was labor intensive but allowed for immediate preservation of all sampled constituents. Laboratory processing required less field labor and decreased the risk of contamination, but allowed for no immediate preservation of the samples.
2013-01-01
Background Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. Methods A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Results Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). Conclusions The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users. PMID:23915209
The hack attack - Increasing computer system awareness of vulnerability threats
NASA Technical Reports Server (NTRS)
Quann, John; Belford, Peter
1987-01-01
The paper discusses the issue of electronic vulnerability of computer based systems supporting NASA Goddard Space Flight Center (GSFC) by unauthorized users. To test the security of the system and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The procedure increased the security consciousness of GSFC management to the electronic vulnerability of the system(s).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, Ahmed A., E-mail: asaleh@uow.edu.au
Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can be typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration.more » It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: •Sample tilting during X-ray pole figure measurement leads to intensity loss errors. •Texture-free reference samples are typically used to correct the pole figures. •An empirical correction procedure is proposed in the absence of reference samples. •The procedure relies on reference samples that pre-exist in any texture laboratory. •Experimentally and empirically corrected textures are in very good agreement.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dadfarnia, Mohsen; Nibur, Kevin A.; San Marchi, Christopher W.
2010-07-01
Threshold stress intensity factors were measured in high-pressure hydrogen gas for a variety of low alloy ferritic steels using both constant crack opening displacement and rising crack opening displacement procedures. The sustained load cracking procedures are generally consistent with those in ASME Article KD-10 of Section VIII Division 3 of the Boiler and Pressure Vessel Code, which was recently published to guide design of high-pressure hydrogen vessels. Three definitions of threshold were established for the two test methods: K{sub THi}* is the maximum applied stress intensity factor for which no crack extension was observed under constant displacement; K{sub THa} ismore » the stress intensity factor at the arrest position for a crack that extended under constant displacement; and K{sub JH} is the stress intensity factor at the onset of crack extension under rising displacement. The apparent crack initiation threshold under constant displacement, K{sub THi}*, and the crack arrest threshold, K{sub THa}, were both found to be non-conservative due to the hydrogen exposure and crack-tip deformation histories associated with typical procedures for sustained-load cracking tests under constant displacement. In contrast, K{sub JH}, which is measured under concurrent rising displacement and hydrogen gas exposure, provides a more conservative hydrogen-assisted fracture threshold that is relevant to structural components in which sub-critical crack extension is driven by internal hydrogen gas pressure.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nibur, Kevin A.
2010-11-01
Threshold stress intensity factors were measured in high-pressure hydrogen gas for a variety of low alloy ferritic steels using both constant crack opening displacement and rising crack opening displacement procedures. The sustained load cracking procedures are generally consistent with those in ASME Article KD-10 of Section VIII Division 3 of the Boiler and Pressure Vessel Code, which was recently published to guide design of high-pressure hydrogen vessels. Three definitions of threshold were established for the two test methods: K{sub THi}* is the maximum applied stress intensity factor for which no crack extension was observed under constant displacement; K{sub THa} ismore » the stress intensity factor at the arrest position for a crack that extended under constant displacement; and K{sub JH} is the stress intensity factor at the onset of crack extension under rising displacement. The apparent crack initiation threshold under constant displacement, K{sub THi}*, and the crack arrest threshold, K{sub THa}, were both found to be non-conservative due to the hydrogen exposure and crack-tip deformation histories associated with typical procedures for sustained-load cracking tests under constant displacement. In contrast, K{sub JH}, which is measured under concurrent rising displacement and hydrogen gas exposure, provides a more conservative hydrogen-assisted fracture threshold that is relevant to structural components in which sub-critical crack extension is driven by internal hydrogen gas pressure.« less
Classical conditioning without verbal suggestions elicits placebo analgesia and nocebo hyperalgesia
Bajcar, Elżbieta A.; Adamczyk, Wacław; Kicman, Paweł; Lisińska, Natalia; Świder, Karolina; Colloca, Luana
2017-01-01
The aim of this study was to examine the relationships among classical conditioning, expectancy, and fear in placebo analgesia and nocebo hyperalgesia. A total of 42 healthy volunteers were randomly assigned to three groups: placebo, nocebo, and control. They received 96 electrical stimuli, preceded by either orange or blue lights. A hidden conditioning procedure, in which participants were not informed about the meaning of coloured lights, was performed in the placebo and nocebo groups. Light of one colour was paired with pain stimuli of moderate intensity (control stimuli), and light of the other colour was paired with either nonpainful stimuli (in the placebo group) or painful stimuli of high intensity (in the nocebo group). In the control group, both colour lights were followed by control stimuli of moderate intensity without any conditioning procedure. Participants rated pain intensity, expectancy of pain intensity, and fear. In the testing phase, when both of the coloured lights were followed by identical moderate pain stimuli, we found a significant analgesic effect in the placebo group, and a significant hyperalgesic effect in the nocebo group. Neither expectancy nor fear ratings predicted placebo analgesia or nocebo hyperalgesia. It appears that a hidden conditioning procedure, without any explicit verbal suggestions, elicits placebo and nocebo effects, however we found no evidence that these effects are predicted by either expectancy or fear. These results suggest that classical conditioning may be a distinct mechanism for placebo and nocebo effects. PMID:28750001
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Murthy, Pappu L. N.; Chamis, Christos C.
1994-01-01
A computational simulation procedure is presented for nonlinear analyses which incorporates microstress redistribution due to progressive fracture in ceramic matrix composites. This procedure facilitates an accurate simulation of the stress-strain behavior of ceramic matrix composites up to failure. The nonlinearity in the material behavior is accounted for at the constituent (fiber/matrix/interphase) level. This computational procedure is a part of recent upgrades to CEMCAN (Ceramic Matrix Composite Analyzer) computer code. The fiber substructuring technique in CEMCAN is used to monitor the damage initiation and progression as the load increases. The room-temperature tensile stress-strain curves for SiC fiber reinforced reaction-bonded silicon nitride (RBSN) matrix unidirectional and angle-ply laminates are simulated and compared with experimentally observed stress-strain behavior. Comparison between the predicted stress/strain behavior and experimental stress/strain curves is good. Collectively the results demonstrate that CEMCAN computer code provides the user with an effective computational tool to simulate the behavior of ceramic matrix composites.
Gallo, Carlos; Pantin, Hilda; Villamar, Juan; Prado, Guillermo; Tapia, Maria; Ogihara, Mitsunori; Cruden, Gracelyn; Brown, C Hendricks
2015-09-01
Careful fidelity monitoring and feedback are critical to implementing effective interventions. A wide range of procedures exist to assess fidelity; most are derived from observational assessments (Schoenwald and Garland, Psycholog Assess 25:146-156, 2013). However, these fidelity measures are resource intensive for research teams in efficacy/effectiveness trials, and are often unattainable or unmanageable for the host organization to rate when the program is implemented on a large scale. We present a first step towards automated processing of linguistic patterns in fidelity monitoring of a behavioral intervention using an innovative mixed methods approach to fidelity assessment that uses rule-based, computational linguistics to overcome major resource burdens. Data come from an effectiveness trial of the Familias Unidas intervention, an evidence-based, family-centered preventive intervention found to be efficacious in reducing conduct problems, substance use and HIV sexual risk behaviors among Hispanic youth. This computational approach focuses on "joining," which measures the quality of the working alliance of the facilitator with the family. Quantitative assessments of reliability are provided. Kappa scores between a human rater and a machine rater for the new method for measuring joining reached 0.83. Early findings suggest that this approach can reduce the high cost of fidelity measurement and the time delay between fidelity assessment and feedback to facilitators; it also has the potential for improving the quality of intervention fidelity ratings.
Gallo, Carlos; Pantin, Hilda; Villamar, Juan; Prado, Guillermo; Tapia, Maria; Ogihara, Mitsunori; Cruden, Gracelyn; Brown, C Hendricks
2014-01-01
Careful fidelity monitoring and feedback are critical to implementing effective interventions. A wide range of procedures exist to assess fidelity; most are derived from observational assessments (Schoenwald et al, 2013). However, these fidelity measures are resource intensive for research teams in efficacy/effectiveness trials, and are often unattainable or unmanageable for the host organization to rate when the program is implemented on a large scale. We present a first step towards automated processing of linguistic patterns in fidelity monitoring of a behavioral intervention using an innovative mixed methods approach to fidelity assessment that uses rule-based, computational linguistics to overcome major resource burdens. Data come from an effectiveness trial of the Familias Unidas intervention, an evidence-based, family-centered preventive intervention found to be efficacious in reducing conduct problems, substance use and HIV sexual risk behaviors among Hispanic youth. This computational approach focuses on “joining,” which measures the quality of the working alliance of the facilitator with the family. Quantitative assessments of reliability are provided. Kappa scores between a human rater and a machine rater for the new method for measuring joining reached .83. Early findings suggest that this approach can reduce the high cost of fidelity measurement and the time delay between fidelity assessment and feedback to facilitators; it also has the potential for improving the quality of intervention fidelity ratings. PMID:24500022
Virtual estimates of fastening strength for pedicle screw implantation procedures
NASA Astrophysics Data System (ADS)
Linte, Cristian A.; Camp, Jon J.; Augustine, Kurt E.; Huddleston, Paul M.; Robb, Richard A.; Holmes, David R.
2014-03-01
Traditional 2D images provide limited use for accurate planning of spine interventions, mainly due to the complex 3D anatomy of the spine and close proximity of nerve bundles and vascular structures that must be avoided during the procedure. Our previously developed clinician-friendly platform for spine surgery planning takes advantage of 3D pre-operative images, to enable oblique reformatting and 3D rendering of individual or multiple vertebrae, interactive templating, and placement of virtual pedicle implants. Here we extend the capabilities of the planning platform and demonstrate how the virtual templating approach not only assists with the selection of the optimal implant size and trajectory, but can also be augmented to provide surrogate estimates of the fastening strength of the implanted pedicle screws based on implant dimension and bone mineral density of the displaced bone substrate. According to the failure theories, each screw withstands a maximum holding power that is directly proportional to the screw diameter (D), the length of the in-bone segm,ent of the screw (L), and the density (i.e., bone mineral density) of the pedicle body. In this application, voxel intensity is used as a surrogate measure of the bone mineral density (BMD) of the pedicle body segment displaced by the screw. We conducted an initial assessment of the developed platform using retrospective pre- and post-operative clinical 3D CT data from four patients who underwent spine surgery, consisting of a total of 26 pedicle screws implanted in the lumbar spine. The Fastening Strength of the planned implants was directly assessed by estimating the intensity - area product across the pedicle volume displaced by the virtually implanted screw. For post-operative assessment, each vertebra was registered to its homologous counterpart in the pre-operative image using an intensity-based rigid registration followed by manual adjustment. Following registration, the Fastening Strength was computed for each displaced bone segment. According to our preliminary clinical study, a comparison between Fastening Strength, displaced bone volume and mean voxel intensity showed similar results (p < 0.1) between the virtually templated plans and the post-operative outcome following the traditional clinical approach. This study has demonstrated the feasibility of the platform in providing estimates the pedicle screw fastening strength via virtual implantation, given the intrinsic vertebral geometry and bone mineral density, enabling the selection of the optimal implant dimension adn trajectory for improved strength.
Cook, Timothy Wayne; Cavalini, Luciana Tricai
2016-01-01
Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for intensive-software telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard in Semantic Web technologies. PMID:26893947
LED-based endoscopic light source for spectral imaging
NASA Astrophysics Data System (ADS)
Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.
2016-03-01
Colorectal cancer is the United States 3rd leading cancer in death rates.1 The current screening for colorectal cancer is an endoscopic procedure using white light endoscopy (WLE). There are multiple new methods testing to replace WLE, for example narrow band imaging and autofluorescence imaging.2 However, these methods do not meet the need for a higher specificity or sensitivity. The goal for this project is to modify the presently used endoscope light source to house 16 narrow wavelength LEDs for spectral imaging in real time while increasing sensitivity and specificity. The process to do such was to take an Olympus CLK-4 light source, replace the light and electronics with 16 LEDs and new circuitry. This allows control of the power and intensity of the LEDs. This required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source and National Instruments hardware/software for computer control. The results were a successfully designed retrofit with all the new features. The LED testing resulted in the ability to control each wavelength's intensity. The measured intensity over the voltage range will provide the information needed to couple the camera for imaging. Overall the project was successful; the modifications to the light source added the controllable LEDs. This brings the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future goals will be to connect the camera and test the imaging process.
Race, gender, and information technology use: the new digital divide.
Jackson, Linda A; Zhao, Yong; Kolenic, Anthony; Fitzgerald, Hiram E; Harold, Rena; Von Eye, Alexander
2008-08-01
This research examined race and gender differences in the intensity and nature of IT use and whether IT use predicted academic performance. A sample of 515 children (172 African Americans and 343 Caucasian Americans), average age 12 years old, completed surveys as part of their participation in the Children and Technology Project. Findings indicated race and gender differences in the intensity of IT use; African American males were the least intense users of computers and the Internet, and African American females were the most intense users of the Internet. Males, regardless of race, were the most intense videogame players, and females, regardless of race, were the most intense cell phone users. IT use predicted children's academic performance. Length of time using computers and the Internet was a positive predictor of academic performance, whereas amount of time spent playing videogames was a negative predictor. Implications of the findings for bringing IT to African American males and bringing African American males to IT are discussed.
Operating Policies and Procedures of Computer Data-Base Systems.
ERIC Educational Resources Information Center
Anderson, David O.
Speaking on the operating policies and procedures of computer data bases containing information on students, the author divides his remarks into three parts: content decisions, data base security, and user access. He offers nine recommended practices that should increase the data base's usefulness to the user community: (1) the cost of developing…
A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription
ERIC Educational Resources Information Center
Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.
2012-01-01
The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…
Displaying Special Characters and Symbols in Computer-Controlled Reaction Time Experiments.
ERIC Educational Resources Information Center
Friel, Brian M.; Kennison, Shelia M.
A procedure for using MEL2 (Version 2.0 of Microcomputer Experimental Laboratory) and FontWINDOW to present special characters and symbols in computer-controlled reaction time experiments is described. The procedure permits more convenience and flexibility than in tachistocopic and projection techniques. FontWINDOW allows researchers to design…
NASA Astrophysics Data System (ADS)
Han, Taehee
A new technology to perform a minimally invasive cornea reshaping procedure has been developed. This can eliminate the incidence of the flap-related complications of the conventional eye refractive procedures by multiphoton processes using a very high-intensity (I ≥ 1013 W/cm 2), but low energy (Ep ˜ 100-200 microJ) femtosecond laser pulses. Due to much lower energy than that of the nanosecond laser pulses for the thermal photoablation, the multiphoton processes cause almost no collateral damage by heat and shock wave generation. In this method, a series of femtosecond laser pulses is used to create very narrow (< 30 microm) and sufficiently long (≥ 2.5 mm) micro-channels in the cornea. The micro-channels are oriented almost perpendicular to the eye's optical axis. Once the micro-channel reaches a desired length, another series of femtosecond pulses with higher intensity is efficiently delivered through the micro-channel to the endpoint where a certain amount of the stromal tissue is disintegrated by the multiphoton processes. The disintegrated fragments are ejected out of the cornea via the same micro-channel, allowing the corneal surface to collapse, and changing its refractive power. This new corneal reshaping method obviates any process of damaging the corneal surface layer, while retaining the advantages of the conventional refractive procedures such as Laser in situ keratomileusis (LASIK) and Photorefractive keratectomy (PRK). In order to demonstrate the flapless cornea reshaping procedure, we have conducted ex-vivo experiments on fresh porcine eyes. The reshaped corneas were evaluated by using optical coherence tomography (OCT). The test results have shown that this flapless intrastromal procedure can reshape the cornea as intended with almost no surface damage. We have also performed a series of experiments to demonstrate the multiphoton processes in the corneal tissue by very high-intensity femtosecond laser pulses. Through the optical emission spectroscopy, we investigated the spectral lines of calcium atom and ions from the femtosecond laser-induced plasma from the porcine corneal tissue. The experimental results have shown the intensity-dependence of ablation rate, which qualitatively verifies the characteristics of the multiphoton processes.
De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.
2015-01-01
There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185
De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A
2015-01-01
There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.
Determination of the Fracture Parameters in a Stiffened Composite Panel
NASA Technical Reports Server (NTRS)
Lin, Chung-Yi
2000-01-01
A modified J-integral, namely the equivalent domain integral, is derived for a three-dimensional anisotropic cracked solid to evaluate the stress intensity factor along the crack front using the finite element method. Based on the equivalent domain integral method with auxiliary fields, an interaction integral is also derived to extract the second fracture parameter, the T-stress, from the finite element results. The auxiliary fields are the two-dimensional plane strain solutions of monoclinic materials with the plane of symmetry at x(sub 3) = 0 under point loads applied at the crack tip. These solutions are expressed in a compact form based on the Stroh formalism. Both integrals can be implemented into a single numerical procedure to determine the distributions of stress intensity factor and T-stress components, T11, T13, and thus T33, along a three-dimensional crack front. The effects of plate thickness and crack length on the variation of the stress intensity factor and T-stresses through the thickness are investigated in detail for through-thickness center-cracked plates (isotropic and orthotropic) and orthotropic stiffened panels under pure mode-I loading conditions. For all the cases studied, T11 remains negative. For plates with the same dimensions, a larger size of crack yields larger magnitude of the normalized stress intensity factor and normalized T-stresses. The results in orthotropic stiffened panels exhibit an opposite trend in general. As expected, for the thicker panels, the fracture parameters evaluated through the thickness, except the region near the free surfaces, approach two-dimensional plane strain solutions. In summary, the numerical methods presented in this research demonstrate their high computational effectiveness and good numerical accuracy in extracting these fracture parameters from the finite element results in three-dimensional cracked solids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μ Jy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in themore » reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.« less
NASA Technical Reports Server (NTRS)
Dorband, John E.
1987-01-01
Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.
Deformable registration of CT and cone-beam CT with local intensity matching.
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-07
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
Deformable registration of CT and cone-beam CT with local intensity matching
NASA Astrophysics Data System (ADS)
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-01
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
Research in mathematical theory of computation. [computer programming applications
NASA Technical Reports Server (NTRS)
Mccarthy, J.
1973-01-01
Research progress in the following areas is reviewed: (1) new version of computer program LCF (logic for computable functions) including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and in first order logic; (3) discussion of LISP semantics in LCF and attempt to prove the correctness of the London compilers in a formal way; (4) design of both special purpose and domain independent proving procedures specifically program correctness in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of ideas in the first order checker.
ERIC Educational Resources Information Center
Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.
2016-01-01
Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…
Discrete crack growth analysis methodology for through cracks in pressurized fuselage structures
NASA Technical Reports Server (NTRS)
Potyondy, David O.; Wawrzynek, Paul A.; Ingraffea, Anthony R.
1994-01-01
A methodology for simulating the growth of long through cracks in the skin of pressurized aircraft fuselage structures is described. Crack trajectories are allowed to be arbitrary and are computed as part of the simulation. The interaction between the mechanical loads acting on the superstructure and the local structural response near the crack tips is accounted for by employing a hierarchical modeling strategy. The structural response for each cracked configuration is obtained using a geometrically nonlinear shell finite element analysis procedure. Four stress intensity factors, two for membrane behavior and two for bending using Kirchhoff plate theory, are computed using an extension of the modified crack closure integral method. Crack trajectories are determined by applying the maximum tangential stress criterion. Crack growth results in localized mesh deletion, and the deletion regions are remeshed automatically using a newly developed all-quadrilateral meshing algorithm. The effectiveness of the methodology and its applicability to performing practical analyses of realistic structures is demonstrated by simulating curvilinear crack growth in a fuselage panel that is representative of a typical narrow-body aircraft. The predicted crack trajectory and fatigue life compare well with measurements of these same quantities from a full-scale pressurized panel test.
Current Status on Radiation Modeling for the Hayabusa Re-entry
NASA Technical Reports Server (NTRS)
Winter, Michael W.; McDaniel, Ryan D.; Chen, Yih-Kang; Liu, Yen; Saunders, David
2011-01-01
On June 13, 2010 the Japanese Hayabusa capsule performed its reentry into the Earths atmosphere over Australia after a seven year journey to the asteroid Itokawa. The reentry was studied by numerous imaging and spectroscopic instruments onboard NASA's DC-8 Airborne Laboratory and from three sites on the ground, in order to measure surface and plasma radiation generated by the Hayabusa Sample Return Capsule (SRC). Post flight, the flow solutions were recomputed to include the whole flow field around the capsule at 11 points along the reentry trajectory using updated trajectory information. Again, material response was taken into account to obtain most reliable surface temperature information. These data will be used to compute thermal radiation of the glowing heat shield and plasma radiation by the shock/post-shock layer system to support analysis of the experimental observation data. For this purpose, lines of sight data are being extracted from the flow field volume grids and plasma radiation will be computed using NEQAIR [4] which is a line-by-line spectroscopic code with one-dimensional transport of radiation intensity. The procedures being used were already successfully applied to the analysis of the observation of the Stardust reentry [5].
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.
Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope--a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array
Phillips, Zachary F.; D'Ambrosio, Michael V.; Tian, Lei; Rulison, Jared J.; Patel, Hurshal S.; Sadras, Nitin; Gande, Aditya V.; Switz, Neil A.; Fletcher, Daniel A.; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope—a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities. PMID:25969980
Challenges to the development of complex virtual reality surgical simulations.
Seymour, N E; Røtnes, J S
2006-11-01
Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.
Potentials of Mean Force With Ab Initio Mixed Hamiltonian Models of Solvation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupuis, Michel; Schenter, Gregory K.; Garrett, Bruce C.
2003-08-01
We give an account of a computationally tractable and efficient procedure for the calculation of potentials of mean force using mixed Hamiltonian models of electronic structure where quantum subsystems are described with computationally intensive ab initio wavefunctions. The mixed Hamiltonian is mapped into an all-classical Hamiltonian that is amenable to a thermodynamic perturbation treatment for the calculation of free energies. A small number of statistically uncorrelated (solute-solvent) configurations are selected from the Monte Carlo random walk generated with the all-classical Hamiltonian approximation. Those are used in the averaging of the free energy using the mixed quantum/classical Hamiltonian. The methodology ismore » illustrated for the micro-solvated SN2 substitution reaction of methyl chloride by hydroxide. We also compare the potential of mean force calculated with the above protocol with an approximate formalism, one in which the potential of mean force calculated with the all-classical Hamiltonian is simply added to the energy of the isolated (non-solvated) solute along the reaction path. Interestingly the latter approach is found to be in semi-quantitative agreement with the full mixed Hamiltonian approximation.« less
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. A fully nonlinear continuum approach capable of accounting for both finite rotations and large deformations has been used to model a flexible beam component. The beam kinematics are referred directly to an inertial reference frame such that the degrees of freedom embody both the rigid and flexible deformation motions. As such, the beam inertia expression is identical to that of rigid body dynamics. The nonlinear coupling between gross body motion and elastic deformation is contained in the internal force expression. Numerical solution procedures for the integration of spatial kinematic systems can be directily applied to the generalized coordinates of both the rigid and flexible components. An accurate computation of the internal force term which is invariant to rigid motions is incorporated into the general solution procedure.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1989-01-01
A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure which result in high degree of concurrency throughout the solution process are: (1) mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) two level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.
NASA Technical Reports Server (NTRS)
Kvaternik, Raymond G.; Silva, Walter A.
2008-01-01
A computational procedure for identifying the state-space matrices corresponding to discrete bilinear representations of nonlinear systems is presented. A key feature of the method is the use of first- and second-order Volterra kernels (first- and second-order pulse responses) to characterize the system. The present method is based on an extension of a continuous-time bilinear system identification procedure given in a 1971 paper by Bruni, di Pillo, and Koch. The analytical and computational considerations that underlie the original procedure and its extension to the title problem are presented and described, pertinent numerical considerations associated with the process are discussed, and results obtained from the application of the method to a variety of nonlinear problems from the literature are presented. The results of these exploratory numerical studies are decidedly promising and provide sufficient credibility for further examination of the applicability of the method.
Puntillo, Kathleen; Arai, Shoshana R; Cooper, Bruce A; Stotts, Nancy A; Nelson, Judith E
2014-09-01
To test an intervention bundle for thirst intensity, thirst distress, and dry mouth, which are among the most pervasive, intense, distressful, unrecognized, and undertreated symptoms in ICU patients, but for which data-based interventions are lacking. This was a single-blinded randomized clinical trial in three ICUs in a tertiary medical center in urban California. A total of 252 cognitively intact patients reporting thirst intensity (TI) and/or thirst distress (TD) scores ≥3 on 0-10 numeric rating scales (NRS) were randomized to intervention or usual care groups. A research team nurse (RTN#1) obtained patients' pre-procedure TI and TD scores and reports of dry mouth. She then administered a thirst bundle to the intervention group: oral swab wipes, sterile ice-cold water sprays, and a lip moisturizer, or observed patients in the usual care group. RTN#2, blinded to group assignment, obtained post-procedure TI and TD scores. Up to six sessions per patient were conducted across 2 days. Multilevel linear regression determined that the average decreases in TI and TD scores from pre-procedure to post-procedure were significantly greater in the intervention group (2.3 and 1.8 NRS points, respectively) versus the usual care group (0.6 and 0.4 points, respectively) (p < 0.05). The usual care group was 1.9 times more likely than the intervention group to report dry mouth for each additional session on day 1. This simple, inexpensive thirst bundle significantly decreased ICU patients' thirst and dry mouth and can be considered a practice intervention for patients experiencing thirst.
Guedj, Romain; Danan, Claude; Daoud, Patrick; Zupan, Véronique; Renolleau, Sylvain; Zana, Elodie; Aizenfisz, Sophie; Lapillonne, Alexandre; de Saint Blanquat, Laure; Granier, Michèle; Durand, Philippe; Castela, Florence; Coursol, Anne; Hubert, Philippe; Cimerman, Patricia; Anand, K J S; Khoshnood, Babak; Carbajal, Ricardo
2014-01-01
Objective To determine whether analgesic use for painful procedures performed in neonates in the neonatal intensive care unit (NICU) differs during nights and days and during each of the 6 h period of the day. Design Conducted as part of the prospective observational Epidemiology of Painful Procedures in Neonates study which was designed to collect in real time and around-the-clock bedside data on all painful or stressful procedures. Setting 13 NICUs and paediatric intensive care units in the Paris Region, France. Participants All 430 neonates admitted to the participating units during a 6-week period between September 2005 and January 2006. Data collection During the first 14 days of admission, data were collected on all painful procedures and analgesic therapy. The five most frequent procedures representing 38 012 of all 42 413 (90%) painful procedures were analysed. Intervention Observational study. Main outcome assessment We compared the use of specific analgesic for procedures performed during each of the 6 h period of a day: morning (7:00 to 12:59), afternoon, early night and late night and during daytime (morning+afternoon) and night-time (early night+late night). Results 7724 of 38 012 (20.3%) painful procedures were carried out with a specific analgesic treatment. For morning, afternoon, early night and late night, respectively, the use of analgesic was 25.8%, 18.9%, 18.3% and 18%. The relative reduction of analgesia was 18.3%, p<0.01, between daytime and night-time and 28.8%, p<0.001, between morning and the rest of the day. Parental presence, nurses on 8 h shifts and written protocols for analgesia were associated with a decrease in this difference. Conclusions The substantial differences in the use of analgesics around-the-clock may be questioned on quality of care grounds. PMID:24556241
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Summary Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
FUNCTION GENERATOR FOR ANALOGUE COMPUTERS
Skramstad, H.K.; Wright, J.H.; Taback, L.
1961-12-12
An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)
Computer assisted surgery with 3D robot models and visualisation of the telesurgical action.
Rovetta, A
2000-01-01
This paper deals with the support of virtual reality computer action in the procedures of surgical robotics. Computer support gives a direct representation of the surgical theatre. The modelization of the procedure in course and in development gives a psychological reaction towards safety and reliability. Robots similar to the ones used by the manufacturing industry can be used with little modification as very effective surgical tools. They have high precision, repeatability and are versatile in integrating with the medical instrumentation. Now integrated surgical rooms, with computer and robot-assisted intervention, are operating. The computer is the element for a decision taking aid, and the robot works as a very effective tool.
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
Computer Security: The Human Element.
ERIC Educational Resources Information Center
Guynes, Carl S.; Vanacek, Michael T.
1981-01-01
The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)
Home - Virginia Department of Forensic Science
Procedure Manuals Training Manuals Digital & Multimedia Evidence Computer Analysis Video Analysis Procedure Manual Training Manual FAQ Updates Firearms & Toolmarks Procedure Manuals Training Manuals Forensic Biology Procedure Manuals Training Manuals Familial Searches Post-Conviction DNA Issues FAQ
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design/implement computer software for solving large-scale acoustic problems, arised from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should fully take advantages of multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective. the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper pre-conditioned strategies, unrolling strategies, and effective processors' communicating schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving series of structural, and acoustic (symmetrical and un-symmetrical) problems (in different computing platforms). Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
Finck, Marlène; Ponce, Frédérique; Guilbaud, Laurent; Chervier, Cindy; Floch, Franck; Cadoré, Jean-Luc; Chuzel, Thomas; Hugonnard, Marine
2015-02-01
There are no evidence-based guidelines as to whether computed tomography (CT) or endoscopy should be selected as the first-line procedure when a nasal tumor is suspected in a dog or a cat and only one examination can be performed. Computed tomography and rhinoscopic features of 17 dogs and 5 cats with a histopathologically or cytologically confirmed nasal tumor were retrospectively reviewed. The level of suspicion for nasal neoplasia after CT and/or rhinoscopy was compared to the definitive diagnosis. Twelve animals underwent CT, 14 underwent rhinoscopy, and 4 both examinations. Of the 12 CT examinations performed, 11 (92%) resulted in the conclusion that a nasal tumor was the most likely diagnosis compared with 9/14 (64%) for rhinoscopies. Computed tomography appeared to be more reliable than rhinoscopy for detecting nasal tumors and should therefore be considered as the first-line procedure.
Finck, Marlène; Ponce, Frédérique; Guilbaud, Laurent; Chervier, Cindy; Floch, Franck; Cadoré, Jean-Luc; Chuzel, Thomas; Hugonnard, Marine
2015-01-01
There are no evidence-based guidelines as to whether computed tomography (CT) or endoscopy should be selected as the first-line procedure when a nasal tumor is suspected in a dog or a cat and only one examination can be performed. Computed tomography and rhinoscopic features of 17 dogs and 5 cats with a histopathologically or cytologically confirmed nasal tumor were retrospectively reviewed. The level of suspicion for nasal neoplasia after CT and/or rhinoscopy was compared to the definitive diagnosis. Twelve animals underwent CT, 14 underwent rhinoscopy, and 4 both examinations. Of the 12 CT examinations performed, 11 (92%) resulted in the conclusion that a nasal tumor was the most likely diagnosis compared with 9/14 (64%) for rhinoscopies. Computed tomography appeared to be more reliable than rhinoscopy for detecting nasal tumors and should therefore be considered as the first-line procedure. PMID:25694669
From serological to computer cross-matching in nine hospitals.
Georgsen, J; Kristensen, T
1998-01-01
In 1991 it was decided to reorganise the transfusion service of the County of Funen. The aims were to standardise and improve the quality of blood components, laboratory procedures and the transfusion service and to reduce the number of outdated blood units. Part of the efficiency gains was reinvested in a dedicated computer system making it possible--among other things--to change the cross-match procedures from serological to computer cross-matching according to the ABCD-concept. This communication describes how this transition was performed in terms of laboratory techniques, education of personnel as well as implementation of the computer system and indicates the results obtained. The Funen Transfusion Service has by now performed more than 100.000 red cell transfusions based on ABCD-cross-matching and has not encountered any problems. Major results are the significant reductions of cross-match procedures, blood grouping as well as the number of outdated blood components.
Code of Federal Regulations, 2011 CFR
2011-07-01
... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...
Code of Federal Regulations, 2010 CFR
2010-07-01
... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...
Code of Federal Regulations, 2013 CFR
2013-07-01
... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...
Code of Federal Regulations, 2012 CFR
2012-07-01
... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...
Natural flow and water consumption in the Milk River basin, Montana and Alberta, Canada
Thompson, R.E.
1986-01-01
A study was conducted to determine the differences between natural and nonnatural Milk River streamflow, to delineate and quantify the types and effects of water consumption on streamflow, and to refine the current computation procedure into one which computes and apportions natural flow. Water consumption consists principally of irrigated agriculture, municipal use, and evapotranspiration. Mean daily water consumption by irrigation ranged from 10 cu ft/sec to 26 cu ft/sec in the Canada part and from 6 cu ft/sec to 41 cu ft/sec in the US part. Two Canadian municipalities consume about 320 acre-ft and one US municipality consumes about 20 acre-ft yearly. Evaporation from the water surface comprises 80% 0 90% of the flow reduction in the Milk River attributed to total evapotranspiration. The current water-budget approach for computing natural flow of the Milk River where it reenters the US was refined into an interim procedure which includes allowances for man-induced consumption and a method for apportioning computed natural flow between the US and Canada. The refined procedure is considered interim because further study of flow routing, tributary inflow, and man-induced consumption is needed before a more accurate procedure for computing natural flow can be developed. (Author 's abstract)
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjectives based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple objective functions problem into an unconstrained problem which then is solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced during the transformation process to each objective function. This enhanced procedure will provide the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational Fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field along with an appropriate sonic boom evaluation procedure thus introducing both aerodynamic performance as well as sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle[4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically historically based weight equations are used in a vehicle sizing program. Many of the studies in the vehicle analysis branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MER's) to assess design and technology impacts on vehicle performance are necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever increasing computational power and platform independent computer programming languages such as JAVA provide new means to create greater depth of analysis tools which can be included into the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy to program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state of the art computational power.
ERIC Educational Resources Information Center
King, Kenneth M.
1988-01-01
Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)
NASA Technical Reports Server (NTRS)
Guruswamy, Guru
2004-01-01
A procedure to accurately generate AIC using the Navier-Stokes solver including grid deformation is presented. Preliminary results show good comparisons between experiment and computed flutter boundaries for a rectangular wing. A full wing body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper the AIC of full wing body configuration will be computed. The scalability of the procedure on supercomputer will be demonstrated.
The anatomy of floating shock fitting. [shock waves computation for flow field
NASA Technical Reports Server (NTRS)
Salas, M. D.
1975-01-01
The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
Kesav, Praveen; Vrinda, S L; Sukumaran, Sajith; Sarma, P S; Sylaja, P N
2017-09-15
This study aimed to assess the feasibility of professional based conventional speech language therapy (SLT) either alone (Group A/less intensive) or assisted by novel computer based local language software (Group B/more intensive) for rehabilitation in early post stroke aphasia. Comprehensive Stroke Care Center of a tertiary health care institute situated in South India, with the study design being prospective open randomised controlled trial with blinded endpoint evaluation. This study recruited 24 right handed first ever acute ischemic stroke patients above 15years of age affecting middle cerebral artery territory within 90days of stroke onset with baseline Western Aphasia Battery (WAB) Aphasia Quotient (AQ) score of <93.8 between September 2013 and January 2016.The recruited subjects were block randomised into either Group A/less intensive or Group B/more intensive therapy arms, in order to receive 12 therapy sessions of conventional professional based SLT of 1h each in both groups, with an additional 12h of computer based language therapy in Group B over 4weeks on a thrice weekly basis, with a follow up WAB performed at four and twelve weeks after baseline assessment. The trial was registered with Clinical trials registry India [2016/08/0120121]. All the statistical analysis was carried out with IBM SPSS Statistics for Windows version 21. 20 subjects [14 (70%) Males; Mean age: 52.8years±SD12.04] completed the study (9 in the less intensive and 11 in the more intensive arm). The mean four weeks follow up AQ showed a significant improvement from the baseline in the total group (p value: 0.01). The rate of rise of AQ from the baseline to four weeks follow up (ΔAQ %) showed a significantly greater value for the less intensive treatment group as against the more intensive treatment group [155% (SD: 150; 95% CI: 34-275) versus 52% (SD: 42%; 95% CI: 24-80) respectively: p value: 0.053]. Even though the more intensive treatment arm incorporating combined professional based SLT and computer software based training fared poorer than the less intensive therapy group, this study nevertheless reinforces the feasibility of SLT in augmenting recovery of early post stroke aphasia. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Li-Juan; Wan, Wenchao; Karton, Amir
2016-11-01
We evaluate the performance of standard and modified MPn procedures for a wide set of thermochemical and kinetic properties, including atomization energies, structural isomerization energies, conformational energies, and reaction barrier heights. The reference data are obtained at the CCSD(T)/CBS level by means of the Wn thermochemical protocols. We find that none of the MPn-based procedures show acceptable performance for the challenging W4-11 and BH76 databases. For the other thermochemical/kinetic databases, the MP2.5 and MP3.5 procedures provide the most attractive accuracy-to-computational cost ratios. The MP2.5 procedure results in a weighted-total-root-mean-square deviation (WTRMSD) of 3.4 kJ/mol, whilst the computationally more expensive MP3.5 procedure results in a WTRMSD of 1.9 kJ/mol (the same WTRMSD obtained for the CCSD(T) method in conjunction with a triple-zeta basis set). We also assess the performance of the computationally economical CCSD(T)/CBS(MP2) method, which provides the best overall performance for all the considered databases, including W4-11 and BH76.
NASA Technical Reports Server (NTRS)
Dahlback, Arne; Stamnes, Knut
1991-01-01
Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution based on a perturbation technique combined with the discrete ordinate method is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overstimates the mean intensity.
A Writing-Intensive Program for Teaching Retail Management.
ERIC Educational Resources Information Center
Darian, Jean C.; And Others
1992-01-01
Presents the writing-intensive design for a retailing management course developed by its instructor in accordance with writing-across-the-curriculum principles. Provides an overview of the semester-long project. Details project procedures for preparatory activities, field research, and writing the marketing plan. (SR)
Impedance computations and beam-based measurements: A problem of discrepancy
Smaluk, Victor
2018-04-21
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictionsmore » based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.« less
Impedance computations and beam-based measurements: A problem of discrepancy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smaluk, Victor
High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictionsmore » based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.« less
The Montage architecture for grid-enabled science processing of large, distributed datasets
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S .; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui
2004-01-01
Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available Computational resources. Therefore, state of the art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.
HyperForest: A high performance multi-processor architecture for real-time intelligent systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.
1997-04-01
Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expendability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doorsmore » for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE`s own production and disassembly activities.« less
Pazhur, R J; Kutter, B; Georgieff, M; Schraag, S
2003-06-01
Portable digital assistants (PDAs) may be of value to the anaesthesiologist as development in medical care is moving towards "bedside computing". Many different portable computers are currently available and it is now possible for the physician to carry a mobile computer with him all the time. It is data base, reference book, patient tracking help, date planner, computer, book, magazine, calculator and much more in one mobile device. With the help of a PDA, information that is required for our work may be available at all times and everywhere at the point of care within seconds. In this overview the possibilities for the use of PDAs in anaesthesia and intensive care medicine are discussed. Developments in other countries, possibilities in use but also problems such as data security and network technology are evaluated.