Sample records for length reducing computing

  1. Minimized state complexity of quantum-encoded cryptic processes

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.

    2016-05-01

    The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.
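As an illustrative aside, the classical statistical complexity that the quantum-reduced measure is compared against is the Shannon entropy of the stationary distribution over causal states. A minimal sketch, assuming a hypothetical two-state machine (an Even-Process-like transition structure; not taken from the paper):

```python
import numpy as np

def statistical_complexity(T):
    """Shannon entropy (bits) of the stationary distribution over causal
    states, given a state-to-state transition matrix T (rows sum to 1)."""
    vals, vecs = np.linalg.eig(T.T)            # left eigenvectors of T
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()                         # normalize to a distribution
    pi = pi[pi > 0]                            # drop zero-probability states
    return float(-np.sum(pi * np.log2(pi)))

# hypothetical two-state machine: stationary distribution is (2/3, 1/3)
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])
print(statistical_complexity(T))               # ~0.918 bits
```

The quantum state complexity discussed in the abstract is an analogous entropy evaluated over the (generally non-orthogonal) quantum codewords, which is what the spectral decomposition makes tractable.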

  2. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary-path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of high-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms are associated with disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
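For orientation, the time-domain FXLMS baseline that the frequency-domain algorithms accelerate can be sketched in a few lines. All paths and parameters below are hypothetical toy values (not from the paper), and the secondary-path estimate is assumed exact:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup (hypothetical): primary path p, secondary path s, exact estimate
p = np.array([0.0, 0.8, 0.3])
s = np.array([0.5, 0.2])
s_hat = s.copy()

L, mu, N = 8, 0.05, 4000
x = rng.standard_normal(N)            # reference noise signal
d = np.convolve(x, p)[:N]             # primary noise at the error microphone
xf = np.convolve(x, s_hat)[:N]        # filtered reference x'(n)

w = np.zeros(L)                       # adaptive control filter
xbuf = np.zeros(L)                    # reference tap buffer
ybuf = np.zeros(len(s))               # control-output taps for secondary path
fxbuf = np.zeros(L)                   # filtered-reference tap buffer
err = np.zeros(N)

for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    y = w @ xbuf                      # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[n] = d[n] + s @ ybuf          # residual at the error microphone
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = xf[n]
    w -= mu * err[n] * fxbuf          # FXLMS weight update

print(np.mean(d**2), np.mean(err[-500:]**2))   # residual power drops sharply
```

The per-sample cost of this loop grows with the filter length L, which is exactly what motivates the partitioned frequency-domain variants at high sampling rates.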

  3. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative maneuvers than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quantity of the estimates.

  4. Failure analysis of fuel cell electrodes using three-dimensional multi-length scale X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.

    2016-10-01

    X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
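Once an XCT volume is segmented into phases, metrics such as the porosity quoted in the abstract reduce to voxel counting. A minimal sketch with a hypothetical segmented volume (not the paper's data):

```python
import numpy as np

def porosity(volume, pore_value=0):
    """Pore volume fraction of a segmented 3-D XCT volume, where voxels
    equal to `pore_value` are pore space."""
    return float(np.mean(volume == pore_value))

# hypothetical segmented 3x3x3 volume: one z-slice (9 of 27 voxels) is pore
vol = np.ones((3, 3, 3), dtype=np.uint8)
vol[0] = 0
print(porosity(vol))    # -> 0.333...
```

The multi-phase segmentation described in the abstract (pore, ionomer, Pt/C) is the same idea with more labels, with volume fractions computed per label.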

  5. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround (clock) time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
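The strategy of farming independent analysis portions out to multiple processors can be mimicked on a single machine with Python's standard process pool. The workload function below is a hypothetical stand-in for one decomposed analysis case:

```python
from multiprocessing import Pool
import math
import time

def analysis(case):
    """Stand-in for one independent analysis run (hypothetical workload)."""
    return sum(math.sin(i) for i in range(case * 10_000))

if __name__ == "__main__":
    cases = list(range(1, 9))

    t0 = time.perf_counter()
    serial = [analysis(c) for c in cases]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool() as pool:                  # distribute cases across workers
        parallel = pool.map(analysis, cases)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel             # identical results, less wall time
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s")
```

As the abstract notes, the approach pays off only when the problem decomposes into portions with limited data flow between them; the pool here corresponds to the network of smaller computers in the paper.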

  6. High-precision laser distance measurement in support of lunar laser ranging at Haleakala, Maui, 1976-1977

    NASA Technical Reports Server (NTRS)

    Berg, E.; Carter, J. A.; Harris, D.; Laurila, S. H.; Schenck, B. E.; Sutton, G. H.; Wolfe, J. E.; Cushman, S. E.

    1978-01-01

    The Hawaii Institute of Geophysics has implemented a comprehensive geodetic-geophysical support program to monitor local and regional crustal deformation on the island of Maui. Presented are the actual laser-measured line lengths and new coordinate computations of the line terminals, and the internal consistency of the measured line lengths is discussed. Several spatial chord lengths have been reduced to a Mercator plane, and conditioned adjustments on that plane have been made.

  7. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much larger code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. These techniques are therefore limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
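The sidelobe property the abstract relies on is easy to check numerically: the peak sidelobe is the largest off-peak magnitude of the aperiodic autocorrelation. A sketch using the standard length-13 Barker code:

```python
import numpy as np

def peak_sidelobe(code):
    """Maximum |aperiodic autocorrelation| away from the main lobe."""
    acf = np.correlate(code, code, mode="full")
    mid = len(acf) // 2                 # zero-lag (main lobe) position
    return int(np.max(np.abs(np.delete(acf, mid))))

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
print(peak_sidelobe(barker13))          # -> 1 (Barker property: sidelobes <= 1)
```

A search method such as the chirp-phase perturbation approach described above would evaluate this same metric over candidate codes, keeping those whose peak sidelobe stays low as the code length grows past 500.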

  8. Aligner optimization increases accuracy and decreases compute times in multi-species sequence data.

    PubMed

    Robinson, Kelly M; Hawkins, Aziah S; Santana-Cruz, Ivette; Adkins, Ricky S; Shetty, Amol C; Nagaraj, Sushma; Sadzewicz, Lisa; Tallon, Luke J; Rasko, David A; Fraser, Claire M; Mahurkar, Anup; Silva, Joana C; Dunning Hotopp, Julie C

    2017-09-01

    As sequencing technologies have evolved, the tools to analyze these sequences have made similar advances. However, for multi-species samples, we observed important and adverse differences in alignment specificity and computation time for bwa-mem (Burrows-Wheeler aligner-maximum exact matches) relative to bwa-aln. Therefore, we sought to optimize bwa-mem for alignment of data from multi-species samples in order to reduce alignment time and increase the specificity of alignments. In the multi-species cases examined, there was one majority member (i.e. Plasmodium falciparum or Brugia malayi) and one minority member (i.e. human or the Wolbachia endosymbiont wBm) of the sequence data. Increasing the bwa-mem seed length from the default value reduced the number of read pairs from the majority sequence member that incorrectly aligned to the reference genome of the minority sequence member. Combining both source genomes into a single reference genome increased the specificity of mapping, while also reducing the central processing unit (CPU) time. In Plasmodium, at a seed length of 18 nt, 24.1% of reads mapped to the human genome using 1.7±0.1 CPU hours, while 83.6% of reads mapped to the Plasmodium genome using 0.2±0.0 CPU hours (total: 107.7% of reads mapping in 1.9±0.1 CPU hours). In contrast, 97.1% of the reads mapped to a combined Plasmodium-human reference in only 0.7±0.0 CPU hours. Overall, the results suggest that combining all references into a single reference database and using a 23 nt seed length reduces the computational time while maximizing specificity. Similar results were found for simulated sequence reads from a mock metagenomic data set. We found similar improvements to computation time in a publicly available human-only data set.

  9. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  10. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  11. Lateral mode coupling to reduce the electrical impedance of small elements required for high power ultrasound therapy phased arrays.

    PubMed

    Hynynen, Kullervo; Yin, Jianhua

    2009-03-01

    A method that uses lateral coupling to reduce the electrical impedance of small transducer elements in generating ultrasound waves was tested. Cylindrical, radially poled transducer elements were driven at their length resonance frequency. Computer simulation and experimental studies showed that the electrical impedance of the transducer element could be controlled by the cylinder wall thickness, while the operation frequency was determined by the cylinder length. An acoustic intensity (averaged over the cylinder diameter) of over 10 W/cm^2 (a therapeutically relevant intensity) was measured from these elements.

  12. Ultra high speed image processing techniques. [electronic packaging techniques

    NASA Technical Reports Server (NTRS)

    Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.

    1981-01-01

    Packaging techniques for ultra high speed image processing were developed. These techniques involve a signal feedthrough technique through LSI/VLSI sapphire substrates, which allows the stacking of LSI/VLSI circuit substrates in a three-dimensional package with greatly reduced lengths of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.

  13. LORAN-C LATITUDE-LONGITUDE CONVERSION AT SEA: PROGRAMMING CONSIDERATIONS.

    USGS Publications Warehouse

    McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.

    1985-01-01

    Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to help with new computer program checkout.
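The precision question the abstract raises can be illustrated by evaluating a great-circle (haversine) arc length in single versus double precision; the coordinates below are hypothetical offshore-New-England points, not values from the report:

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2, dtype=np.float64, R=6371.0):
    """Great-circle arc length (km) evaluated at a chosen float precision."""
    lat1, lon1, lat2, lon2 = (dtype(np.radians(v))
                              for v in (lat1, lon1, lat2, lon2))
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = (np.sin(dlat / dtype(2))**2
         + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / dtype(2))**2)
    return dtype(2) * dtype(R) * np.arcsin(np.sqrt(a))

# hypothetical nearby points: cancellation in lat2-lat1 dominates the error
args = (41.5, -70.7, 41.6, -70.5)
d64 = haversine(*args, dtype=np.float64)
d32 = haversine(*args, dtype=np.float32)
print(float(d64), abs(float(d64) - float(d32)))   # ~20 km, metre-level gap
```

For short baselines the subtraction of nearly equal latitudes loses leading digits, which is why arc-length routines degrade noticeably as machine precision is reduced.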

  14. Computational Fluid Dynamics Based Extraction of Heat Transfer Coefficient in Cryogenic Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    The current reduced-order thermal model for cryogenic propellant tanks is based on correlations built for flat plates collected in the 1950s. The use of these correlations suffers from inaccurate geometry representation, inaccurate gravity orientation, ambiguous length scale, and lack of detailed validation. The work presented under this task uses the first-principles based Computational Fluid Dynamics (CFD) technique to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support the reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data has been found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and cryogenic propellant, and that between the tank wall and ullage gas, were then simulated. The results showed that commonly used heat transfer correlations for either vertical or horizontal plates overpredict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can correlate all heat transfer coefficients for different fill levels into a single curve. This curve can be used for the reduced-order heat transfer model analysis.

  15. Efficient algorithms for single-axis attitude estimation

    NASA Technical Reports Server (NTRS)

    Shuster, M. D.

    1981-01-01

    The computationally efficient algorithms determine attitude from the measurement of arc lengths and dihedral angles. The dependence of these algorithms on the solution of trigonometric equations was reduced. Both single-time and batch estimators are presented along with the covariance analysis of each algorithm.

  16. Non-linear wave phenomena in Josephson elements for superconducting electronics

    NASA Astrophysics Data System (ADS)

    Christiansen, P. L.; Parmentier, R. D.; Skovgaard, O.

    1985-07-01

    The long and intermediate length Josephson tunnel junction oscillator with overlap geometry, in linear and circular configurations, is investigated by computational solution of the perturbed sine-Gordon equation model and by experimental measurements. The model predicts the experimental results very well. Line oscillators as well as ring oscillators are treated. For long junctions, soliton perturbation methods are developed and turn out to be efficient prediction tools, also providing physical understanding of the dynamics of the oscillator. For intermediate length junctions, expansions in terms of linear cavity modes reduce computational costs. The narrow linewidth of the electromagnetic radiation (typically 1 kHz for a line at 10 GHz) is demonstrated experimentally. Corresponding computer simulations requiring a relative accuracy of less than 10^-7 are performed on the CRAY-1-S supercomputer. The broadening of linewidth due to external microwave radiation and internal thermal noise is determined.

  17. Analysis and optimization of RC delay in vertical nanoplate FET

    NASA Astrophysics Data System (ADS)

    Woo, Changbeom; Ko, Kyul; Kim, Jongsu; Kim, Minsoo; Kang, Myounggon; Shin, Hyungcheol

    2017-10-01

    In this paper, we have analyzed short channel effects (SCEs) and RC delay in a vertical nanoplate FET (VNFET) using 3-D technology computer-aided design (TCAD) simulation. The device is based on International Technology Roadmap for Semiconductors (ITRS) 2013 recommendations, and it initially has a gate length (LG) of 12.2 nm, channel thickness (Tch) of 4 nm, and spacer length (LSD) of 6 nm. To obtain improved performance by reducing RC delay, each dimension is adjusted (LG = 12.2 nm, Tch = 6 nm, LSD = 11.9 nm). With these dimensions the device exhibits Ion/Ioff = 1.64 × 10^5, subthreshold swing (S.S.) = 73 mV/dec, drain-induced barrier lowering (DIBL) = 60 mV/V, and RC delay = 0.214 ps. Furthermore, with a long shallow trench isolation (STI) length and a thick insulator thickness (Ti), we can reduce the RC delay from 0.214 ps to 0.163 ps, about a 23.8% reduction. Without decreasing drain current, the RC delay is reduced as the outer fringing capacitance (Cof) is reduced. Finally, when the source and drain spacer lengths are set to different values, we have verified that the RC delay is optimal.

  18. 78 FR 40823 - Reports, Forms, and Record Keeping Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... at time of approval. Title: National Survey of Principal Drivers of Vehicles with a Rear Seat Belt... from both groups and information on their passengers seat belt usage habits, as well as the... use computer-assisted telephone interviewing to reduce interview length and minimize recording errors...

  19. Improving computer security for authentication of users: influence of proactive password restrictions.

    PubMed

    Proctor, Robert W; Lien, Mei-Ching; Vu, Kim-Phuong L; Schultz, E Eugene; Salvendy, Gavriel

    2002-05-01

    Entering a username-password combination is a widely used procedure for identification and authentication in computer systems. However, it is a notoriously weak method, in that the passwords adopted by many users are easy to crack. In an attempt to improve security, proactive password checking may be used, in which passwords must meet several criteria to be more resistant to cracking. In two experiments, we examined the influence of proactive password restrictions on the time that it took to generate an acceptable password and to use it subsequently to log in. The required length was a minimum of five characters in Experiment 1 and eight characters in Experiment 2. In both experiments, one condition had only the length restriction, and the other had additional restrictions. The additional restrictions greatly increased the time it took to generate the password but had only a small effect on the time it took to use it subsequently to log in. For the five-character passwords, 75% were cracked when no other restrictions were imposed, and this was reduced to 33% with the additional restrictions. For the eight-character passwords, 17% were cracked with no other restrictions, and 12.5% with restrictions. The results indicate that increasing the minimum character length reduces crackability and increases security, regardless of whether additional restrictions are imposed.
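The length effect reported above has a simple combinatorial basis: each extra character multiplies the brute-force search space by the alphabet size. A sketch, assuming a hypothetical 62-symbol alphabet (upper- and lowercase letters plus digits):

```python
def search_space(length, charset_size=62):
    """Number of candidate passwords a brute-force attacker must cover
    for a given minimum length and alphabet size."""
    return charset_size ** length

five = search_space(5)     # 62**5  ~ 9.2e8 candidates
eight = search_space(8)    # 62**8  ~ 2.2e14 candidates
print(eight // five)       # -> 238328 (= 62**3): three extra characters
```

This is consistent with the experimental finding that raising the minimum length from five to eight characters reduced crack rates far more reliably than composition rules alone.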

  20. Computational and experimental study of airflow around a fan powered UVGI lamp

    NASA Astrophysics Data System (ADS)

    Kaligotla, Srikar; Tavakoli, Behtash; Glauser, Mark; Ahmadi, Goodarz

    2011-11-01

    The quality of the indoor air environment is very important for improving the health of occupants and reducing personal exposure to hazardous pollutants. An effective way of controlling air quality is by eliminating airborne bacteria and viruses or by reducing their emissions. Ultraviolet Germicidal Irradiation (UVGI) lamps can effectively reduce these bio-contaminants in an indoor environment, but the efficiency of these systems depends on airflow in and around the device. UVGI lamps would not be as effective in stagnant environments as they would be when moving air brings the bio-contaminants into their irradiation region. Introducing a fan into the UVGI system would augment the efficiency of the system's kill rate. Airflows in ventilated spaces are quite complex due to the vast range of length and velocity scales. The purpose of this research is to study these complex airflows using CFD techniques and to validate the computational model against airflow measurements around the device obtained with Particle Image Velocimetry. The experimental results, including mean velocities, length scales, and RMS values of fluctuating velocities, are used in the CFD validation. Comparisons of these data at different locations around the device with the CFD model predictions were performed, and good agreement was observed.

  1. 5 CFR 838.441 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.441... Affecting Refunds of Employee Contributions Procedures for Computing the Amount Payable § 838.441 Computing lengths of service. (a) The smallest unit of time that OPM will calculate in computing a formula in a...

  2. 5 CFR 838.242 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.242... Affecting Employee Annuities Procedures for Computing the Amount Payable § 838.242 Computing lengths of service. (a)(1) The smallest unit of time that OPM will calculate in computing a formula in a court order...

  3. Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions

    NASA Technical Reports Server (NTRS)

    Lewis, J. P.; Pletcher, R. H.

    1986-01-01

    Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. By using the boundary-layer equations to model separated flow in place of the Navier-Stokes equations, computational effort was reduced, permitting turbulence modelling studies to be carried out economically. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20, and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's mixing-length model and the maximum Reynolds shear stress were compared.
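The mixing-length closure named in the abstract is compact enough to sketch directly: the eddy viscosity is nu_t = l^2 |du/dy| with mixing length l = kappa*y near the wall. The shear profile below is a hypothetical log-law-consistent example, not data from the paper:

```python
import numpy as np

def mixing_length_viscosity(y, dudy, kappa=0.41):
    """Eddy viscosity from Prandtl's mixing-length model:
    nu_t = l**2 * |du/dy| with mixing length l = kappa * y."""
    l = kappa * np.asarray(y)
    return l**2 * np.abs(dudy)

# hypothetical near-wall shear consistent with a log-law profile:
# du/dy = u_tau / (kappa * y), so nu_t should reduce to kappa * u_tau * y
u_tau, kappa = 0.05, 0.41
y = np.array([1e-3, 1e-2, 1e-1])        # wall distances (m)
dudy = u_tau / (kappa * y)
nu_t = mixing_length_viscosity(y, dudy, kappa)
print(nu_t)                             # grows linearly with wall distance
```

Algebraic models of this form are what make the boundary-layer approach cheap enough for the turbulence-model comparisons the abstract describes.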

  4. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
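For scale, the complete-enumeration baseline that conditional factoring improves upon can be written directly for a toy network. The arcs, paths, and length distributions below are hypothetical:

```python
from itertools import product
from collections import defaultdict

# tiny directed network: s->a->t and s->t; each arc length is a discrete
# random variable given as {length: probability} (hypothetical numbers)
arcs = {
    ("s", "a"): {1: 0.5, 2: 0.5},
    ("a", "t"): {1: 0.7, 3: 0.3},
    ("s", "t"): {3: 0.6, 4: 0.4},
}
paths = [[("s", "a"), ("a", "t")], [("s", "t")]]   # all s-t paths

def shortest_path_distribution(arcs, paths):
    """Exact distribution of the shortest s-t path length by complete
    enumeration of joint arc states -- the baseline that conditional
    factoring is designed to beat on larger networks."""
    names = list(arcs)
    dist = defaultdict(float)
    for combo in product(*(arcs[a].items() for a in names)):
        length = dict(zip(names, (v for v, _ in combo)))
        prob = 1.0
        for _, q in combo:
            prob *= q
        sp = min(sum(length[a] for a in path) for path in paths)
        dist[sp] += prob
    return dict(dist)

print(shortest_path_distribution(arcs, paths))   # masses on lengths 2, 3, 4
```

Enumeration is exponential in the number of stochastic arcs, which is why decomposing the network into smaller conditioned subnetworks, as the paper does, matters for anything beyond toy sizes.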

  5. Skeletal dosimetry: A hyperboloid representation of the bone-marrow interface to reduce voxel effects in three-dimensional images of trabecular bone

    NASA Astrophysics Data System (ADS)

    Rajon, Didier Alain

    Radiation damage to the hematopoietic bone marrow is clearly defined as the limiting factor to the development of internal emitter therapies. Current dosimetry models rely on chord-length distributions measured through the complex microstructure of the trabecular bone regions of the skeleton in which most of the active marrow is located. Recently, Nuclear Magnetic Resonance (NMR) has been used to obtain high-resolution three-dimensional (3D) images of small trabecular bone samples. These images have been coupled with computer programs to estimate dosimetric parameters such as chord-length distributions, and energy depositions by monoenergetic electrons. This new technique is based on the assumption that each voxel of the image is assigned either to bone tissue or to marrow tissue after application of a threshold value. Previous studies showed that this assumption had important consequences on the outcome of the computer calculations. Both the chord-length distribution measurements and the energy deposition calculations are subject to voxel effects that are responsible for large discrepancies when applied to mathematical models of trabecular bone. The work presented in this dissertation proposes first a quantitative study of the voxel effects. Consensus is that the voxelized representation of surfaces should not be used as direct input to dosimetry computer programs. Instead we need a new technique to transform the interfaces into smooth surfaces. The Marching Cube (MC) algorithm was used and adapted to do this transformation. The initial image was used to generate a continuous gray-level field throughout the image. The interface between bone and marrow was then simulated by the iso-gray-level surface that corresponds to a predetermined threshold value. Calculations were then performed using this new representation. Excellent results were obtained for both the chord-length distribution and the energy deposition measurements. 
Voxel effects were reduced to an acceptable level and the discrepancies found when using the voxelized representation of the interface were reduced to a few percent. We conclude that this new model should be used every time one performs dosimetry estimates using NMR images of trabecular bone samples.

  6. 5 CFR 838.623 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.623 Section 838.623 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE... Employee Annuities or Refunds of Employee Contributions Computation of Benefits § 838.623 Computing lengths...

  7. Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign

    PubMed Central

    2007-01-01

    Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources.
Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
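The constraint-construction step this record describes (additively combining alignment and insertion posteriors, then thresholding the co-incidence probabilities) can be sketched as follows; function and variable names are illustrative assumptions, not taken from the Dynalign code:

```python
def allowed_pairs(p_align, p_ins_a, p_ins_b, threshold=0.01):
    """Build an alignment constraint set from posterior probabilities.

    p_align[i][j]  : posterior that position i of sequence A aligns to j of B
    p_ins_a, p_ins_b : insertion posteriors contributing to co-incidence
    Pairs whose additively combined co-incidence probability clears the
    threshold are permitted in the subsequent joint fold/alignment search.
    """
    n, m = len(p_align), len(p_align[0])
    allowed = set()
    for i in range(n):
        for j in range(m):
            # additive combination of alignment and insertion posteriors
            p = p_align[i][j] + p_ins_a[i][j] + p_ins_b[i][j]
            if p >= threshold:
                allowed.add((i, j))
    return allowed
```

Lowering the threshold admits more pairs (slower but safer); raising it prunes the search space, which is the source of the reported time and memory savings.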

  8. Using Variable-Length Aligned Fragment Pairs and an Improved Transition Function for Flexible Protein Structure Alignment.

    PubMed

    Cao, Hu; Lu, Yonggang

    2017-01-01

    With the rapid growth in the number of known protein 3D structures, efficiently comparing protein structures has become an essential and challenging problem in computational structural biology. At present, many protein structure alignment methods have been developed. Among all these methods, flexible structure alignment methods are shown to be superior to rigid structure alignment methods in identifying structure similarities between proteins that have undergone conformational changes. It is also found that the methods based on aligned fragment pairs (AFPs) have a special advantage over other approaches in balancing global structure similarities and local structure similarities. Accordingly, we propose a new flexible protein structure alignment method based on variable-length AFPs. Compared with other methods, the proposed method possesses three main advantages. First, it is based on variable-length AFPs. The length of each AFP is separately determined to maximally represent a local similar structure fragment, which reduces the number of AFPs. Second, it uses local coordinate systems, which simplify the computation at each step of the expansion of AFPs during the AFP identification. Third, it decreases the number of twists by rewarding the situation where nonconsecutive AFPs share the same transformation in the alignment, which is realized by dynamic programming with an improved transition function. The experimental data show that compared with FlexProt, FATCAT, and FlexSnap, the proposed method can achieve comparable results by introducing fewer twists. Meanwhile, it can generate results similar to those of the FATCAT method in much less running time due to the reduced number of AFPs.

  9. Multidimensional CAT Item Selection Methods for Domain Scores and Composite Scores: Theory and Applications

    ERIC Educational Resources Information Center

    Yao, Lihua

    2012-01-01

    Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability or reduce test length when compared with unidimensional CAT or with the paper-and-pencil test. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation by varying the structure…

  10. How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation

    ERIC Educational Resources Information Center

    Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard

    2006-01-01

    Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…

  11. Comparison of Image Quality and Radiation Dose of Coronary Computed Tomography Angiography Between Conventional Helical Scanning and a Strategy Incorporating Sequential Scanning

    PubMed Central

    Einstein, Andrew J.; Wolff, Steven D.; Manheimer, Eric D.; Thompson, James; Terry, Sylvia; Uretsky, Seth; Pilip, Adalbert; Peters, M. Robert

    2009-01-01

    Radiation dose from coronary computed tomography angiography may be reduced using a sequential scanning protocol rather than a conventional helical scanning protocol. Here we compare radiation dose and image quality from coronary computed tomography angiography in a single center between an initial period during which helical scanning with electrocardiographically-controlled tube current modulation was used for all patients (n=138) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=261). Using the sequential-if-appropriate strategy, sequential scanning was employed in 86.2% of patients. Compared to the helical-only strategy, this strategy was associated with a 65.1% dose reduction (mean dose-length product of 305.2 vs. 875.1 mGy·cm and mean effective dose of 5.2 mSv vs. 14.9 mSv, respectively), with no significant change in overall image quality, step artifacts, motion artifacts, or perceived image noise. For the 225 patients undergoing sequential scanning, the dose-length product was 201.9 ± 90.0 mGy·cm, while for patients undergoing helical scanning under either strategy, the dose-length product was 890.9 ± 293.3 mGy·cm (p<0.0001), corresponding to mean effective doses of 3.4 mSv and 15.1 mSv, respectively, a 77.5% reduction. Image quality was significantly greater for the sequential studies, reflecting the poorer image quality in patients undergoing helical scanning in the sequential-if-appropriate strategy. In conclusion, a sequential-if-appropriate diagnostic strategy reduces dose markedly compared to a helical-only strategy, with no significant difference in image quality. PMID:19892048

  12. Role of spatial averaging in multicellular gradient sensing.

    PubMed

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
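The covariance argument in this record follows from a standard variance identity (written here in generic notation, not the paper's):

```latex
\operatorname{Var}(c_1 - c_2) \;=\; \operatorname{Var}(c_1) + \operatorname{Var}(c_2) - 2\,\operatorname{Cov}(c_1, c_2)
```

Gradient sensing estimates the difference of two measured concentrations, $c_1 - c_2$. Transverse spatial averaging lowers both variance terms, but it also lowers the covariance term; when the covariance falls faster than the variances, the variance of the difference grows and gradient-sensing precision decreases, exactly the effect described for the local excitation-global inhibition model.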

  13. Role of spatial averaging in multicellular gradient sensing

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  14. Lift and center of pressure of wing-body-tail combinations at subsonic, transonic, and supersonic speeds

    NASA Technical Reports Server (NTRS)

    Pitts, William C; Nielsen, Jack N; Kaattari, George E

    1957-01-01

    A method is presented for calculating the lift and centers of pressure of wing-body and wing-body-tail combinations at subsonic, transonic, and supersonic speeds. A set of design charts and a computing table are presented which reduce the computations to routine operations. Comparison between the estimated and experimental characteristics for a number of wing-body and wing-body-tail combinations shows correlation to within ±10 percent on lift and to within about ±0.02 of the body length on center of pressure.

  15. Does Foot Anthropometry Predict Metabolic Cost During Running?

    PubMed

    van Werkhoven, Herman; Piazza, Stephen J

    2017-10-01

    Several recent investigations have linked running economy to heel length, with shorter heels being associated with less metabolic energy consumption. It has been hypothesized that shorter heels require larger plantar flexor muscle forces, thus increasing tendon energy storage and reducing metabolic cost. The goal of this study was to investigate this possible mechanism for metabolic cost reduction. Fifteen male subjects ran at 16 km·h⁻¹ on a treadmill and subsequently on a force-plate instrumented runway. Measurements of oxygen consumption, kinematics, and ground reaction forces were collected. Correlational analyses were performed between oxygen consumption and anthropometric and kinetic variables associated with the ankle and foot. Correlations were also computed between kinetic variables (peak joint moment and peak tendon force) and heel length. Estimated peak Achilles tendon force normalized to body weight was found to be strongly correlated with heel length normalized to body height (r = -.751, p = .003). Neither heel length nor any other measured or calculated variable was correlated with oxygen consumption, however. Subjects with shorter heels experienced larger Achilles tendon forces, but these forces were not associated with reduced metabolic cost. None of the other anthropometric or kinetic variables considered explained the variance in metabolic cost across individuals.

  16. Reynolds-Averaged Navier-Stokes Solutions to Flat Plate Film Cooling Scenarios

    NASA Technical Reports Server (NTRS)

    Johnson, Perry L.; Shyam, Vikram; Hah, Chunill

    2011-01-01

    The predictions of several Reynolds-Averaged Navier-Stokes solutions for a baseline film cooling geometry are analyzed and compared with experimental data. The Fluent finite volume code was used to perform the computations with the realizable k-epsilon turbulence model. The film hole was angled at 35 to the crossflow with a Reynolds number of 17,400. Multiple length-to-diameter ratios (1.75 and 3.5) as well as momentum flux ratios (0.125 and 0.5) were simulated with various domains, boundary conditions, and grid refinements. The coolant to mainstream density ratio was maintained at 2.0 for all scenarios. Computational domain and boundary condition variations show the ability to reduce the computational cost as compared to previous studies. A number of grid refinement and coarsening variations are compared for further insights into the reduction of computational cost. Liberal refinement in the near hole region is valuable, especially for higher momentum jets that tend to lift-off and create a recirculating flow. A lack of proper refinement in the near hole region can severely diminish the accuracy of the solution, even in the far region. The effects of momentum ratio and hole length-to-diameter ratio are also discussed.

  17. Pseudo-orthogonalization of memory patterns for associative memory.

    PubMed

    Oku, Makito; Makino, Takaki; Aihara, Kazuyuki

    2013-11-01

    A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of increased pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
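For ±1-valued patterns, XNOR is elementwise multiplication, so the masking-and-concatenation step can be sketched as below; the function name and seeding scheme are illustrative, not the authors' implementation:

```python
import random

def xnor_mask(patterns, seed=0):
    """Decorrelate ±1 memory patterns by XNOR-style masking.

    Each pattern is multiplied elementwise (XNOR for ±1 values) by an
    independent random mask, and the mask is concatenated to the masked
    pattern so the original remains recoverable. The stored pattern
    therefore doubles in length, matching the capacity/length trade-off
    described in the abstract.
    """
    rng = random.Random(seed)
    out = []
    for p in patterns:
        mask = [rng.choice((-1, 1)) for _ in p]
        masked = [a * b for a, b in zip(p, mask)]
        out.append(masked + mask)  # concatenation doubles pattern length
    return out
```

Multiplying the masked half by the mask half recovers the original pattern, since each mask element squares to 1.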

  18. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    PubMed

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and the novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of the adjacent object points having the same 3D value. Based on this redundancy map, N-point principle fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, object points to be involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
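The redundancy-map step above rests on ordinary run-length encoding: adjacent object points with the same value are regrouped into runs, and each run of N points can then be served by one N-point principal fringe pattern. A generic RLE sketch (not the authors' code; names are illustrative):

```python
def run_length_encode(points):
    """Group adjacent equal-valued object points into (value, run_length) pairs.

    In the CGH scheme described above, each run of N identical points
    would be handled by a single N-point principal fringe pattern,
    shrinking the number of per-point calculations.
    """
    runs = []
    for v in points:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([v, 1])    # start a new run
    return [(v, n) for v, n in runs]
```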

  19. Multiply Reduced Oligofluorenes: Their Nature and Pairing with THF-Solvated Sodium Ions

    DOE PAGES

    Wu, Qin; Zaikowski, Lori; Kaur, Parmeet; ...

    2016-07-01

    Conjugated oligofluorenes are chemically reduced up to five charges in tetrahydrofuran solvent and confirmed with clear spectroscopic evidence. Stimulated by these experimental results, we have conducted a comprehensive computational study of the electronic structure and the solvation structure of representative oligofluorene anions with a focus on the pairing between sodium ions and these multianions. In addition, using density functional theory (DFT) methods and a solvation model of both explicit solvent molecules and implicit polarizable continuum, we first elucidate the structure of tightly solvated free sodium ions, and then explore the pairing of sodium ions either in contact with reduced oligofluorenes or as solvent-separated ion pairs. Computed time-dependent-DFT absorption spectra are compared with experiments to assign the dominant ion pairing structure for each multianion. Computed ion pair binding energies further support our assignment. Lastly, the availability of different lengths and reduction levels of oligofluorenes enables us to investigate the effects of total charge and charge density on the binding with sodium ions, and our results suggest both factors play important roles in ion pairing for small molecules. However, as the oligofluorene size grows, its charge density determines the binding strength with the sodium ion.

  20. Spatial filtering and spatial primitives in early vision: an explanation of the Zöllner-Judd class of geometrical illusion.

    PubMed

    Morgan, M J; Casco, C

    1990-10-22

    The apparent length and orientation of short lines is altered when they abut against oblique lines (the Zöllner and Judd illusions). Here we present evidence that the length and orientation biases are geometrically related and probably depend upon the same underlying mechanism. Measurements were done with an 'H' figure, in which the apparent length and orientation of the cross-bar was assessed by the method of adjustment while the orientation of the outer flanking lines was varied. When the flanking lines are oblique the apparent length of the central line is reduced and its orientation is shifted so that it appears more nearly at right-angles to the obliques than is in fact the case. Measurements of the orientation and length effects were made in three observers, over a range of flanking-line angles (90, 63, 45, 34 and 27 deg) and central line lengths (9, 17, 33 and 67 arc min). The biases increased with the tilt of the flanking-lines, and decreased with central line length. The extent of the length bias could be accurately predicted from the angular shift by simple trigonometry. We describe physiological and computational models to account for the relation between the orientation and length biases.

  1. A computational algorithm addressing how vessel length might depend on vessel diameter

    Treesearch

    Jing Cai; Shuoxin Zhang; Melvin T. Tyree

    2010-01-01

    The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...

  2. Atom probe field ion microscopy and related topics: A bibliography 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godfrey, R.D.; Miller, M.K.; Russell, K.F.

    1994-10-01

    This bibliography, covering the period 1993, includes references related to the following topics: atom probe field ion microscopy (APFIM), field emission (FE), and field ion microscopy (FIM). Technique-oriented studies and applications are included. The references contained in this document were compiled from a variety of sources including computer searches and personal lists of publications. To reduce the length of this document, the references have been reduced to the minimum necessary to locate the articles. The references are listed alphabetically by authors, an Addendum of references missed in previous bibliographies is included.

  3. A method for brain 3D surface reconstruction from MR images

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin

    2014-09-01

    Because encephalic tissues are highly irregular, three-dimensional (3D) modeling of the brain always leads to complicated computation. In this paper, we explore an efficient method for brain surface reconstruction from magnetic resonance (MR) images of the head, which is helpful for surgery planning and tumor localization. A heuristic algorithm is proposed for surface triangle mesh generation with preserved features, in which the diagonal length is used as the heuristic information to optimize the shape of the triangles. The experimental results show that our approach not only reduces the computational complexity, but also achieves 3D visualization with good quality.

  4. Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats

    NASA Astrophysics Data System (ADS)

    Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.

    2018-03-01

    Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.

  5. Analyses of requirements for computer control and data processing experiment subsystems. Volume 1: ATM experiment S-056 image data processing system techniques development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The solar imaging X-ray telescope experiment (designated the S-056 experiment) is described. It will photograph the sun in the far ultraviolet or soft X-ray region. Because of the imaging characteristics of this telescope and the necessity of using special techniques for capturing images on film at these wavelengths, methods were developed for computer processing of the photographs. The problems of image restoration were addressed in order to develop and test digital computer techniques for applying a deconvolution process to restore overall S-056 image quality. Additional techniques for reducing or eliminating the effects of noise and nonlinearity in S-056 photographs were developed.

  6. SU-F-J-48: Effect of Scan Length On Magnitude of Imaging Dose in KV CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deshpande, S; Naidu, S; Sutar, A

    Purpose: To study the effect of scan length on the magnitude of imaging dose deposition in Varian kV CBCT for head-and-neck and pelvis CBCT. Methods: To study the effect of scan length, we measured imaging dose at a depth of 8 cm for head-and-neck cone beam computed tomography (CBCT) acquisition (X-ray beam energy of 100 kV and 200 degrees of gantry rotation) and at a depth of 16 cm for pelvis CBCT acquisition (X-ray beam energy of 125 kV and 360 degrees of gantry rotation) in a specially designed phantom. We used a Farmer chamber calibrated in the kV X-ray range for the measurements. Dose was measured with the default field size and with the field size reduced along the y direction to 10 cm and 5 cm. Results: As the energy of the beam decreases, the scattered radiation increases, and this contributes significantly to the dose deposited in the patient. By reducing the scan length from the default 20.6 cm to 10 cm, we found a dose reduction of 14% for the head-and-neck CBCT protocol and a reduction of 26% for the pelvis CBCT protocol. Similarly, for a scan length of 5 cm compared to the default, the dose reduction in the head-and-neck CBCT protocol is 36%, while in the pelvis CBCT protocol the dose reduction is 50%. Conclusion: By limiting the scan length we can control the scatter radiation generated and hence the dose to the patient. However, the variation in dose reduction for the same scan length between the two protocols is due to the scan geometry: the pelvis CBCT protocol uses a full rotation, while the head-and-neck CBCT protocol uses a partial rotation.

  7. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.

    2015-05-15

    Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial transmit sequence (PTS) is an attractive scheme for this purpose. Judiciously partitioning the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable-length adjacent partitioning (VL-AP) and fixed-length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed-length adjacent partitioning had better performance than variable-length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique than for the fixed- and variable-length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.
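The fixed-length adjacent partitioning step can be sketched as below: the frame is split into V contiguous sub-blocks that sum back elementwise to the original frame. The phase-rotation search that completes a PTS scheme is omitted, and all names are illustrative:

```python
def adjacent_partitions(frame, v):
    """Split an OFDM data frame into V adjacent (contiguous) sub-blocks.

    Each sub-block keeps its own contiguous samples in place and is zero
    elsewhere, so the sub-blocks sum back to the original frame. A PTS
    scheme would then weight each sub-block with a phase factor chosen
    to minimize PAPR (that search is not shown here).
    """
    n = len(frame)
    size = -(-n // v)  # ceiling division: fixed partition length
    parts = []
    for k in range(v):
        part = [0] * n
        for i in range(k * size, min((k + 1) * size, n)):
            part[i] = frame[i]
        parts.append(part)
    return parts
```

A variable-length variant would choose a different `size` per sub-block, which is the design axis the record's comparison investigates.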

  8. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; a map satisfying it is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Advanced k-epsilon modeling of heat transfer

    NASA Technical Reports Server (NTRS)

    Kwon, Okey; Ames, Forrest E.

    1995-01-01

    This report describes two approaches to low Reynolds-number k-epsilon turbulence modeling which formulate the eddy viscosity on the wall-normal component of turbulence and a length scale. The wall-normal component of turbulence is computed via integration of the energy spectrum based on the local dissipation rate and is bounded by the isotropic condition. The models account for the anisotropy of the dissipation and the reduced mixing length due to the high strain rates present in the near-wall region. The turbulent kinetic energy and its dissipation rate were computed from the k and epsilon transport equations of Durbin. The models were tested for a wide range of turbulent flows and proved to be superior to other k-epsilon models, especially for nonequilibrium anisotropic flows. For the prediction of airfoil heat transfer, the models included a set of empirical correlations for predicting laminar-turbulent transition and laminar heat transfer augmentation due to the presence of freestream turbulence. The predictions of surface heat transfer were generally satisfactory.

  10. Structural and dynamic analysis of an ultra short intracavity directional coupler

    NASA Astrophysics Data System (ADS)

    Gravé, Ilan; Griffel, Giora; Daou, Youssef; Golan, Gadi

    1997-01-01

    A recently proposed intracavity directional coupler is analysed. Exact analytic expressions for important parameters such as the transmission ratio, the coupling length, and the photon lifetime are given. We show that by controlling the mirror reflectivities of the cavity, it is theoretically possible to reduce the coupling length to a zero limit. The photon lifetime, which governs the dynamic properties of the structure, sets an upper frequency limit of a few hundred GHz, which is well over the bandwidth limitation of microwave lumped or travelling-wave electrodes. This novel family of intracavity couplers has important applications in the realization of integrated optics circuits for high-speed computing, data processing, and communication.

  11. On the number of multiplications necessary to compute a length-2^n DFT

    NASA Technical Reports Server (NTRS)

    Heideman, M. T.; Burrus, C. S.

    1986-01-01

    The number of multiplications necessary and sufficient to compute a length-2^n DFT is determined. The method of derivation is shown to apply to the multiplicative complexity results of Winograd (1980, 1981) for a length-p^n DFT, for p an odd prime number. The multiplicative complexity of the one-dimensional DFT is summarized for many possible lengths.

  13. The inertial power and inertial force of robotic and natural bat wing

    NASA Astrophysics Data System (ADS)

    Yin, Dongfu; Zhang, Zhisheng

    2016-03-01

    Based on the acquired length and angle data of bat skeletons, a four-degree-of-freedom robotic bat wing and an identical computational model with flap, sweep, elbow and wrist motions were presented. By considering the digit motions, a biomimetic bat skeleton model with seven degrees of freedom was established as well. The effects of frequency, amplitude and downstroke ratio, as well as the components of inertial power and force in different directions, were studied. The experimental and computational results indicated that the inertial power and force were largest in the flap direction, and that folding the wing during the upstroke could reduce the inertial power and force.

  14. Computational knee ligament modeling using experimentally determined zero-load lengths.

    PubMed

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to altering of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models.

  15. Super-Cavitating Flow Around Two-Dimensional Conical, Spherical, Disc and Stepped Disc Cavitators

    NASA Astrophysics Data System (ADS)

    Sooraj, S.; Chandrasekharan, Vaishakh; Robson, Rony S.; Bhanu Prakash, S.

    2017-08-01

    A super-cavitating object is a high-speed submerged object designed to initiate a cavitation bubble at its nose that extends past the aft end of the object, substantially reducing the skin friction drag that would be present if the sides of the object were in contact with the surrounding liquid. By reducing the drag force, the energy consumed in moving faster can also be minimised. The super-cavitation behaviour of cavitators of various geometries has been studied by varying the inlet velocity. Two-dimensional computational fluid dynamics analysis has been carried out using the k-ε turbulence model. The variation of drag coefficient and cavity length with respect to cavitation number and inlet velocity is analyzed. Results showed that a conical cavitator with a wedge angle of 30° has a lower drag coefficient and a shorter cavity length than conical cavitators with wedge angles of 45° and 60°, and than the spherical, disc and stepped disc cavitators. The 60° conical cavitator and the disc cavitator have the maximum cavity length, but at a higher drag coefficient. A significant variation of the supercavitation effect is also observed between inlet velocities of 32 m/s and 40 m/s.
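    For readers unfamiliar with the non-dimensional groups used in this abstract, the sketch below shows how cavitation number and drag coefficient are computed. The water properties, drag force, and cavitator frontal area are illustrative values chosen for the example, not numbers from the study.

```python
# Illustrative computation of cavitation number and drag coefficient.
# All numerical values here are hypothetical, not taken from the study.

rho = 998.0          # water density, kg/m^3
p_inf = 101325.0     # ambient (free-stream) pressure, Pa
p_v = 2339.0         # vapour pressure of water near 20 C, Pa
V = 40.0             # inlet velocity, m/s

# Cavitation number: ratio of the pressure margin to the dynamic pressure.
sigma = (p_inf - p_v) / (0.5 * rho * V**2)

drag = 500.0         # hypothetical drag force on the cavitator, N
area = 0.001         # hypothetical cavitator frontal area, m^2

# Drag coefficient: drag normalised by dynamic pressure times frontal area.
Cd = drag / (0.5 * rho * V**2 * area)

print(round(sigma, 4), round(Cd, 3))
```

Lower sigma (higher speed or lower ambient pressure) favours a longer cavity, which is why the abstract reports cavity length against both cavitation number and inlet velocity.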

  16. A modification in the technique of computing average lengths from the scales of fishes

    USGS Publications Warehouse

    Van Oosten, John

    1953-01-01

    In virtually all studies that employ scales, otoliths, or other bony structures to obtain the growth history of fishes, it has been the custom to compute lengths for each individual fish and from these data obtain the average growth rates for any particular group. This method involves a considerable amount of mathematical manipulation, time, and effort. Theoretically, the same information should be obtainable simply by averaging the scale measurements for each year of life and the lengths of the fish employed, and computing the average lengths from these averages. This method would eliminate all calculations for individual fish. Although Van Oosten (1929: 338) pointed out the validity of this method of computation many years ago, his statements apparently have been overlooked by subsequent investigators.
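    Van Oosten's shortcut is easy to illustrate with the simple direct-proportion back-calculation (length at age = capture length × scale radius at annulus / total scale radius). The fish measurements below are invented for the example:

```python
# Hypothetical illustration of averaging scale measurements first versus
# back-calculating each fish individually (direct-proportion method).

lengths = [100.0, 120.0]       # fish length at capture (mm) -- invented data
scale_totals = [10.0, 12.0]    # total scale radius
scale_age1 = [4.0, 6.0]        # scale radius at the first annulus

# Conventional method: back-calculate every fish, then average the results.
per_fish = [L * s / S for L, s, S in zip(lengths, scale_age1, scale_totals)]
mean_per_fish = sum(per_fish) / len(per_fish)

# Shortcut: average the raw measurements first, then back-calculate once.
mean_L = sum(lengths) / len(lengths)
mean_S = sum(scale_totals) / len(scale_totals)
mean_s = sum(scale_age1) / len(scale_age1)
shortcut = mean_L * mean_s / mean_S

print(mean_per_fish, shortcut)
```

On this toy sample both routes give the same mean length at age one, while the shortcut performs a single division instead of one per fish.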

  17. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    PubMed

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to sampled data of microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with CS alone, especially at low and medium sampling rates. For sampling rates ranging from 0.2 to 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs, from which the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates, are computed. The results show that using compressed sensing along with the peak detection technique and wavelet transform reduces the recovery errors for these parameters across sampling rates.

  18. Computed tomography of the lung of healthy snakes of the species Python regius, Boa constrictor, Python reticulatus, Morelia viridis, Epicrates cenchria, and Morelia spilota.

    PubMed

    Pees, Michael; Kiefer, Ingmar; Thielebein, Jens; Oechtering, Gerhard; Krautwald-Junghanns, Maria-Elisabeth

    2009-01-01

    Thirty-nine healthy boid snakes representing six different species (Python regius, Boa constrictor, Python reticulatus, Morelia viridis, Epicrates cenchria, and Morelia spilota) were examined using computed tomography (CT) to characterize the normal appearance of the respiratory tissue. Assessment was done subjectively and densitometry was performed using a defined protocol. The length of the right lung was calculated to be 11.1% of the body length, without a significant difference between species. The length of the left lung in proportion to the right was dependent on the species examined. The most developed left lung was in P. regius (81.2%), whereas in B. constrictor, the left lung was vestigial or absent (24.7%). A median attenuation of -814.6 HU and a variability of 45.9 HU were calculated for all species with no significant difference between species. Within the species, a significantly higher attenuation was found for P. regius in the dorsal and cranial aspect of the lung compared with the ventral and caudal part. In B. constrictor, the reduced left lung was significantly hyperattenuating compared with the right lung. Results of this study emphasize the value of CT and provide basic reference data for assessment of the snake lung in these species.

  19. Secondary Structure Predictions for Long RNA Sequences Based on Inversion Excursions and MapReduce.

    PubMed

    Yehdego, Daniel T; Zhang, Boyu; Kodimala, Vikram K R; Johnson, Kyle L; Taufer, Michela; Leung, Ming-Ying

    2013-05-01

    Secondary structures of ribonucleic acid (RNA) molecules play important roles in many biological processes including gene expression and regulation. Experimental observations and computing limitations suggest that we can approach the secondary structure prediction problem for long RNA sequences by segmenting them into shorter chunks, predicting the secondary structures of each chunk individually using existing prediction programs, and then assembling the results to give the structure of the original sequence. The selection of cutting points is a crucial component of the segmenting step. Noting that stem-loops and pseudoknots always contain an inversion, i.e., a stretch of nucleotides followed closely by its inverse complementary sequence, we developed two cutting methods for segmenting long RNA sequences based on inversion excursions: the centered and the optimized methods. Each step of searching for inversions, chunking, and prediction can be performed in parallel. In this paper we use a MapReduce framework, i.e., Hadoop, to extensively explore meaningful inversion stem lengths and gap sizes for the segmentation and identify correlations between chunking methods and prediction accuracy. We show that for a set of long RNA sequences in the Rfam database, whose secondary structures are known to contain pseudoknots, our approach predicts secondary structures more accurately than methods that do not segment the sequence, when the latter predictions are possible computationally. We also show that, as sequences exceed certain lengths, some programs cannot computationally predict pseudoknots while our chunking methods can. Overall, our predicted structures still retain the accuracy level of the original prediction programs when compared with known experimental secondary structures.
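    The notion of an inversion that anchors the chunking can be sketched in a few lines. The stem length, gap limit, and RNA string below are hypothetical choices for illustration; this is not the authors' implementation:

```python
# Toy search for "inversions": a stem of k nucleotides followed within
# max_gap bases by its reverse complement -- the signature of a stem-loop.
# Parameters and the test sequence are hypothetical.

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(s):
    """Reverse complement of an RNA string."""
    return "".join(COMP[b] for b in reversed(s))

def find_inversions(seq, k=4, max_gap=6):
    hits = []
    for i in range(len(seq) - k + 1):
        stem = seq[i:i + k]
        target = revcomp(stem)
        # Look for the complementary stem within the allowed loop gap.
        lo, hi = i + k, min(len(seq), i + k + max_gap + k)
        j = seq.find(target, lo, hi)
        if j != -1:
            hits.append((i, j))  # candidate stem-loop: stem at i, inverse at j
    return hits

print(find_inversions("GGGGAAAACCCC"))
```

Each hit marks a candidate stem-loop, and in the paper's scheme such positions guide where a long sequence can be cut without severing base-paired regions.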

  20. Development of a reduced tri-propylene glycol monomethyl ether–n-hexadecane–poly-aromatic hydrocarbon mechanism and its application for soot prediction

    DOE PAGES

    Park, Seunghyun; Ra, Youngchul; Reitz, Rolf D.; ...

    2016-03-01

    A reduced chemical kinetic mechanism for Tri-Propylene Glycol Monomethyl Ether (TPGME) has been developed and applied to computational fluid dynamics (CFD) calculations for predicting combustion and soot formation processes. The reduced TPGME mechanism was combined with a reduced n-hexadecane mechanism and a Poly-Aromatic Hydrocarbon (PAH) mechanism to investigate the effect of fuel oxygenation on combustion and soot emissions. The final version of the TPGME-n-hexadecane-PAH mechanism consists of 144 species and 730 reactions and was validated against experiments in shock tubes as well as in a constant volume spray combustion vessel (CVCV) from the Engine Combustion Network (ECN). The effects of ambient temperature and varying oxygen content in the tested fuels on ignition delay, spray lift-off length and soot formation under diesel-like conditions were analyzed using multidimensional reacting flow simulations and the reduced mechanism. The results show that the present reduced mechanism gives reliable predictions of the combustion characteristics and soot formation processes. In the CVCV simulations, two important trends were identified. First, increasing the initial temperature in the CVCV shortens the ignition delay and lift-off length and reduces fuel-air mixing, thereby increasing soot levels. Secondly, fuel oxygenation introduces more oxygen into the central region of the fuel jet and reduces the residence time of the fuel-rich area in active soot-forming regions, thereby reducing soot levels.

  2. Graph Representations of Flow and Transport in Fracture Networks using Machine Learning

    NASA Astrophysics Data System (ADS)

    Srinivasan, G.; Viswanathan, H. S.; Karra, S.; O'Malley, D.; Godinez, H. C.; Hagberg, A.; Osthus, D.; Mohd-Yusof, J.

    2017-12-01

    Flow and transport of fluids through fractured systems are governed by properties and interactions at the micro-scale. Retaining information about the micro-structure, such as fracture length, orientation, aperture and connectivity, in mesh-based computational models results in solving for millions to billions of degrees of freedom and quickly renders the problem computationally intractable. Our approach represents fracture networks as graphs, mapping fractures to nodes and intersections to edges, thereby greatly reducing the computational burden. Additionally, we use machine learning techniques to build simulators on the graph representation, trained on data from mesh-based high-fidelity simulations, to speed up computation by orders of magnitude. We demonstrate our methodology on ensembles of discrete fracture networks, dividing the data into training and validation sets. Our machine-learned graph-based solvers achieve over three orders of magnitude of speedup without any significant sacrifice in accuracy.
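    The graph reduction itself is straightforward to sketch. The tiny fracture network below is invented, and a plain breadth-first search stands in for the far richer machine-learned solvers described in the abstract:

```python
# Minimal sketch of the fracture-network graph reduction: each fracture
# becomes a node, each intersection an edge, and connectivity queries become
# cheap graph traversals instead of large meshed simulations.
# The network below is hypothetical.
from collections import deque

fractures = ["inflow", "F1", "F2", "F3", "outflow", "F4"]   # F4 is isolated
intersections = [("inflow", "F1"), ("F1", "F2"),
                 ("F2", "outflow"), ("F1", "F3")]

# Build an undirected adjacency structure from the intersection list.
adj = {f: set() for f in fractures}
for a, b in intersections:
    adj[a].add(b)
    adj[b].add(a)

def reachable(src):
    """BFS over the fracture graph -- O(nodes + edges), no mesh required."""
    seen, queue = {src}, deque([src])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("inflow")))
```

A query such as "can fluid injected at the inlet reach the outlet?" reduces to a set-membership check on the BFS result, which is the kind of structural information the graph representation preserves at negligible cost.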

  3. A modified cross-correlation method for white-light optical fiber extrinsic Fabry-Perot interferometric hydrogen sensors

    NASA Astrophysics Data System (ADS)

    Yang, Zhen; Zhang, Min; Liao, Yanbiao; Lai, Shurong; Tian, Qian; Li, Qisheng; Zhang, Yi; Zhuang, Zhi

    2009-11-01

    An extrinsic Fabry-Perot interferometric (EFPI) optical fiber hydrogen sensor based on a palladium-silver (Pd-Ag) film is designed for hydrogen leakage detection. A modified cross-correlation signal processing method for the optical fiber EFPI hydrogen sensor is presented. By applying a special correlation factor that accounts for the effect of gap length and wavelength on fringe visibility, the cross-correlation method achieves high accuracy and is insensitive to light-source power drift and changes in attenuation in the fiber; a segment search method is employed to reduce computation, so demodulation is fast. A Fabry-Perot gap length resolution of better than 0.2 nm is achieved in a certain concentration of hydrogen.
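    A bare-bones version of correlation-based gap demodulation can be sketched as follows. This omits the paper's visibility-aware correlation factor and segment search, and uses simulated data on a coarse 100 nm candidate grid; all numbers are illustrative:

```python
# Toy correlation-based demodulation of an EFPI cavity gap: the candidate
# gap whose model fringe best correlates with the measured spectrum wins.
# Wavelength range, gap, and grid are hypothetical illustration values.
import math

wavelengths = [1.3e-6 + k * 1e-9 for k in range(501)]   # 1.3-1.8 um scan

def fringe(gap):
    # Idealised two-beam interference fringe of a Fabry-Perot cavity.
    return [math.cos(4 * math.pi * gap / w) for w in wavelengths]

def demodulate(spectrum, candidates):
    def corr(gap):
        return sum(s * m for s, m in zip(spectrum, fringe(gap)))
    return max(candidates, key=corr)

measured = fringe(10.0e-6)                       # simulated spectrum, gap = 10 um
grid = [9.5e-6 + i * 1e-7 for i in range(11)]    # 9.5-10.5 um in 100 nm steps
best = demodulate(measured, grid)
print(best)
```

The paper's contribution sits on top of this principle: the modified correlation factor keeps the peak sharp despite varying fringe visibility, and the segment search refines the coarse grid toward sub-nanometre resolution at reduced computational cost.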

  4. PIC codes for plasma accelerators on emerging computer architectures (GPUs, Multicore/Manycore CPUs)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting the way PIC codes are implemented. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy cost of data movement by using more and more cores on each compute node ("fat nodes"), with reduced clock speeds to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that can process multiple data with one arithmetic operation in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process multiple instructions on multiple data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.

  6. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Because the watermarking is reversible, the information embedded in audio signals can be extracted while the original audio data are recovered losslessly. Current reversible audio watermarking algorithms are confronted with the following problems: a relatively low signal-to-noise ratio (SNR) of the embedded audio; a large amount of auxiliary embedding-location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to produce the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control.
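    The core prediction-error-expansion identity on which such schemes build can be sketched in a few lines. This toy version uses the previous sample as the predictor and omits the optimized prediction coefficients, histogram shifting, and threshold control described above; the sample values are invented:

```python
# Bare-bones prediction-error expansion (PEE) on integer audio samples.
# Predictor: the previous (already watermarked) sample, so the decoder can
# reproduce it exactly. Real schemes add optimized coefficients, histogram
# shifting and overflow handling on top of this identity.

def embed(samples, bits):
    out = [samples[0]]                 # first sample kept as an anchor
    it = iter(bits)
    for x in samples[1:]:
        pred = out[-1]                 # previous *watermarked* sample
        e = x - pred                   # prediction error
        b = next(it, 0)                # payload bit (0 once exhausted)
        out.append(pred + 2 * e + b)   # expanded error carries the bit
    return out

def extract(marked):
    bits, rec = [], [marked[0]]
    for i, y in enumerate(marked[1:], 1):
        pred = marked[i - 1]           # same predictor as the embedder
        e2 = y - pred
        bits.append(e2 & 1)            # LSB of the expanded error = payload
        rec.append(pred + (e2 >> 1))   # losslessly restore the sample
    return bits, rec

marked = embed([10, 12, 11], [1, 0])
bits, recovered = extract(marked)
print(marked, bits, recovered)
```

Note that Python's arithmetic right shift keeps the identity exact for negative errors as well, which is what makes the recovery lossless.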

  7. Essentially Entropic Lattice Boltzmann Model

    NASA Astrophysics Data System (ADS)

    Atif, Mohammad; Kolluru, Praveen Kumar; Thantanapally, Chakradhar; Ansumali, Santosh

    2017-12-01

    The entropic lattice Boltzmann model (ELBM), a discrete space-time kinetic theory for hydrodynamics, ensures nonlinear stability via the discrete time version of the second law of thermodynamics (the H theorem). Compliance with the H theorem is numerically enforced in this methodology and involves a search for the maximal discrete path length corresponding to the zero dissipation state by iteratively solving a nonlinear equation. We demonstrate that an exact solution for the path length can be obtained by assuming a natural criterion of negative entropy change, thereby reducing the problem to solving an inequality. This inequality is solved by creating a new framework for the construction of Padé approximants via quadrature on an appropriate convex function. This exact solution also resolves the issue of indeterminacy in the case of nonexistence of the entropic involution step. Since our formulation is devoid of complex mathematical library functions, the computational cost is drastically reduced. To illustrate this, we have simulated a model setup of flow over the NACA-0012 airfoil at a Reynolds number of 2.88 × 10^6.

  8. A MHD channel study for the ETF conceptual design

    NASA Technical Reports Server (NTRS)

    Wang, S. Y.; Staiger, P. J.; Smith, J. M.

    1981-01-01

    The procedures and computations used to identify an MHD channel for a 540 MW(I) ETF-scale plant are presented. Under the assumed constraints of maximum E(x), E(y), J(y) and beta, results show the best plant performance is obtained for an active length L of approximately 12 m, whereas in the initial ETF studies L was approximately 16 m. As the MHD channel length is reduced from 16 m, the channel enthalpy extraction falls off slowly. This tends to reduce the MHD power output; however, the shorter channels result in lower heat losses to the MHD channel cooling water, which allows the incorporation of more low-pressure boiler feedwater heaters into the system and an increase in steam plant efficiency. The net result of these changes is a net increase in the overall MHD/steam plant efficiency. In addition to the sensitivity of various channel parameters, the trade-offs between the level of oxygen enrichment and the electrical stress on the channel are also discussed.

  9. Distributed support modelling for vertical track dynamic analysis

    NASA Astrophysics Data System (ADS)

    Blanco, B.; Alonso, A.; Kari, L.; Gil-Negrete, N.; Giménez, J. G.

    2018-04-01

    The finite length nature of rail-pad supports is characterised by a Timoshenko beam element formulation over an elastic foundation, giving rise to the distributed support element. The new element is integrated into a vertical track model, which is solved in frequency and time domain. The developed formulation is obtained by solving the governing equations of a Timoshenko beam for this particular case. The interaction between sleeper and rail via the elastic connection is considered in an analytical, compact and efficient way. The modelling technique results in realistic amplitudes of the 'pinned-pinned' vibration mode and, additionally, it leads to a smooth evolution of the contact force temporal response and to reduced amplitudes of the rail vertical oscillation, as compared to the results from concentrated support models. Simulations are performed for both parametric and sinusoidal roughness excitation. The model of support proposed here is compared with a previous finite length model developed by other authors, coming to the conclusion that the proposed model gives accurate results at a reduced computational cost.

  11. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that allows obtaining high-resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or human-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to use image stitching. We expect the scanned object to have straight boundaries and periodic structure, which allows us to introduce regularization into the stitching problem and adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.

  12. Accurate atomistic potentials and training sets for boron-nitride nanostructures

    NASA Astrophysics Data System (ADS)

    Tamblyn, Isaac

    Boron nitride nanotubes (BNNTs) exhibit exceptional structural, mechanical, and thermal properties. They are optically transparent and have high thermal stability, suggesting a wide range of opportunities for structural reinforcement of materials. Modeling can play an important role in determining the optimal approach to integrating nanotubes into a supporting matrix. Developing accurate, atomistic-scale models of such nanoscale interfaces embedded within composites is challenging, however, due to the mismatch of length scales involved. Typical nanotube diameters range from 5-50 nm, with lengths as large as a micron (i.e. a length scale relevant for structural reinforcement). Unlike the case of their carbon-based counterparts, well-tested and transferable interatomic force fields are not common for BNNTs. In light of this, we have developed an extensive training database of BN-rich materials under conditions relevant to BNNT synthesis and composites, based on extensive first-principles molecular dynamics simulations. Using these data, we have produced an artificial neural network potential capable of reproducing the accuracy of first-principles data at significantly reduced computational cost, allowing for accurate simulation at the much larger length scales needed for composite design.

  13. Does Content Knowledge Affect TOEFL iBT[TM] Reading Performance? A Confirmatory Approach to Differential Item Functioning. TOEFL iBT Research Report. RR-09-29

    ERIC Educational Resources Information Center

    Liu, Ou Lydia; Schedl, Mary; Malloy, Jeanne; Kong, Nan

    2009-01-01

    The TOEFL iBT[TM] has increased the length of the reading passages in the reading section compared to the passages on the TOEFL[R] computer-based test (CBT) to better approximate academic reading in North American universities, resulting in a reduced number of passages in the reading test. A concern arising from this change is whether the decrease…

  14. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  15. paraGSEA: a scalable approach for large-scale gene expression profiling

    PubMed Central

    Peng, Shaoliang; Yang, Shunyun

    2017-01-01

    More and more studies use gene expression similarity to identify functional connections among genes, diseases, and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the significance estimation and multiple hypothesis testing steps, its scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters, with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to about half an hour on a 1000-node cluster on Tianhe-2, or within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
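    For context, the per-gene-set running-sum statistic that GSEA implementations repeatedly evaluate (and that paraGSEA accelerates across many profiles and gene sets) looks roughly like the unweighted sketch below; the gene names are hypothetical and this is not paraGSEA's optimized code:

```python
# Unweighted (Kolmogorov-Smirnov-style) GSEA enrichment score sketch:
# walk down the ranked gene list, stepping up at gene-set hits and down at
# misses, and report the running-sum extreme. Gene names are hypothetical.

def enrichment_score(ranked_genes, gene_set):
    hits = set(gene_set)
    n, nh = len(ranked_genes), len(hits)
    step_hit, step_miss = 1.0 / nh, 1.0 / (n - nh)
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += step_hit if g in hits else -step_miss
        if abs(running) > abs(best):
            best = running             # keep the extreme deviation from zero
    return best

es = enrichment_score(["g1", "g2", "g3", "g4"], {"g1", "g2"})
print(es)
```

A single score is cheap; the O(mn) cost the abstract refers to arises when this scan is repeated over thousands of gene sets and hundreds of thousands of profiles, which is the loop structure paraGSEA reorganizes and parallelizes.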

  16. Sound propagation in street canyons: comparison between diffusely and geometrically reflecting boundaries

    PubMed

    Kang

    2000-03-01

    This paper systematically compares the sound fields in street canyons with diffusely and geometrically reflecting boundaries. For diffuse boundaries, a radiosity-based theoretical/computer model has been developed. For geometrical boundaries, the image source method has been used. Computations using the models show that there are considerable differences between the sound fields resulting from the two kinds of boundaries. By replacing diffuse boundaries with geometrical boundaries, the sound attenuation along the length becomes significantly less; the RT30 is considerably longer; and the extra attenuation caused by air or vegetation absorption is reduced. There are also some similarities between the sound fields under the two boundary conditions. For example, in both cases the sound attenuation along the length with a given amount of absorption is the highest if the absorbers are arranged on one boundary and the lowest if they are evenly distributed on all boundaries. Overall, the results suggest that, from the viewpoint of urban noise reduction, it is better to design the street boundaries as diffusely reflective rather than acoustically smooth.

  17. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  18. An explanation of the relationship between mass, metabolic rate and characteristic length for placental mammals

    PubMed Central

    2015-01-01

    The Mass, Metabolism and Length Explanation (MMLE) was advanced in 1984 to explain the relationship between metabolic rate and body mass for birds and mammals. This paper reports on a modernized version of MMLE. MMLE deterministically computes the absolute value of Basal Metabolic Rate (BMR) and body mass for individual animals. MMLE is thus distinct from other examinations of these topics that use species-averaged data to estimate the parameters in a statistically best-fit power law relationship such as BMR = a(body mass)^b. Beginning with the proposition that BMR is proportional to the number of mitochondria in an animal, two primary equations are derived that compute BMR and body mass as functions of an individual animal’s characteristic length and sturdiness factor. The characteristic length is a measurable skeletal length associated with an animal’s means of propulsion. The sturdiness factor expresses how sturdy or gracile an animal is. Eight other parameters occur in the equations that vary little among animals in the same phylogenetic group. The present paper modernizes MMLE by explicitly treating Froude and Strouhal dynamic similarity of mammals’ skeletal musculature, revising the treatment of BMR and using new data to estimate numerical values for the parameters that occur in the equations. A mass and length data set with 575 entries from the orders Rodentia, Chiroptera, Artiodactyla, Carnivora, Perissodactyla and Proboscidea is used. A BMR and mass data set with 436 entries from the orders Rodentia, Chiroptera, Artiodactyla and Carnivora is also used. With the estimated parameter values, MMLE can calculate characteristic length and sturdiness factor values so that every BMR and mass datum from the BMR and mass data set can be computed exactly. Furthermore, MMLE can calculate characteristic length and sturdiness factor values so that every body mass and length datum from the mass and length data set can be computed exactly. Whether or not MMLE can calculate a sturdiness factor value so that an individual animal’s BMR and body mass can be simultaneously computed given its characteristic length awaits analysis of a data set that simultaneously reports all three of these items for individual animals. However, for many of the addressed MMLE homogeneous groups, MMLE can predict the exponent obtained by regression analysis of the BMR and mass data using the exponent obtained by regression analysis of the mass and length data. This argues that MMLE may be able to compute BMR and mass simultaneously and accurately for an individual animal. PMID:26355655
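    The species-averaged alternative that MMLE is contrasted with, the power law BMR = a(body mass)^b, is ordinary least squares in log-log space. A minimal sketch of that regression (the data in the usage note are made up, not from the paper's data sets):

```python
import math

def fit_power_law(masses, bmrs):
    """Least-squares fit of BMR = a * mass**b in log-log coordinates.

    This is the species-averaged regression approach that MMLE is
    contrasted with, not MMLE itself.
    """
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in bmrs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # slope b and intercept log(a) of the log-log regression line
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = math.exp(mean_y - b * mean_x)
    return a, b
```

    For example, synthetic data generated exactly as 3.0 * mass**0.75 recovers a = 3.0 and b = 0.75.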

  19. Aberration compensation of an ultrasound imaging instrument with a reduced number of channels.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey P; Waag, Robert C

    2012-10-01

    Focusing and imaging qualities of an ultrasound imaging system that uses aberration correction were experimentally investigated as functions of the number of parallel channels. Front-end electronics that consolidate signals from multiple physical elements can be used to lower hardware and computational costs by reducing the number of parallel channels. However, the signals from sparse arrays of synthetic elements yield poorer aberration estimates. In this study, aberration estimates derived from synthetic arrays of varying element sizes are evaluated by comparing compensated receive focuses, compensated transmit focuses, and compensated B-scan images of a point target and a cyst phantom. An array of 80 x 80 physical elements with a pitch of 0.6 x 0.6 mm was used for all of the experiments, and the aberration was produced by a phantom selected to mimic propagation through the abdominal wall. The results show that aberration correction derived from synthetic arrays with pitches whose diagonal length is smaller than 70% of the correlation length of the aberration yields focuses and images of approximately the same quality. This connection between correlation length of the aberration and synthetic element size provides a guideline for determining the number of parallel channels that are required when designing imaging systems that employ aberration correction.
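    The closing design guideline can be phrased as a one-line check. The function name and parameterization below are illustrative, not from the paper:

```python
import math

def pitch_within_guideline(pitch_x, pitch_y, corr_len):
    """Check the paper's design guideline: the synthetic-element pitch
    diagonal should stay below 70% of the aberration correlation length
    (all quantities in the same units, e.g. mm)."""
    return math.hypot(pitch_x, pitch_y) < 0.7 * corr_len
```

    For instance, a 0.6 x 0.6 mm pitch (diagonal about 0.85 mm) satisfies the guideline for a 1.5 mm correlation length but not for a 1.0 mm one.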

  20. Analysis of DC and analog/RF performance on Cyl-GAA-TFET using distinct device geometry

    NASA Astrophysics Data System (ADS)

    Vishvakarma, S. K.; Beohar, Ankur; Vijayvargiya, Vikas; Trivedi, Priyal

    2017-07-01

    In this paper, the DC and analog/RF performance of the cylindrical gate-all-around tunnel field-effect transistor (TFET) is analyzed for distinct device geometries. First, the performance parameters of the GAA-TFET are analyzed in terms of drain current, gate capacitances, transconductance and source-drain conductance at different radii and channel lengths. Furthermore, the device geometry is analyzed to optimize radio-frequency parameters such as the cut-off frequency, maximum oscillation frequency and gain-bandwidth product using the 3D technology computer-aided design tool ATLAS. Because the TFET current arises from band-to-band tunneling, unlike in the MOSFET, its primary gate-bias-dependent parameters differ. We also find that the current is maximized at a Si radius of around 8 nm, owing to high gate controllability over the channel with reduced fringing effects, and that the TFET current does not change as the channel length is varied from 100 to 40 nm. However, the current starts to increase when the channel length is further reduced from 40 to 30 nm. Both of these trade-offs affect the RF performance of the device. Project supported by the Council of Scientific and Industrial Research (CSIR) Funded Research Project, Grant No. 22/0651/14/EMR-II, Government of India.

  1. The importance of structural anisotropy in computational models of traumatic brain injury.

    PubMed

    Carlsen, Rika W; Daphalapurkar, Nitin P

    2015-01-01

    Understanding the mechanisms of injury can assist the development of methods for the management and mitigation of traumatic brain injury (TBI). Computational head models can provide valuable insight into the multi-length-scale complexity associated with the primary nature of diffuse axonal injury. It involves understanding how the trauma to the head (at the centimeter length scale) translates to the white-matter tissue (at the millimeter length scale), and even further down to the axonal length scale, where physical injury to axons (e.g., axon separation) may occur. However, to accurately represent the development of TBI, the biofidelity of these computational models is of utmost importance. There has been a focused effort to improve the biofidelity of computational models by including more sophisticated material definitions and implementing physiologically relevant measures of injury. This paper summarizes recent computational studies that have incorporated structural anisotropy in both the material definition of the white matter and the injury criterion as a means to improve the predictive capabilities of computational models for TBI. We discuss the role of structural anisotropy on both the mechanical response of the brain tissue and on the development of injury. We also outline future directions in the computational modeling of TBI.

  2. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    NASA Astrophysics Data System (ADS)

    Vincenti, H.; Lobet, M.; Lehe, R.; Sasanka, R.; Vay, J.-L.

    2017-01-01

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a 2x to 2.5x speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
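    The deposition step the paper vectorizes can be illustrated in NumPy, where the per-cell reduction is delegated to the library rather than written as an explicit scatter loop. This is a sketch of generic order-1 (cloud-in-cell) deposition in 1D, not the PICSAR data structure:

```python
import numpy as np

def deposit_charge_1d(x, q, dx, n_cells):
    """Order-1 (linear/cloud-in-cell) charge deposition on a 1D grid.

    A NumPy sketch of the deposition step the paper vectorizes, not the
    PICSAR implementation: each particle spreads its charge over its two
    nearest grid points with linear weights, and np.bincount performs
    the per-cell reduction.
    """
    cell = np.floor(x / dx).astype(np.int64)   # left grid index per particle
    frac = x / dx - cell                       # normalized offset in the cell
    rho = np.zeros(n_cells + 1)
    # weighted contributions to the left and right grid points
    rho += np.bincount(cell, weights=q * (1.0 - frac), minlength=n_cells + 1)
    rho += np.bincount(cell + 1, weights=q * frac, minlength=n_cells + 1)
    return rho
```

    Linear weighting conserves total charge: the grid sum equals the sum of the particle charges.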

  3. Nanowire size dependence on sensitivity of silicon nanowire field-effect transistor-based pH sensor

    NASA Astrophysics Data System (ADS)

    Lee, Ryoongbin; Kwon, Dae Woong; Kim, Sihyun; Kim, Sangwan; Mo, Hyun-Sun; Kim, Dae Hwan; Park, Byung-Gook

    2017-12-01

    In this study, we investigated the effects of nanowire size on the current sensitivity of silicon nanowire (SiNW) ion-sensitive field-effect transistors (ISFETs). The changes in on-current (I_on) and resistance with pH were measured in fabricated SiNW ISFETs of various lengths and widths. As a result, it was revealed that the sensitivity, expressed as the relative I_on change, improves as the width decreases. Through technology computer-aided design (TCAD) simulation analysis, the width dependence of the relative I_on change can be explained by the observation that the target molecules located at the edge region along the channel width have a stronger effect on the sensitivity as the SiNW width is reduced. Additionally, the length dependence of the sensitivity can be understood in terms of the ratio of the fixed parasitic resistance, including the source/drain resistance, to the varying channel resistance as a function of channel length.

  4. Multilocus Association Mapping Using Variable-Length Markov Chains

    PubMed Central

    Browning, Sharon R.

    2006-01-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests. PMID:16685642
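    The variable-length Markov chain idea, retaining longer contexts only where the data support them, can be sketched with a crude frequency-based pruning rule. The LD-adaptive pruning described in the abstract is more sophisticated; the function below is purely illustrative:

```python
from collections import defaultdict

def fit_vlmc(seq, max_order=3, min_count=2):
    """Fit a toy variable-length Markov chain over a symbol sequence.

    Contexts of length 0..max_order are counted, and only contexts seen
    at least `min_count` times are kept -- a crude stand-in for the
    adaptive, LD-based pruning the paper describes.
    Returns {context: {next_symbol: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for order in range(max_order + 1):
        for i in range(order, len(seq)):
            counts[seq[i - order:i]][seq[i]] += 1   # context -> next symbol
    model = {}
    for ctx, nxt in counts.items():
        total = sum(nxt.values())
        if total >= min_count:                      # prune rare contexts
            model[ctx] = {s: c / total for s, c in nxt.items()}
    return model
```

    On the alternating sequence "ababab", the order-1 contexts become deterministic ("a" is always followed by "b" and vice versa), while the empty context stays uniform.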

  5. Multilocus association mapping using variable-length Markov chains.

    PubMed

    Browning, Sharon R

    2006-06-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests.

  6. Portable imaging system method and apparatus

    DOEpatents

    Freifeld, Barry M.; Kneafsley, Timothy J.; Pruess, Jacob; Tomutsa, Liviu; Reiter, Paul A.; deCastro, Ted M.

    2006-07-25

    An operator shielded X-ray imaging system has sufficiently low mass (less than 300 kg) and is compact enough to enable portability by reducing operator shielding requirements to a minimum shielded volume. The resultant shielded volume may require a relatively small mass of shielding in addition to the already integrally shielded X-ray source, intensifier, and detector. The system is suitable for portable imaging of well cores at remotely located well drilling sites. The system accommodates either small samples, or small cross-sectioned objects of unlimited length. By rotating samples relative to the imaging device, the information required for computer aided tomographic reconstruction may be obtained. By further translating the samples relative to the imaging system, fully three dimensional (3D) tomographic reconstructions may be obtained of samples having arbitrary length.

  7. Spatial correlation of the dynamic propensity of a glass-forming liquid

    NASA Astrophysics Data System (ADS)

    Razul, M. Shajahan G.; Matharoo, Gurpreet S.; Poole, Peter H.

    2011-06-01

    We present computer simulation results on the dynamic propensity (as defined by Widmer-Cooper et al 2004 Phys. Rev. Lett. 93 135701) in a Kob-Andersen binary Lennard-Jones liquid system consisting of 8788 particles. We compute the spatial correlation function for the dynamic propensity as a function of both the reduced temperature T, and the time scale on which the particle displacements are measured. For T <= 0.6, we find that non-zero correlations occur at the largest length scale accessible in our system. We also show that a cluster-size analysis of particles with extremal values of the dynamic propensity, as well as 3D visualizations, reveal spatially correlated regions that approach the size of our system as T decreases, consistent with the behavior of the spatial correlation function. Next, we define and examine the 'coordination propensity', the isoconfigurational average of the coordination number of the minority B particles around the majority A particles. We show that a significant correlation exists between the spatial fluctuations of the dynamic and coordination propensities. In addition, we find non-zero correlations of the coordination propensity occurring at the largest length scale accessible in our system for all T in the range 0.466 < T < 1.0. We discuss the implications of these results for understanding the length scales of dynamical heterogeneity in glass-forming liquids.
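    A spatial correlation function of a per-particle scalar such as the dynamic propensity can be computed along the following lines. This is a minimal O(N^2) sketch for a cubic periodic box, not the analysis code used in the study:

```python
import numpy as np

def propensity_correlation(pos, prop, r_bins, box):
    """Spatial correlation of a per-particle scalar (e.g. dynamic propensity).

    For each distance bin, average the product of mean-removed scalar
    values over particle pairs, normalized by the variance. Uses the
    minimum-image convention for a cubic periodic box of side `box`.
    """
    dp = prop - prop.mean()
    var = (dp * dp).mean()
    n = len(pos)
    # all pair separations with the minimum-image convention
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(axis=-1))
    pair_prod = dp[:, None] * dp[None, :]
    iu = np.triu_indices(n, k=1)               # unique pairs only
    which = np.digitize(r[iu], r_bins)
    corr = np.array([pair_prod[iu][which == b].mean() if (which == b).any()
                     else 0.0 for b in range(1, len(r_bins))])
    return corr / var
```

    Two anti-correlated particles a unit distance apart yield a correlation of -1 in the bin containing their separation.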

  8. A 32-bit NMOS microprocessor with a large register file

    NASA Astrophysics Data System (ADS)

    Sherburne, R. W., Jr.; Katevenis, M. G. H.; Patterson, D. A.; Sequin, C. H.

    1984-10-01

    Two scaled versions of a 32-bit NMOS reduced instruction set computer CPU, called RISC II, have been implemented on two different processing lines using the simple Mead and Conway layout rules with lambda values of 2 and 1.5 microns (corresponding to drawn gate lengths of 4 and 3 microns), respectively. The design utilizes a small set of simple instructions in conjunction with a large register file in order to provide high performance. This approach has resulted in two surprisingly powerful single-chip processors.

  9. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as “multistate” algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924

  10. Analysis of electrical characteristics and proposal of design guide for ultra-scaled nanoplate vertical FET and 6T-SRAM

    NASA Astrophysics Data System (ADS)

    Seo, Youngsoo; Kim, Shinkeun; Ko, Kyul; Woo, Changbeom; Kim, Minsoo; Lee, Jangkyu; Kang, Myounggon; Shin, Hyungcheol

    2018-02-01

    In this paper, the electrical characteristics of the gate-all-around (GAA) nanoplate (NP) vertical FET (VFET) were analyzed for a single transistor and a 6T-SRAM cell through 3D technology computer-aided design (TCAD) simulation. In a VFET, the gate and extension lengths are not limited by the device area because these lengths run vertically. The NP height is assumed to be 40 nm, considering the device fabrication method (top-down approach). For various device sizes, we analyzed performance metrics such as total resistance, capacitance, intrinsic gate delay, sub-threshold swing (S.S), drain-induced barrier lowering (DIBL) and static noise margin (SNM). As the gate length becomes larger, the resistance becomes smaller because the total NP height is fixed at 40 nm. Also, as the channel becomes thicker, the total resistance becomes smaller, since the sheet resistances of the channel and extension decrease and the contact resistance decreases owing to the larger contact area. In addition, as the channel pitch increases, the parasitic capacitance becomes larger because of the increasing gate-drain and gate-source areas. The RC delay is best for the shortest gate length (12 nm), the thickest channel (6 nm) and the shortest channel pitch (17 nm), owing to the reduced resistance and parasitic capacitance. However, the other metrics, such as DIBL, S.S, on/off ratio and SNM, are worst there because the short-channel effect is strongest. We also investigated the performance of the multi-channel device: as the number of channels increases, device performance and SRAM reliability improve because of reduced contact resistance, increased gate dimension and the multi-channel compensation effect.

  11. Project APhiD: A Lorenz-gauged A-Φ decomposition for parallelized computation of ultra-broadband electromagnetic induction in a fully heterogeneous Earth

    NASA Astrophysics Data System (ADS)

    Weiss, Chester J.

    2013-08-01

    An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and the length-scale range 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. 
High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
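    The matrix-free idea, computing matrix elements "on the fly" inside the Krylov iteration instead of storing them, can be sketched as follows. For brevity the sketch uses conjugate gradient on a symmetric 1D Laplacian rather than QMR on the actual finite-volume operator; only the matvec-only structure carries over:

```python
import numpy as np

def laplacian_1d(v):
    """Apply a 1D finite-difference Laplacian (Dirichlet ends) without
    storing the matrix: entries are generated on the fly per matvec."""
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

def solve_matrix_free(apply_A, b, tol=1e-10, max_iter=500):
    """Krylov solve that touches A only through matvec calls.

    Conjugate gradient is shown for brevity; the paper uses QMR, which
    has the same matrix-free structure (one matvec per iteration).
    """
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    Because the operator is only ever applied, memory use scales with the number of unknowns rather than the number of matrix entries, which is the efficiency the abstract describes.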

  12. Elliptic Length Scales in Laminar, Two-Dimensional Supersonic Flows

    DTIC Science & Technology

    2015-06-01

    sophisticated computational fluid dynamics (CFD) methods. Additionally, for 3D interactions, the length scales would require determination in spanwise as well... Manna, M., “Experimental, Analytical, and Computational Methods Applied to Hypersonic Compression Ramp Flows,” AIAA Journal, Vol. 32, No. 2, Feb. 1994

  13. Effective Coulomb force modeling for spacecraft in Earth orbit plasmas

    NASA Astrophysics Data System (ADS)

    Seubert, Carl R.; Stiles, Laura A.; Schaub, Hanspeter

    2014-07-01

    Coulomb formation flight is a concept that utilizes electrostatic forces to control the separations of close proximity spacecraft. The Coulomb force between charged bodies is a product of their size, separation, potential and interaction with the local plasma environment. A fast and accurate analytic method of capturing the interaction of a charged body in a plasma is shown. The Debye-Hückel analytic model of the electrostatic field about a charged sphere in a plasma is expanded to analytically compute the forces. This model is fitted to numerical simulations with representative geosynchronous and low Earth orbit (GEO and LEO) plasma environments using an effective Debye length. This effective Debye length, which more accurately captures the charge partial shielding, can be up to 7 times larger at GEO, and as great as 100 times larger at LEO. The force between a sphere and point charge is accurately captured with the effective Debye length, as opposed to the electron Debye length solutions that have errors exceeding 50%. One notable finding is that the effective Debye lengths in LEO plasmas about a charged body are increased from centimeters to meters. This is a promising outcome, as the reduced shielding at increased potentials provides sufficient force levels for operating the electrostatically inflated membrane structures concept at these dense plasma altitudes.
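    The Debye-Hückel field about a charged sphere, and the force it exerts on a point charge, can be written down directly; using the fitted effective Debye length in place of the electron Debye length is what removes the >50% force errors the abstract cites. A minimal sketch (the prefactor conventions are illustrative, not the paper's fitted model):

```python
import math

def dh_force(q_point, phi_sphere, a, r, debye_len):
    """Force on a point charge q_point at distance r from a sphere of
    radius a held at potential phi_sphere, using the Debye-Hueckel
    screened field phi(r) = phi_sphere * (a / r) * exp(-(r - a) / L),
    where L is the (effective) Debye length.

    F = -q * d(phi)/dr; as L -> infinity this reduces to the unscreened
    inverse-square Coulomb form.
    """
    screen = math.exp(-(r - a) / debye_len)
    return q_point * phi_sphere * a * screen * (1.0 / r ** 2
                                                + 1.0 / (r * debye_len))
```

    In the long-Debye-length limit the screening factor and the 1/(r*L) term vanish, leaving the familiar 1/r^2 falloff.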

  14. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  15. Electroosmosis in a Finite Cylindrical Pore: Simple Models of End Effects

    PubMed Central

    2015-01-01

    A theoretical model of electroosmosis through a circular pore of radius a that traverses a membrane of thickness h is investigated. Both the cylindrical surface of the pore and the outer surfaces of the membrane are charged. When h ≫ a, end effects are negligible, and the results of full numerical computations of electroosmosis in an infinite pore agree with theory. When h = 0, end effects dominate, and computations again agree with analysis. For intermediate values of h/a, an approximate analysis that combines these two limiting cases captures the main features of computational results when the Debye length κ–1 is small compared with the pore radius a. However, the approximate analysis fails when κ–1 ≫ a, when the charge cloud due to the charged cylindrical walls of the pore spills out of the ends of the pore, and the electroosmotic flow is reduced. When this spilling out is included in the analysis, agreement with computation is restored. PMID:25020257

  16. Ferruleless coupled-cavity traveling-wave tube cold-test characteristics simulated with micro-SOS

    NASA Technical Reports Server (NTRS)

    Schroeder, Dana L.; Wilson, Jeffrey D.

    1993-01-01

    The three-dimensional, electromagnetic circuit analysis code, Micro-SOS, can be used to reduce expensive and time consuming experimental 'cold-testing' of traveling-wave tube (TWT) circuits. The frequency-phase dispersion and beam interaction impedance characteristics of a ferruleless coupled-cavity traveling-wave tube slow-wave circuit were simulated using the code. Computer results agree closely with experimental data. Variations in the cavity geometry dimensions of period length and gap-to-period ratio were modeled. These variations can be used in velocity taper designs to reduce the radiofrequency (RF) phase velocity in synchronism with the decelerating electron beam. Such circuit designs can result in enhanced TWT power and efficiency.

  17. Numerical simulation of the vortical flow around a pitching airfoil

    NASA Astrophysics Data System (ADS)

    Fu, Xiang; Li, Gaohua; Wang, Fuxin

    2017-04-01

    In order to study the dynamic behaviors of the flapping wing, the vortical flow around a pitching NACA0012 airfoil is investigated. The unsteady flow field is obtained by a very efficient zonal procedure based on the velocity-vorticity formulation, and the Reynolds number based on the chord length of the airfoil is set to 1 million. The zonal procedure divides the whole computational domain into three zones: a potential flow zone, a boundary layer zone and a Navier-Stokes zone. Since vorticity is absent in the potential flow zone, the vorticity transport equation needs to be solved only in the boundary layer and Navier-Stokes zones. Moreover, the boundary layer equations are solved in the boundary layer zone. This arrangement drastically reduces the computation time compared with traditional numerical methods. After the flow field computation, the evolution of the vortices around the airfoil is analyzed in detail.

  18. Two-step simulation of velocity and passive scalar mixing at high Schmidt number in turbulent jets

    NASA Astrophysics Data System (ADS)

    Rah, K. Jeff; Blanquart, Guillaume

    2016-11-01

    Simulating a passive scalar in a high-Schmidt-number turbulent mixing process requires a higher computational cost than simulating the velocity fields, because the scalar is associated with smaller length scales than the velocity. Thus, full simulation of both velocity and passive scalar at high Sc for a practical configuration is difficult to perform. In this work, a new approach to simulating velocity and passive scalar mixing at high Sc is suggested to reduce the computational cost. First, the velocity fields are resolved by Large Eddy Simulation (LES). Then, using the velocity information extracted from the LES, the scalar inside a moving fluid blob is simulated by Direct Numerical Simulation (DNS). This two-step simulation method is applied to a turbulent jet and provides a new way to examine a scalar mixing process in a practical application at smaller computational cost. NSF, Samsung Scholarship.

  19. Standardization and Optimization of Computed Tomography Protocols to Achieve Low-Dose

    PubMed Central

    Chin, Cynthia; Cody, Dianna D.; Gupta, Rajiv; Hess, Christopher P.; Kalra, Mannudeep K.; Kofler, James M.; Krishnam, Mayil S.; Einstein, Andrew J.

    2014-01-01

    The increase in radiation exposure due to CT scans has been of growing concern in recent years. CT scanners differ in their capabilities and various indications require unique protocols, but there remains room for standardization and optimization. In this paper we summarize approaches to reduce dose, as discussed in lectures comprising the first session of the 2013 UCSF Virtual Symposium on Radiation Safety in Computed Tomography. The experience of scanning at low dose in different body regions, for both diagnostic and interventional CT procedures, is addressed. An essential primary step is justifying the medical need for each scan. General guiding principles for reducing dose include tailoring a scan to a patient, minimizing scan length, use of tube current modulation and minimizing tube current, minimizing tube potential, iterative reconstruction, and periodic review of CT studies. Organized efforts for standardization have been spearheaded by professional societies such as the American Association of Physicists in Medicine. Finally, all team members should demonstrate an awareness of the importance of minimizing dose. PMID:24589403

  20. Assessing self-care and social function using a computer adaptive testing version of the pediatric evaluation of disability inventory.

    PubMed

    Coster, Wendy J; Haley, Stephen M; Ni, Pengsheng; Dumas, Helene M; Fragala-Pinkham, Maria A

    2008-04-01

    To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children's homes. 469 children with disabilities and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Not applicable. Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. Self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.
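    As a concrete illustration of how a fixed-item-count stopping rule shapes a CAT session, the sketch below runs a minimal adaptive loop under a one-parameter (Rasch) item response model: pick the most informative unused item, simulate a response, re-estimate ability, stop after a fixed number of items. It is a generic illustration under assumed inputs (item bank, grid estimator, 10-item stopping rule), not the PEDI CAT algorithm.

```python
import math
import random

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def simulate_cat(true_theta, difficulties, max_items=10, rng=None):
    """Adaptive loop: select the most informative unused item, simulate a
    response from the true ability, re-estimate ability by a coarse grid
    maximum likelihood, and stop after max_items (the stopping rule)."""
    rng = rng or random.Random(0)
    theta, used, responses = 0.0, set(), []
    grid = [g / 10.0 for g in range(-40, 41)]  # ability grid from -4 to +4
    for _ in range(max_items):
        # item selection: maximize information at the current estimate
        item = max((i for i in range(len(difficulties)) if i not in used),
                   key=lambda i: rasch_info(theta, difficulties[i]))
        used.add(item)
        p = 1.0 / (1.0 + math.exp(-(true_theta - difficulties[item])))
        responses.append((difficulties[item], 1 if rng.random() < p else 0))

        def loglik(t):
            ll = 0.0
            for b, x in responses:
                pr = 1.0 / (1.0 + math.exp(-(t - b)))
                ll += math.log(pr) if x else math.log(1.0 - pr)
            return ll

        theta = max(grid, key=loglik)  # grid MLE re-estimate
    return theta, len(used)
```

    Shorter stopping rules trade a little precision for less respondent burden, which is exactly the trade-off the study quantifies.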

  1. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  2. Computerized Adaptive Assessment of Cognitive Abilities among Disabled Adults.

    ERIC Educational Resources Information Center

    Engdahl, Brian

    This study examined computerized adaptive testing and cognitive ability testing of adults with cognitive disabilities. Adult subjects (N=250) were given computerized tests on language usage and space relations in one of three administration conditions: paper and pencil, fixed length computer adaptive, and variable length computer adaptive.…

  3. Merging taper lengths for short duration lane closure : final report, December 2009.

    DOT National Transportation Integrated Search

    2009-12-01

    The Utility Industry has requested that the Florida Department of Transportation provide for the use of merging taper lengths that are significantly shorter than the lengths computed using the taper length equations published in the MUTCD Section 6C....

  4. Correlation of soft palate length with velum obstruction and severity of obstructive sleep apnea syndrome.

    PubMed

    Lim, Ju-Shin; Lee, Jae Woo; Han, Chun; Kwon, Jang-Woo

    2018-06-01

    Our aim in this study was to analyze whether soft palate length and velum obstruction during sleep are correlated and to determine the effects of related parameters on obstructive sleep apnea syndrome (OSAS) severity. We used computed tomography to measure soft palate length and drug-induced sleep endoscopy (DISE) to evaluate velum obstruction severity. Patients also underwent polysomnography (PSG) for evaluation of OSAS severity. A retrospective cohort of 67 patients with OSAS treated between May 1st, 2013 and July 31st, 2016 was analyzed. Each patient underwent DISE, PSG, and computed tomography. Using DISE, velum obstruction was categorized by the VOTE classification method. Using computed tomography, soft palate length was measured as the length from the posterior nasal spine to the uvula. Correlations of velum obstruction in DISE and PSG parameters (obstructive apnea, hypopnea, apnea hypopnea index (AHI), respiratory effort related arousal (RERA), respiratory disturbance index (RDI), baseline SaO2, and minimum SaO2) with soft palate length were also analyzed. Among the 67 patients, the average PNS-U length was 39.90 ± 4.19 mm. Length was significantly different by age but not by other demographic characteristics such as sex, past history, or BMI. DISE revealed a statistically significant difference in velum obstruction degree; the cutoff value for PNS-U was 39.47 mm. The PSG results obstructive apnea, AHI, RDI, baseline SaO2, and minimum SaO2 were correlated with PNS-U length, while other results such as hypopnea and RERA showed no correlation. Analysis of soft palate length showed that increased PNS-U length was associated with higher rates of obstructive apnea, AHI, and RDI as assessed by PSG. In contrast, lower baseline SaO2 and minimum SaO2 values were seen by PSG; more severe velum obstruction was seen by DISE. We propose that when an elongated soft palate is suspected in OSAS, computed tomography measurement of soft palate length is a valid method for estimating the degree of velum obstruction and the severity of OSAS. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Comprehension of Navigation Directions

    NASA Technical Reports Server (NTRS)

    Healy, Alice F.; Schneider, Vivian I.

    2002-01-01

    Subjects were shown navigation instructions varying in length directing them to move in a space represented by grids on a computer screen. They followed the instructions by clicking on the grids in the locations specified. Some subjects repeated back the instructions before following them, some did not, and others repeated back the instructions in reduced form, including only the critical words. The commands in each message were presented simultaneously for half of the subjects and sequentially for the others. For the longest messages, performance was better on the initial commands and worse on the final commands with simultaneous than with sequential presentation. Instruction repetition depressed performance, but reduced repetition removed this disadvantage. Effects of presentation format were attributed to visual scanning strategies. The advantage for reduced repetition was attributable either to enhanced visual scanning or to reduced output interference. A follow-up study with auditory presentation supported the visual scanning explanation.

  6. The semantic distance task: Quantifying semantic distance with semantic network path length.

    PubMed

    Kenett, Yoed N; Levi, Effi; Anaki, David; Faust, Miriam

    2017-09-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to computing semantic distance is latent semantic analysis (LSA). However, objections have been raised against this approach, mainly its failure to predict semantic priming. We propose a novel approach to computing semantic distance, based on network science methodology. Path length in a semantic network represents the number of steps needed to traverse from one word in the network to another. We examine whether path length can be used as a measure of semantic distance, by investigating how path length affects performance in a semantic relatedness judgment task and in recall from memory. Our results show a differential effect on performance: Up to 4 steps separating word-pairs, participants exhibit an increase in reaction time (RT) and a decrease in the percentage of word-pairs judged as related. From 4 steps onward, participants exhibit a significant decrease in RT and the word-pairs are dominantly judged as unrelated. Furthermore, we show that as path length between word-pairs increases, success in free- and cued-recall decreases. Finally, we demonstrate how our measure outperforms computational methods measuring semantic distance (LSA and positive pointwise mutual information) in predicting participants' RT and subjective judgments of semantic strength. Thus, we provide a computational alternative to computing semantic distance. Furthermore, this approach addresses key issues in cognitive theory, namely the breadth of the spreading activation process and the effect of semantic distance on memory retrieval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
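    Path length as used here is simply the smallest number of edges between two words in the network, which breadth-first search computes directly. A minimal sketch, assuming the semantic network is stored as an undirected adjacency dict (the toy association network below is hypothetical, for illustration only):

```python
from collections import deque

def path_length(graph, source, target):
    """Shortest number of edges between two words in an undirected
    semantic network, via breadth-first search; None if disconnected."""
    if source == target:
        return 0
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        word, dist = frontier.popleft()
        for neighbor in graph.get(word, ()):
            if neighbor == target:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return None

# toy association network (hypothetical edges, not the study's network)
net = {
    "cat": ["dog", "mouse"], "dog": ["cat", "bone"],
    "mouse": ["cat", "cheese"], "bone": ["dog"],
    "cheese": ["mouse", "milk"], "milk": ["cheese"],
}
```

    Word-pairs at path length 3 or less would fall in the study's "related" regime; longer paths in the "unrelated" regime.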

  7. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
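    The flavor of such an exhaustive computation can be sketched for small codes: enumerate every codeword m(x)g(x) mod (x^n - 1) of a binary cyclic code and take the minimum weight. The example below checks the classical (7,4) cyclic Hamming code, not the length-65 search of the paper, and computes only minimum distance, not the unequal error protection levels.

```python
from itertools import product

def polymul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists
    (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def cyclic_code_min_distance(g, n, k):
    """Exhaustively enumerate all 2^k - 1 nonzero codewords m(x)g(x)
    mod (x^n - 1) of an (n, k) binary cyclic code with generator
    polynomial g, and return the minimum Hamming weight."""
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue
        word = polymul_gf2(list(msg), g)
        cw = [0] * n
        for i, c in enumerate(word):  # reduce mod x^n - 1 (wrap around)
            cw[i % n] ^= c
        best = min(best, sum(cw))
    return best
```

    For n = 7 and g(x) = 1 + x + x^3 this recovers the Hamming code's minimum distance of 3; the paper's search does the analogous enumeration for every cyclic code of odd length up to 65.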

  8. Cocrystallization studies of full-length recombinant butyrylcholinesterase (BChE) with cocaine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asojo, Oluwatoyin Ajibola; Asojo, Oluyomi Adebola; Ngamelue, Michelle N.

    Human butyrylcholinesterase (BChE; EC 3.1.1.8) is a 340 kDa tetrameric glycoprotein that is present in human serum at about 5 mg l⁻¹ and has well documented therapeutic effects on cocaine toxicity. BChE holds promise as a therapeutic that reduces and finally eliminates the rewarding effects of cocaine, thus weaning an addict from the drug. There have been extensive computational studies of cocaine hydrolysis by BChE. Since there are no reported structures of BChE with cocaine or any of the hydrolysis products, full-length monomeric recombinant wild-type BChE was cocrystallized with cocaine. The refined 3 Å resolution structure appears to retain the hydrolysis product benzoic acid in sufficient proximity to form a hydrogen bond to the active-site Ser198.

  9. Wavelet phase extracting demodulation algorithm based on scale factor for optical fiber Fabry-Perot sensing.

    PubMed

    Zhang, Baolin; Tong, Xinglin; Hu, Pan; Guo, Qian; Zheng, Zhiyuan; Zhou, Chaoran

    2016-12-26

    Optical fiber Fabry-Perot (F-P) sensors have been used for various kinds of on-line monitoring of physical parameters such as acoustics, temperature and pressure. In this paper, a wavelet phase extracting demodulation algorithm for optical fiber F-P sensing is first proposed. In this demodulation algorithm, the search range of the scale factor is determined by an estimated cavity length, which is obtained by a fast Fourier transform (FFT) algorithm. Phase information of each point on the optical interference spectrum can be directly extracted through the continuous complex wavelet transform without de-noising. The cavity length of the optical fiber F-P sensor is then calculated from the slope of the fitted phase curve. Theoretical analysis and experimental results show that this algorithm can greatly reduce the amount of computation and improve demodulation speed and accuracy.
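    The two-stage idea, a coarse cavity length from the FFT of the interference spectrum followed by a fine estimate from the slope of the fringe phase, can be sketched as below. This is a simplified stand-in under stated assumptions (two-beam cosine fringe model, FFT-based analytic signal instead of a continuous complex wavelet transform), not the authors' algorithm.

```python
import numpy as np

def estimate_cavity_length(k, spectrum):
    """Two-step estimate in the spirit of phase-slope demodulation:
    (1) coarse cavity length from the FFT peak of the fringes,
    (2) fine length from a linear fit to the unwrapped fringe phase.
    Assumed fringe model: spectrum ~ cos(2*L*k) on uniform wavenumber
    samples k (a two-beam approximation of a low-finesse F-P fringe)."""
    n = len(k)
    dk = k[1] - k[0]
    centred = spectrum - spectrum.mean()
    # step 1: FFT peak -> coarse optical path difference 2L (rad per unit k)
    opd_axis = np.fft.rfftfreq(n, d=dk / (2 * np.pi))
    coarse_L = opd_axis[np.argmax(np.abs(np.fft.rfft(centred)))] / 2.0
    # step 2: crude analytic signal (one-sided spectrum), unwrap its phase,
    # fit the slope over the central half to avoid edge distortion
    spec = np.fft.fft(centred)
    spec[n // 2 + 1:] = 0.0
    spec[1:n // 2] *= 2.0
    phase = np.unwrap(np.angle(np.fft.ifft(spec)))
    sl = np.polyfit(k[n // 4:3 * n // 4], phase[n // 4:3 * n // 4], 1)[0]
    return coarse_L, sl / 2.0
```

    The coarse estimate is limited by FFT bin resolution; the phase-slope fit refines it, which mirrors why the paper seeds the wavelet scale-factor search with the FFT result.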

  10. Ventricular flow dynamics with varying LVAD inflow cannula lengths: In-silico evaluation in a multiscale model.

    PubMed

    Liao, Sam; Neidlin, Michael; Li, Zhiyong; Simpson, Benjamin; Gregory, Shaun D

    2018-04-27

    Left ventricular assist devices are associated with thromboembolic events, which are potentially caused by altered intraventricular flow. Due to patient variability, differences in apical wall thickness affect cannula insertion lengths, potentially promoting unfavourable intraventricular flow patterns which are thought to be correlated with the risk of thrombosis. This study aimed to present a 3D multiscale computational fluid dynamic model of the left ventricle (LV), developed using commercial software, Ansys, and to evaluate the risk of thrombosis with varying inflow cannula insertion lengths in a severely dilated LV. Based on a HeartWare HVAD inflow cannula, the insertion lengths represented a case of apical hypertrophy (5 mm), the typical range of apical thicknesses (19 and 24 mm) and an experimental length (50 mm). The risk of thrombosis was evaluated based on blood washout, residence time, instantaneous blood stagnation and a pulsatility index. By introducing fresh blood to displace pre-existing blood in the LV, after 5 cardiac cycles, 46.7%, 45.7%, 45.1% and 41.8% of pre-existing blood remained for insertion lengths of 5, 19, 24 and 50 mm, respectively. Compared to the 50 mm insertion, blood residence time was at least 9%, 7% and 6% higher with the 5, 19 and 24 mm insertion lengths, respectively. No instantaneous stagnation at the apex was observed directly after the E-wave. Pulsatility indices adjacent to the cannula increased with shorter insertion lengths. For the specific scenario studied, a longer insertion length, relative to LV size, may be advantageous to minimise thrombosis by increasing LV washout and reducing blood residence time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Effects of dry period length on production, cash flows and greenhouse gas emissions of the dairy herd: A dynamic stochastic simulation model

    PubMed Central

    van Middelaar, Corina E.; Mostert, Pim F.; van Knegsel, Ariëtte T. M.; Kemp, Bas; de Boer, Imke J. M.; Hogeveen, Henk

    2017-01-01

    Shortening or omitting the dry period of dairy cows improves metabolic health in early lactation and reduces management transitions for dairy cows. The success of implementation of these strategies depends on their impact on milk yield and farm profitability. Insight in these impacts is valuable for informed decision-making by farmers. The aim of this study was to investigate how shortening or omitting the dry period of dairy cows affects production and cash flows at the herd level, and greenhouse gas emissions per unit of milk, using a dynamic stochastic simulation model. The effects of dry period length on milk yield and calving interval assumed in this model were derived from actual performance of commercial dairy cows over multiple lactations. The model simulated lactations, and calving and culling events of individual cows for herds of 100 cows. Herds were simulated for 5 years with a dry period of 56 (conventional), 28 or 0 days (n = 50 herds each). Partial cash flows were computed from revenues from sold milk, calves, and culled cows, and costs from feed and rearing youngstock. Greenhouse gas emissions were computed using a life cycle approach. A dry period of 28 days reduced milk production of the herd by 3.0% in years 2 through 5, compared with a dry period of 56 days. A dry period of 0 days reduced milk production by 3.5% in years 3 through 5, after a dip in milk production of 6.9% in year 2. On average, dry periods of 28 and 0 days reduced partial cash flows by €1,249 and €1,632 per herd per year, and increased greenhouse gas emissions by 0.7% and 0.5%, respectively. Considering the potential for enhancing cow welfare, these negative impacts of shortening or omitting the dry period seem justifiable, and they might even be offset by improved health. PMID:29077739

  12. Effects of dry period length on production, cash flows and greenhouse gas emissions of the dairy herd: A dynamic stochastic simulation model.

    PubMed

    Kok, Akke; van Middelaar, Corina E; Mostert, Pim F; van Knegsel, Ariëtte T M; Kemp, Bas; de Boer, Imke J M; Hogeveen, Henk

    2017-01-01

    Shortening or omitting the dry period of dairy cows improves metabolic health in early lactation and reduces management transitions for dairy cows. The success of implementation of these strategies depends on their impact on milk yield and farm profitability. Insight in these impacts is valuable for informed decision-making by farmers. The aim of this study was to investigate how shortening or omitting the dry period of dairy cows affects production and cash flows at the herd level, and greenhouse gas emissions per unit of milk, using a dynamic stochastic simulation model. The effects of dry period length on milk yield and calving interval assumed in this model were derived from actual performance of commercial dairy cows over multiple lactations. The model simulated lactations, and calving and culling events of individual cows for herds of 100 cows. Herds were simulated for 5 years with a dry period of 56 (conventional), 28 or 0 days (n = 50 herds each). Partial cash flows were computed from revenues from sold milk, calves, and culled cows, and costs from feed and rearing youngstock. Greenhouse gas emissions were computed using a life cycle approach. A dry period of 28 days reduced milk production of the herd by 3.0% in years 2 through 5, compared with a dry period of 56 days. A dry period of 0 days reduced milk production by 3.5% in years 3 through 5, after a dip in milk production of 6.9% in year 2. On average, dry periods of 28 and 0 days reduced partial cash flows by €1,249 and €1,632 per herd per year, and increased greenhouse gas emissions by 0.7% and 0.5%, respectively. Considering the potential for enhancing cow welfare, these negative impacts of shortening or omitting the dry period seem justifiable, and they might even be offset by improved health.
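    The partial cash flow computation described above is simple bookkeeping: revenues from sold milk, calves and culled cows minus feed and youngstock-rearing costs. A minimal sketch; the prices and quantities in the test are made-up placeholders, not the study's Dutch price assumptions.

```python
def partial_cash_flow(milk_kg, calves_sold, cows_culled, milk_price,
                      calf_price, cull_price, feed_cost, rearing_cost):
    """Herd-level partial cash flow: revenues from sold milk, calves and
    culled cows minus feed and youngstock-rearing costs. All monetary
    inputs are per herd per year; figures used in examples are illustrative."""
    revenue = (milk_kg * milk_price
               + calves_sold * calf_price
               + cows_culled * cull_price)
    costs = feed_cost + rearing_cost
    return revenue - costs
```

    Comparing this quantity between the 56-, 28- and 0-day dry-period herds is how the reported differences of about €1,249 and €1,632 per herd per year arise.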

  13. Dayton Aircraft Cabin Fire Model, Version 3, Volume I. Physical Description.

    DTIC Science & Technology

    1982-06-01

    contact to any surface directly above a burning element, provided that the current flame length makes contact possible. For fires originating on the...no extension of the flames horizontally beneath the surface is considered. The equation for computing the flame length is presented in Section 5. For...high as 0.3. The values chosen for DACFIR3 are 0.15 for Ec and 0.10 for Ep. The Steward model is also used to compute the flame length, hf, for the fire

  14. Improved Modeling of Surface Layer Parameters in an AGCM Using Refined Vertical Resolution in the Surface Layer

    NASA Astrophysics Data System (ADS)

    Shin, H. H.; Zhao, M.; Ming, Y.; Chen, X.; Lin, S. J.

    2017-12-01

    Surface layer (SL) parameters in atmospheric models - such as 2-m air temperature (T2), 10-m wind speed (U10), and surface turbulent fluxes - are computed by applying the Monin-Obukhov Similarity Theory (MOST) to the lowest model level height (LMH) in the models. The underlying assumption is that LMH is within surface layer height (SLH), but most AGCMs hardly meet the condition in stable boundary layers (SBLs) over land. To assess the errors in modeled SL parameters caused by this, offline computations of the MOST are performed with different LMHs from 1 to 100 m, for an idealized SBL case with prescribed surface parameters (surface temperature, roughness length and Obukhov length), and vertical profiles of temperature and winds. The results show that when LMH is higher than SLH, T2 and U10 are underestimated by O(1 K) and O(1 m/s), respectively, and the biases increase as LMH increases. Based on this, the refined vertical resolution with an additional layer in the SL is applied to the GFDL AGCM, and it reduces the systematic cold biases in T2 and the systematic underestimation of U10.
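    The offline MOST computation amounts to evaluating the similarity profiles at a chosen height. A minimal sketch, assuming the standard log-linear stable-regime form (psi_m = -5 z/L with Businger-Dyer constants) rather than whatever stability functions the study actually uses:

```python
import math

KAPPA = 0.4  # von Karman constant

def wind_most(z, ustar, z0, L_ob):
    """Mean wind speed at height z from Monin-Obukhov similarity in the
    stable regime, U = (u*/kappa) [ln(z/z0) + 5 (z - z0)/L]. Assumption:
    Businger-Dyer log-linear form, not necessarily the study's scheme."""
    return (ustar / KAPPA) * (math.log(z / z0) + 5.0 * (z - z0) / L_ob)

def temperature_most(z, tsfc, tstar, z0h, L_ob):
    """Air temperature at height z from the analogous scalar profile,
    T = Tsfc + (theta*/kappa) [ln(z/z0h) + 5 (z - z0h)/L]."""
    return tsfc + (tstar / KAPPA) * (math.log(z / z0h) + 5.0 * (z - z0h) / L_ob)
```

    Diagnosing U10 and T2 by interpolating downward from a lowest model level that sits above the surface layer is what produces the biases the abstract quantifies; adding a model level inside the SL shortens that extrapolation.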

  15. Path optimization with limited sensing ability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Sung Ha, E-mail: kang@math.gatech.edu; Kim, Seong Jun, E-mail: skim396@math.gatech.edu; Zhou, Haomin, E-mail: hmzhou@math.gatech.edu

    2015-10-15

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.
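    The first stage, connecting the stationary sensor locations with the nearest-neighbor strategy to seed the later ODE-based optimization, can be sketched as:

```python
import math

def nearest_neighbor_path(points, start=0):
    """Order sensor locations by repeatedly hopping to the nearest
    unvisited point. This greedy tour is only an initial guess, as in
    the abstract's first stage; it is not the optimized path."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(points[i], last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def tour_length(points, order):
    """Total Euclidean length of the open path visiting points in order."""
    return sum(math.dist(points[a], points[b])
               for a, b in zip(order, order[1:]))
```

    The subsequent ODE/SDE length-reduction steps would then shorten this tour while preserving full coverage, which is the part specific to the paper and not reproduced here.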

  16. Robust mode space approach for atomistic modeling of realistically large nanowire transistors

    NASA Astrophysics Data System (ADS)

    Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard

    2018-01-01

    Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight binding non-equilibrium Green's function simulation of nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.

  17. Requirements for Kalman filtering on the GE-701 whole word computer

    NASA Technical Reports Server (NTRS)

    Pines, S.; Schmidt, S. F.

    1978-01-01

    The results of a study to determine scaling, storage, and word length requirements for programming the Kalman filter on the GE-701 Whole Word Computer are reported. Simulation tests are presented which indicate that the Kalman filter, using a square root formulation with process noise added, utilizing MLS, radar altimeters, and airspeed as navigation aids, may be programmed for the GE-701 computer to successfully navigate and control the Boeing B737-100 during landing approach, landing rollout, and turnoff. The report contains flow charts, equations, computer storage, scaling, and word length recommendations for the Kalman filter on the GE-701 Whole Word computer.
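    Why a square-root formulation matters on a short-word-length machine can be seen even in the scalar case: carrying s = sqrt(P) instead of the covariance P roughly halves the dynamic range that must fit in the machine word. The sketch below is a generic scalar illustration (random-walk state, direct measurement), not the GE-701 implementation.

```python
import math

def sqrt_kf_step(x, s, z, q, r):
    """One cycle of a scalar Kalman filter carrying s = sqrt(P) rather
    than P itself. Assumed model: random-walk state with process noise
    variance q, direct measurement z = x + noise with variance r."""
    s_pred = math.sqrt(s * s + q)          # predict: P <- P + q
    p_pred = s_pred * s_pred
    k = p_pred / (p_pred + r)              # Kalman gain
    x_new = x + k * (z - x)                # state update
    s_new = s_pred * math.sqrt(1.0 - k)    # P <- (1 - k) P, kept in sqrt form
    return x_new, s_new
```

    In the vector case the square-root form additionally guarantees a positive semidefinite covariance under rounding, which is the property that made it attractive for fixed-word-length avionics computers.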

  18. CFD Extraction of Heat Transfer Coefficient in Cryogenic Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Current reduced-order thermal models for cryogenic propellant tanks are based on correlations built for flat plates collected in the 1950s. The use of these correlations suffers from inaccurate geometry representation, inaccurate gravity orientation, an ambiguous length scale, and a lack of detailed validation. This study uses a first-principles-based CFD methodology to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support a reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data has been found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and cryogenic propellant, and that between the tank wall and ullage gas, were then simulated. The results showed that the commonly used heat transfer correlations for either a vertical or a horizontal plate over-predict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can correlate all heat transfer coefficients for different fill levels into a single curve. This curve can be used for reduced-order heat transfer model analysis.
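    For reference, one widely used flat-plate correlation of the kind the abstract refers to is the Churchill-Chu relation for natural convection on a vertical plate. The sketch below evaluates it; the correlation constants are standard, but the fluid properties in the test are illustrative placeholders, and nothing here reproduces the paper's tank-specific length scale.

```python
def churchill_chu_nu(ra, pr):
    """Average Nusselt number for natural convection on a vertical plate
    (Churchill-Chu correlation, valid over a wide Rayleigh-number range):
    Nu = (0.825 + 0.387 Ra^(1/6) / [1 + (0.492/Pr)^(9/16)]^(8/27))^2."""
    return (0.825 + 0.387 * ra ** (1 / 6)
            / (1 + (0.492 / pr) ** (9 / 16)) ** (8 / 27)) ** 2

def htc(ra, pr, k_fluid, l_char):
    """Heat transfer coefficient h = Nu * k / L for a chosen
    characteristic length L; the abstract's point is that the right
    choice of L for a tank is not the flat-plate one."""
    return churchill_chu_nu(ra, pr) * k_fluid / l_char
```

    The ambiguity of `l_char` for a tank geometry is precisely the "ambiguous length scale" shortcoming the study addresses with its fill-level-dependent characteristic length.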

  19. Assessing self-care and social function using a computer adaptive testing version of the Pediatric Evaluation of Disability Inventory Accepted for Publication, Archives of Physical Medicine and Rehabilitation

    PubMed Central

    Coster, Wendy J.; Haley, Stephen M.; Ni, Pengsheng; Dumas, Helene M.; Fragala-Pinkham, Maria A.

    2009-01-01

    Objective To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the Self-Care and Social Function scales of the Pediatric Evaluation of Disability Inventory (PEDI) compared to the full-length version of these scales. Design Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Settings Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children’s homes. Participants Four hundred sixty-nine children with disabilities and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Interventions Not applicable. Main Outcome Measures Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length Self-Care and Social Function scales; time (in seconds) to complete assessments and respondent ratings of burden. Results Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (all r’s between .94 and .99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared to over 16 minutes to complete the full-length scales. Conclusions Self-care and Social Function score estimates from CAT administration are highly comparable to those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time. PMID:18373991

  20. Faster computation of exact RNA shape probabilities.

    PubMed

    Janssen, Stefan; Giegerich, Robert

    2010-03-01

    Abstract shape analysis allows efficient computation of a representative sample of low-energy foldings of an RNA molecule. More comprehensive information is obtained by computing shape probabilities, accumulating the Boltzmann probabilities of all structures within each abstract shape. Such information is superior to free energies because it is independent of sequence length and base composition. However, up to this point, computation of shape probabilities evaluates all shapes simultaneously and comes with a computational cost that is exponential in the length of the sequence. We devise an approach called RapidShapes that computes the shapes above a specified probability threshold T by generating a list of promising shapes and constructing specialized folding programs for each shape to compute its share of Boltzmann probability. This aims at a heuristic improvement of runtime, while still computing exact probability values. Evaluating this approach and several substrategies, we find that only a small proportion of shapes have to be actually computed. For an RNA sequence of length 400, this leads, depending on the threshold, to a 10- to 138-fold speed-up compared with the previous complete method. Thus, probabilistic shape analysis has become feasible in medium-scale applications, such as the screening of RNA transcripts in a bacterial genome. RapidShapes is available via http://bibiserv.cebitec.uni-bielefeld.de/rnashapes
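    The accumulation step, summing Boltzmann probabilities of all structures that share an abstract shape, is easy to state in code. The sketch below takes precomputed (shape, energy) pairs as toy input; it is not RapidShapes, whose whole point is to compute each shape's probability share without enumerating individual structures.

```python
import math

def shape_probabilities(structures, rt=0.616):
    """Accumulate Boltzmann probabilities per abstract shape:
    p(shape) = sum_{structures s in shape} exp(-E(s)/RT) / Z.
    'structures' is a list of (shape_string, energy_kcal_per_mol) pairs;
    rt defaults to RT at 37 C in kcal/mol. Toy input, not a folding engine."""
    weights, z = {}, 0.0
    for shape, energy in structures:
        w = math.exp(-energy / rt)
        weights[shape] = weights.get(shape, 0.0) + w
        z += w
    return {shape: w / z for shape, w in weights.items()}
```

    RapidShapes effectively evaluates one such per-shape sum at a time, via a specialized folding program per shape, and stops once the probability mass above the threshold T is accounted for.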

  1. An X-Ray computed tomography/positron emission tomography system designed specifically for breast imaging.

    PubMed

    Boone, John M; Yang, Kai; Burkett, George W; Packard, Nathan J; Huang, Shih-ying; Bowen, Spencer; Badawi, Ramsey D; Lindfors, Karen K

    2010-02-01

    Mammography has served the population of women who are at-risk for breast cancer well over the past 30 years. While mammography has undergone a number of changes as digital detector technology has advanced, other modalities such as computed tomography have experienced technological sophistication over this same time frame as well. The advent of large field of view flat panel detector systems enable the development of breast CT and several other niche CT applications, which rely on cone beam geometry. The breast, it turns out, is well suited to cone beam CT imaging because the lack of bones reduces artifacts, and the natural tapering of the breast anteriorly reduces the x-ray path lengths through the breast at large cone angle, reducing cone beam artifacts as well. We are in the process of designing a third prototype system which will enable the use of breast CT for image guided interventional procedures. This system will have several copies fabricated so that several breast CT scanners can be used in a multi-institutional clinical trial to better understand the role that this technology can bring to breast imaging.

  2. SAWdoubler: A program for counting self-avoiding walks

    NASA Astrophysics Data System (ADS)

    Schram, Raoul D.; Barkema, Gerard T.; Bisseling, Rob H.

    2013-03-01

    This article presents SAWdoubler, a package for counting the total number ZN of self-avoiding walks (SAWs) on a regular lattice by the length-doubling method, of which the basic concept has been published previously by us. We discuss an algorithm for the creation of all SAWs of length N, efficient storage of these SAWs in a tree data structure, and an algorithm for the computation of correction terms to the count Z2N for SAWs of double length, removing all combinations of two intersecting single-length SAWs. We present an efficient numbering of the lattice sites that enables exploitation of symmetry and leads to a smaller tree data structure; this numbering is by increasing Euclidean distance from the origin of the lattice. Furthermore, we show how the computation can be parallelised by distributing the iterations of the main loop of the algorithm over the cores of a multicore architecture. Experimental results on the 3D cubic lattice demonstrate that Z28 can be computed on a dual-core PC in only 1 h and 40 min, with a speedup of 1.56 compared to the single-core computation and with a gain by using symmetry of a factor of 26. We present results for memory use and show how the computation is made to fit in 4 GB RAM. It is easy to extend the SAWdoubler software to other lattices; it is publicly available under the GNU LGPL license. Catalogue identifier: AEOB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public Licence No. of lines in distributed program, including test data, etc.: 2101 No. of bytes in distributed program, including test data, etc.: 19816 Distribution format: tar.gz Programming language: C. Computer: Any computer with a UNIX-like operating system and a C compiler. For large problems, use is made of specific 128-bit integer arithmetic provided by the gcc compiler. 
Operating system: Any UNIX-like system; developed under Linux and Mac OS 10. Has the code been vectorised or parallelised?: Yes. A parallel version of the code is available in the “Extras” directory of the distribution file. RAM: Problem dependent (2 GB for counting SAWs of length 28 on the 3D cubic lattice) Classification: 16.11. Nature of problem: Computing the number of self-avoiding walks of a given length on a given lattice. Solution method: Length-doubling. Restrictions: The length of the walk must be even. Lattice is 3D simple cubic. Additional comments: The lattice can be replaced by other lattices, such as BCC, FCC, or a 2D square lattice. Running time: Problem dependent (2.5 h using one processor core for length 28 on the 3D cubic lattice).
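The length-doubling idea is easiest to appreciate against the naive baseline it replaces: enumerating every walk step by step. A minimal brute-force counter (a sketch for illustration only, not SAWdoubler's algorithm) looks like this:

```python
def count_saws(n, dim=3):
    """Count self-avoiding walks of length n on the d-dimensional
    hypercubic lattice by depth-first search (brute force; the
    length-doubling method in SAWdoubler is far more efficient)."""
    # unit steps along +/- each axis
    steps = []
    for d in range(dim):
        for s in (1, -1):
            step = [0] * dim
            step[d] = s
            steps.append(tuple(step))

    def dfs(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for st in steps:
            nxt = tuple(p + s for p, s in zip(pos, st))
            if nxt not in visited:   # self-avoidance check
                visited.add(nxt)
                total += dfs(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    origin = (0,) * dim
    return dfs(origin, {origin}, n)
```

Length-doubling instead combines pairs of stored length-N walks and subtracts the intersecting combinations, reaching Z_{2N} without ever enumerating walks of length 2N.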

  3. A superlinear iteration method for calculation of finite length journal bearing's static equilibrium position.

    PubMed

    Zhou, Wenjie; Wei, Xuesong; Wang, Leqin; Wu, Guangkuan

    2017-05-01

Solving the static equilibrium position is one of the most important parts of the dynamic coefficients calculation and the further coupled calculation of the rotor system. The main contribution of this study is testing a superlinear iteration convergence method, the twofold secant method, for determining the static equilibrium position of a journal bearing with finite length. Essentially, the Reynolds equation for stable motion is solved by the finite difference method, and the inner pressure is obtained by the successive over-relaxation iterative method reinforced by the compound Simpson quadrature formula. The accuracy and efficiency of the twofold secant method are higher in comparison with the secant method and dichotomy. The total number of iterative steps required by the twofold secant method is about one-third that of the secant method and less than one-eighth that of dichotomy for the same equilibrium position. Equilibrium positions and pressure distributions were calculated for different bearing lengths, clearances and rotating speeds. The results show that the eccentricity is linearly and inversely related to the attitude angle. The influence of the bearing length, clearance and bearing radius on the load-carrying capacity was also investigated. The results illustrate that a larger bearing length, larger radius and smaller clearance benefit the load-carrying capacity of the journal bearing. The application of the twofold secant method can greatly reduce the computational time for calculating the dynamic coefficients and dynamic characteristics of a rotor-bearing system with a finite-length journal bearing.
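The twofold secant method is the article's contribution; as a point of reference, the classical secant iteration it is benchmarked against can be sketched as below. In a bearing code, f would be the residual between the oil-film force recovered from the solved Reynolds equation and the applied load; here a simple algebraic f stands in.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Classical secant iteration for f(x) = 0.  The article's 'twofold'
    variant accelerates this baseline; only the standard method is
    sketched here."""
    f0, f1 = f(x0), f(x1)
    for i in range(max_iter):
        if abs(f1 - f0) < 1e-300:          # avoid division by zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2, i + 1               # root and iteration count
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1, max_iter

# example: equilibrium as a root of a residual function
root, steps = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```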

  4. A superlinear iteration method for calculation of finite length journal bearing's static equilibrium position

    PubMed Central

    Zhou, Wenjie; Wei, Xuesong; Wang, Leqin

    2017-01-01

Solving the static equilibrium position is one of the most important parts of the dynamic coefficients calculation and the further coupled calculation of the rotor system. The main contribution of this study is testing a superlinear iteration convergence method, the twofold secant method, for determining the static equilibrium position of a journal bearing with finite length. Essentially, the Reynolds equation for stable motion is solved by the finite difference method, and the inner pressure is obtained by the successive over-relaxation iterative method reinforced by the compound Simpson quadrature formula. The accuracy and efficiency of the twofold secant method are higher in comparison with the secant method and dichotomy. The total number of iterative steps required by the twofold secant method is about one-third that of the secant method and less than one-eighth that of dichotomy for the same equilibrium position. Equilibrium positions and pressure distributions were calculated for different bearing lengths, clearances and rotating speeds. The results show that the eccentricity is linearly and inversely related to the attitude angle. The influence of the bearing length, clearance and bearing radius on the load-carrying capacity was also investigated. The results illustrate that a larger bearing length, larger radius and smaller clearance benefit the load-carrying capacity of the journal bearing. The application of the twofold secant method can greatly reduce the computational time for calculating the dynamic coefficients and dynamic characteristics of a rotor-bearing system with a finite-length journal bearing. PMID:28572997

  5. Effects of footwear and stride length on metatarsal strains and failure in running.

    PubMed

    Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent

    2017-11-01

The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal and finite element modeling based on motion analysis and computed tomography data was used to quantify metatarsal strains, and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes increased strains for all metatarsals by 28.7% (SD 6.4%; p<0.001) and the probability of failure for metatarsals 2-4 by 17.3% (SD 14.3%; p≤0.005). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p≤0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture; however, the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A rational approach to the use of Prandtl's mixing length model in free turbulent shear flow calculations

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Bushnell, D. M.

    1973-01-01

Prandtl's basic mixing length model was used to compute 22 test cases of free turbulent shear flows. The calculations employed appropriate algebraic length scale equations and single values of the mixing-length constant for planar and axisymmetric flows, respectively. Good agreement with data was obtained except for flows, such as supersonic free shear layers, where large sustained density changes occur. The inability to predict the more gradual mixing in these flows is tentatively ascribed to the presence of a significant turbulence-induced transverse static pressure gradient which is neglected in conventional solution procedures. Some type of equation for length scale development was found to be necessary for successful computation of highly nonsimilar flow regions such as jet or wake development from thick wall flows.

  7. On the correlation between bond-length change and vibrational frequency shift in halogen-bonded complexes

    NASA Astrophysics Data System (ADS)

    Wang, Weizhou; Zhang, Yu; Ji, Baoming; Tian, Anmin

    2011-06-01

The C-Hal (Hal = Cl, Br, or I) bond-length change and the corresponding vibrational frequency shift of the C-Hal stretch upon C-Hal⋯Y (Y is the electron donor) halogen bond formation have been determined using density functional theory computations. Plots of the C-Hal bond-length change versus the corresponding vibrational frequency shift of the C-Hal stretch all give straight lines. The coefficients of determination range from 0.94366 to 0.99219, showing that the correlation between the C-Hal bond-length change and the corresponding frequency shift is very good in the halogen-bonded complexes. The possible effects of vibrational coupling, computational method, and anharmonicity on this correlation are discussed in detail.
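The quoted coefficients of determination (0.94366 to 0.99219) come from ordinary least-squares fits of the two quantities. For completeness, a sketch of that generic statistic (not tied to the DFT data of the article):

```python
def r_squared(xs, ys):
    """Coefficient of determination for the least-squares line through
    (x, y) pairs -- the statistic reported for the bond-length-change
    vs. frequency-shift plots."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # R^2 of a simple linear regression equals the squared Pearson r
    return sxy * sxy / (sxx * syy)
```

A perfectly linear data set gives exactly 1.0; values near 1 indicate the near-linear correlation the article reports.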

  8. Computing the Length of the Shortest Telomere in the Nucleus

    NASA Astrophysics Data System (ADS)

    Dao Duc, K.; Holcman, D.

    2013-11-01

    The telomere length can either be shortened or elongated by an enzyme called telomerase after each cell division. Interestingly, the shortest telomere is involved in controlling the ability of a cell to divide. Yet, its dynamics remains elusive. We present here a stochastic approach where we model this dynamics using a Markov jump process. We solve the forward Fokker-Planck equation to obtain the steady state distribution and the statistical moments of telomere lengths. We focus specifically on the shortest one and we estimate its length difference with the second shortest telomere. After extracting key parameters such as elongation and shortening dynamics from experimental data, we compute the length of telomeres in yeast and obtain as a possible prediction the minimum concentration of telomerase required to ensure a proper cell division.
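As a toy illustration of such a jump process (the parameter values below are invented for the sketch, not those extracted from the experimental data in the article), one can iterate shortening and telomerase-driven elongation over divisions and read off the shortest telomere and its gap to the second shortest:

```python
import random

def simulate_telomeres(n_telomeres=32, n_divisions=500, a=3, b=10,
                       p_elong=0.3, L0=300, seed=1):
    """Toy Markov jump model of telomere dynamics (illustrative only).
    Each division every telomere shortens by `a`; with probability
    `p_elong` telomerase elongates it by `b`."""
    rng = random.Random(seed)
    lengths = [L0] * n_telomeres
    for _ in range(n_divisions):
        lengths = [L - a + (b if rng.random() < p_elong else 0)
                   for L in lengths]
    lengths.sort()
    # shortest telomere and its gap to the second shortest
    return lengths[0], lengths[1] - lengths[0]
```

The steady-state statistics the article derives analytically from the Fokker-Planck equation correspond to the long-run histogram of such trajectories.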

  9. Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.

    PubMed

    Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei

    2016-01-22

    Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including reduction of numbers of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases generated following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item (dichotomous, rating scale, and partial credit) scenarios, respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and the count difference number ratio less than 5% using independent t tests). We found that use of CAT led to smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administrations of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
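The efficiency gain of CAT comes from always administering the most informative remaining item at the current ability estimate. A generic sketch of that selection rule for the dichotomous Rasch case (the article's engine uses the Partial Credit Model, which generalizes this):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    """CAT step: choose the unused item with maximum Fisher information
    I(theta) = P(1 - P) at the current ability estimate (a generic CAT
    rule, not the article's PCM-based implementation)."""
    best, best_info = None, -1.0
    for j, b in enumerate(difficulties):
        if j in used:
            continue
        p = rasch_p(theta, b)
        info = p * (1.0 - p)       # maximal when P = 0.5, i.e. b near theta
        if info > best_info:
            best, best_info = j, info
    return best
```

Because each administered item is maximally informative, fewer items are needed to reach a target standard error, which is the mechanism behind the 48-66% reduction in response burden reported above.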

  10. Using simulation in out-patient queues: a case study.

    PubMed

    Huarng, F; Lee, M H

    1996-01-01

Overwork and overcrowding in some periods were an important issue for the out-patient department of a local hospital in Chia-Yi, Taiwan. The hospital administrators wanted to manage the patient flow effectively. Describes a study which focused on the utilization of doctors and staff in the out-patient department, the time spent in the hospital by an out-patient, and the length of the out-patient queue. Explains how a computer simulation model was developed to study how changes in the appointment system, staffing policies and service units would affect the observed bottleneck. The results show that the waiting time was greatly reduced and the workload of the doctor was brought down to a reasonable level in the overwork and overcrowding periods.
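A minimal discrete-event sketch of a single-doctor queue (a generic M/M/1-style model with assumed rates, not the hospital's actual simulation) shows the kind of quantities the study tracked, namely waiting time and doctor utilization:

```python
import random

def simulate_clinic(n_patients=1000, mean_interarrival=5.0,
                    mean_service=4.0, seed=7):
    """Single-server out-patient queue with exponential interarrival and
    service times.  Returns mean waiting time and doctor utilization."""
    rng = random.Random(seed)
    t_arrival = 0.0
    doctor_free_at = 0.0
    total_wait = 0.0
    busy_time = 0.0
    for _ in range(n_patients):
        t_arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(t_arrival, doctor_free_at)   # wait if doctor is busy
        service = rng.expovariate(1.0 / mean_service)
        total_wait += start - t_arrival
        busy_time += service
        doctor_free_at = start + service
    utilization = busy_time / doctor_free_at
    return total_wait / n_patients, utilization
```

Re-running the simulation with different arrival schedules or extra service units is exactly how such a model exposes and relieves the bottleneck periods.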

  11. Recirculation zone length in renal artery is affected by flow spirality and renal-to-aorta flow ratio.

    PubMed

    Javadzadegan, Ashkan; Fulker, David; Barber, Tracie

    2017-07-01

Haemodynamic perturbations such as flow recirculation zones play a key role in the development and progression of renal artery stenosis, which typically originates at the aorta-renal bifurcation. The spiral nature of aortic blood flow, the division of aortic flow into the renal artery, and exercise conditions have all been shown to alter the haemodynamics in both positive and negative ways. This study focuses on the combined effects of the spiral component of blood flow, the renal-to-aorta flow ratio and exercise conditions on the size and distribution of recirculation zones in the renal branches, using computational fluid dynamics. Our findings show that the recirculation length was longest when the renal-to-aorta flow ratio was smallest. Spiral flow and exercise conditions were found to be effective in reducing the recirculation length, particularly at small renal-to-aorta flow ratios. These results support the hypothesis that in renal arteries with small flow ratios, where a stenosis has already developed, an artificially induced spiral flow within the aorta may decelerate the progression of the stenosis and thereby help preserve kidney function.

  12. On avoided words, absent words, and their application to biological sequence analysis.

    PubMed

    Almirantis, Yannis; Charalampopoulos, Panagiotis; Gao, Jia; Iliopoulos, Costas S; Mohamed, Manal; Pissis, Solon P; Polychronopoulos, Dimitris

    2017-01-01

The deviation of the observed frequency of a word w from its expected frequency in a given sequence x is used to determine whether or not the word is avoided. This concept is particularly useful in DNA linguistic analysis. The value of the deviation of w, denoted by dev(w), effectively characterises the extent of a word by its edge contrast in the context in which it occurs. A word w of length k is a ρ-avoided word in x if dev(w) ≤ ρ, for a given threshold ρ < 0. Notice that such a word may be completely absent from x. Hence, computing all such words naïvely can be a very time-consuming procedure, in particular for large k. In this article, we propose an O(n)-time and O(n)-space algorithm to compute all ρ-avoided words of length k in a given sequence of length n over a fixed-sized alphabet. We also present a time-optimal O(σn)-time algorithm to compute all ρ-avoided words (of any length) in a sequence of length n over an integer alphabet of size σ. In addition, we provide a tight asymptotic upper bound for the number of ρ-avoided words over an integer alphabet and the expected length of the longest one. We make available an implementation of our algorithm. Experimental results, using both real and synthetic data, show the efficiency and applicability of our implementation in biological sequence analysis. The systematic search for avoided words is particularly useful for biological sequence analysis. We present a linear-time and linear-space algorithm for the computation of avoided words of length k in a given sequence x. We suggest a modification to this algorithm so that it computes all avoided words of x, irrespective of their length, within the same time complexity. We also present combinatorial results with regard to avoided words and absent words.
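At the level of the definition, avoided words can be found by brute force: count occurrences, estimate expected counts with the standard maximal-order Markov model E(w) = f(w[1..k-1]) * f(w[2..k]) / f(w[2..k-1]), and threshold the deviation. The sketch below is illustrative only (the article's suffix-structure algorithm achieves linear time, and absent candidate words, which the article stresses can also be avoided, are omitted here for brevity):

```python
from collections import Counter

def avoided_words(x, k, rho):
    """Brute-force computation of rho-avoided words of length k that
    occur in x, directly from the definition (illustrative sketch)."""
    def counts(m):
        return Counter(x[i:i + m] for i in range(len(x) - m + 1))
    ck, ck1, ck2 = counts(k), counts(k - 1), counts(k - 2)
    avoided = []
    for w in set(x[i:i + k] for i in range(len(x) - k + 1)):
        # maximal-order Markov estimate of the expected count
        e = ck1[w[:-1]] * ck1[w[1:]] / ck2[w[1:-1]]
        dev = (ck[w] - e) / max(e ** 0.5, 1.0)
        if dev <= rho:
            avoided.append(w)
    return sorted(avoided)
```

The normalisation max(sqrt(E(w)), 1) is one common choice for the deviation's denominator and is assumed here; the article defines its own deviation measure precisely.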

  13. Simulation study of entropy production in the one-dimensional Vlasov system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie

    2016-07-15

The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy production in the cases of a random field, linear Landau damping, and the bump-on-tail instability is computed with the coarse-grain averaged distribution function. The computed entropy production converges as the coarse-graining length increases. When the distribution function differs only slightly from a Maxwellian, the converged value agrees with the result computed from the definition of thermodynamic entropy. The choice of coarse-graining length used to compute the coarse-grain averaged distribution function is also discussed.
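Coarse-grain averaging itself is simple block averaging over the fine grid, after which the usual entropy integral is evaluated. A 1D sketch (generic, not the paper's simulation code; in the Vlasov setting f would be averaged over phase-space cells):

```python
import math

def coarse_grain_entropy(f, dv, block):
    """Entropy of a 1D distribution after coarse-grain averaging over
    blocks of `block` cells: S = -sum fbar ln(fbar) dv_bar.  Filamentation
    below the block scale is smoothed out, which is what lets the
    computed entropy production converge."""
    # block-average the fine-grid distribution
    fbar = [sum(f[i:i + block]) / block
            for i in range(0, len(f) - len(f) % block, block)]
    dv_bar = dv * block
    return -sum(fi * math.log(fi) * dv_bar for fi in fbar if fi > 0)
```

On the fine grid the Vlasov entropy is exactly conserved; only after such averaging does a nonzero, convergent entropy production appear.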

  14. Effects of plaque lengths on stent surface roughness.

    PubMed

    Syaifudin, Achmad; Takeda, Ryo; Sasaki, Katsuhiko

    2015-01-01

The physical properties of the stent surface influence the effectiveness of vascular disease treatment after stent deployment. During the expanding process, the stent acquires high-level deformation that could alter either its microstructure or the magnitude of surface roughness. In this paper, a finite element simulation was constructed to observe the changes in surface roughness during the stenting process. Structural transient dynamic analysis was performed using ANSYS to identify the deformation after the stent is placed in a blood vessel. Two types of bare metal stents are studied: a Palmaz type and a Sinusoidal type. The relationship between plaque length and the changes in surface roughness was investigated by utilizing three different plaque lengths: longer than the stent, shorter than the stent, and the same length as the stent. In order to reduce computational time, 3D cyclical and translational symmetry was implemented in the FE model. The material models were defined as multilinear isotropic for the stent and hyperelastic for the balloon, plaque and vessel wall. The correlation between plastic deformation and the changes in surface roughness was obtained by an intermittent pure tensile test using a specimen whose chemical composition was similar to that of actual stent material. Once the plastic strain is obtained from the FE simulation, the surface roughness can be assessed thoroughly. The study found that the plaque size relative to stent length significantly influenced the critical changes in surface roughness. A stent length equal to the plaque length was preferable because it generated only a moderate change in surface roughness. This effect was less influential for the Sinusoidal stent.

  15. Using Excel To Study The Relation Between Protein Dihedral Angle Omega And Backbone Length

    NASA Astrophysics Data System (ADS)

    Shew, Christopher; Evans, Samari; Tao, Xiuping

How can uninitiated undergraduate students be involved in computational biophysics research? We made use of Microsoft Excel to carry out calculations of bond lengths, bond angles and dihedral angles of proteins. Specifically, we studied the protein backbone dihedral angle omega by examining how its distribution varies with the backbone length. It turns out Excel is a respectable tool for this task: an ordinary current-day desktop or laptop can handle the calculations for mid-sized proteins in just seconds. Care has to be taken to enter the formulas into the spreadsheet column by column to minimize the computing load. Supported in part by NSF Grant #1238795.
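The per-residue arithmetic such a spreadsheet performs reduces to the standard four-point dihedral formula; a sketch of the same calculation outside Excel:

```python
import math

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (degrees) defined by four points, using the
    standard atan2 formulation.  For omega, the four points are
    CA(i), C(i), N(i+1), CA(i+1)."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    def dot(a, b): return sum(a[i]*b[i] for i in range(3))
    b0, b1, b2 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b0, b1), cross(b1, b2)      # plane normals
    m1 = cross(n1, [bi / math.sqrt(dot(b1, b1)) for bi in b1])
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))
```

A trans peptide bond gives omega near ±180 degrees and a cis bond near 0, which is exactly the distribution the spreadsheet study examines as a function of backbone length.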

  16. Correlation of the bond-length change and vibrational frequency shift in model hydrogen-bonded complexes of pyrrole

    NASA Astrophysics Data System (ADS)

    McDowell, Sean A. C.

    2017-04-01

An MP2 computational study of model hydrogen-bonded pyrrole⋯YZ (YZ = NH3, NCH, BF, CO, N2, OC, FB) complexes was undertaken in order to examine the variation of the N-H bond length change and its associated vibrational frequency shift. The chemical hardness of Y, as well as the YZ dipole moment, were found to be important parameters in modifying the bond length change/frequency shift. The basis set effect on the computed properties was also assessed. A perturbative model, which accurately reproduced the ab initio N-H bond length changes and frequency shifts, was useful in rationalizing the observed trends.

  17. 38 CFR 17.363 - Length of stay.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Length of stay. 17.363 Section 17.363 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Grants to the Republic of the Philippines § 17.363 Length of stay. In computing the length of stay for which...

  18. 38 CFR 17.363 - Length of stay.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Length of stay. 17.363 Section 17.363 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Grants to the Republic of the Philippines § 17.363 Length of stay. In computing the length of stay for which...

  19. 38 CFR 17.363 - Length of stay.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Length of stay. 17.363 Section 17.363 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Grants to the Republic of the Philippines § 17.363 Length of stay. In computing the length of stay for which...

  20. 38 CFR 17.363 - Length of stay.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Length of stay. 17.363 Section 17.363 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Grants to the Republic of the Philippines § 17.363 Length of stay. In computing the length of stay for which...

  1. 38 CFR 17.363 - Length of stay.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Length of stay. 17.363 Section 17.363 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Grants to the Republic of the Philippines § 17.363 Length of stay. In computing the length of stay for which...

  2. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost severalfold, but also allows us to set the modelling parameters, such as the time step length, grid interval and P-wave speed, flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms or inverse problems.

  3. Fast and accurate read-out of interferometric optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Bartholsen, Ingebrigt; Hjelme, Dag R.

    2016-03-01

We present results from an evaluation of phase and frequency estimation algorithms for read-out instrumentation of interferometric sensors. Tests interrogating a micro Fabry-Perot sensor, made of a semi-spherical stimuli-responsive hydrogel immobilized on a single-mode fiber end face, show that an iterative quadrature demodulation technique (IQDT) implemented on a 32-bit microcontroller unit can achieve an absolute length accuracy of ±50 nm and a length-change accuracy of ±3 nm using an 80 nm SLED source and a grating spectrometer for interrogation. The mean absolute error of the frequency estimator is a factor of 3 larger than the theoretical lower bound for a maximum likelihood estimator; the corresponding factor for the phase estimator is 1.3. The computation time of the IQDT algorithm is reduced by a factor of 1000 compared to the full QDT for the same accuracy requirement.

  4. A Fully GPU-Based Ray-Driven Backprojector via a Ray-Culling Scheme with Voxel-Level Parallelization for Cone-Beam CT Reconstruction.

    PubMed

    Park, Hyeong-Gyu; Shin, Yeong-Gil; Lee, Ho

    2015-12-01

A ray-driven backprojector is based on ray-tracing, which computes the length of the intersection between the ray paths and each voxel to be reconstructed. To reduce the computational burden caused by these exhaustive intersection tests, we propose a fully graphics processing unit (GPU)-based ray-driven backprojector in conjunction with a ray-culling scheme that enables straightforward parallelization without compromising the high computing performance of a GPU. The purpose of the ray-culling scheme is to reduce the number of ray-voxel intersection tests by excluding rays irrelevant to a specific voxel computation. This rejection step is based on an axis-aligned bounding box (AABB) enclosing a region of voxel projection, where eight vertices of each voxel are projected onto the detector plane. The range of the rectangular-shaped AABB is determined by min/max operations on the coordinates in the region. Using the indices of pixels inside the AABB, the rays passing through the voxel can be identified and the voxel is weighted as the length of intersection between the voxel and the ray. This procedure makes it possible to reflect voxel-level parallelization, allowing an independent calculation at each voxel, which is feasible for a GPU implementation. To eliminate redundant calculations during ray-culling, a shared-memory optimization is applied to exploit the GPU memory hierarchy. In experimental results using real measurement data with phantoms, the proposed GPU-based ray-culling scheme reconstructed a volume of resolution 280×280×176 in 77 seconds from 680 projections of resolution 1024×768, which is 26 times and 7.5 times faster than standard CPU-based and GPU-based ray-driven backprojectors, respectively.
Qualitative and quantitative analyses showed that the ray-driven backprojector provides high-quality reconstruction images when compared with those generated by the Feldkamp-Davis-Kress algorithm using a pixel-driven backprojector, with an average of 2.5 times higher contrast-to-noise ratio, 1.04 times higher universal quality index, and 1.39 times higher normalized mutual information. © The Author(s) 2014.
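The heart of the ray-culling scheme is a min/max bounding box of the voxel's projected corners. A geometry-agnostic sketch follows; the `project` callable is a stand-in for the scanner-specific cone-beam projection, which the article implements on the GPU:

```python
def voxel_detector_aabb(corners, project, nu, nv):
    """Axis-aligned bounding box of a voxel's footprint on the detector,
    as used for ray-culling: project the 8 voxel corners, take min/max,
    clamp to the detector of nu x nv pixels, and return the inclusive
    pixel index range.  Rays through pixels outside the box are culled."""
    us, vs = zip(*(project(c) for c in corners))
    u0 = max(int(min(us)), 0)
    u1 = min(int(max(us)), nu - 1)
    v0 = max(int(min(vs)), 0)
    v1 = min(int(max(vs)), nv - 1)
    return (u0, u1, v0, v1)
```

Only the rays indexed inside this box need the exact ray-voxel intersection test, which is what makes one fully independent computation per voxel practical on a GPU.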

  5. A Novel Domain Assembly Routine for Creating Full-Length Models of Membrane Proteins from Known Domain Structures.

    PubMed

    Koehler Leman, Julia; Bonneau, Richard

    2018-04-03

    Membrane proteins composed of soluble and membrane domains are often studied one domain at a time. However, to understand the biological function of entire protein systems and their interactions with each other and drugs, knowledge of full-length structures or models is required. Although few computational methods exist that could potentially be used to model full-length constructs of membrane proteins, none of these methods are perfectly suited for the problem at hand. Existing methods require an interface or knowledge of the relative orientations of the domains or are not designed for domain assembly, and none of them are developed for membrane proteins. Here we describe the first domain assembly protocol specifically designed for membrane proteins that assembles intra- and extracellular soluble domains and the transmembrane domain into models of the full-length membrane protein. Our protocol does not require an interface between the domains and samples possible domain orientations based on backbone dihedrals in the flexible linker regions, created via fragment insertion, while keeping the transmembrane domain fixed in the membrane. For five examples tested, our method mp_domain_assembly, implemented in RosettaMP, samples domain orientations close to the known structure and is best used in conjunction with experimental data to reduce the conformational search space.

  6. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-09-14

Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure; volume fraction, size and morphology of phase constituents; and stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project. 
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multiscale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multiscale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models at different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.

  7. Computational Evaluation of Cochlear Implant Surgery Outcomes Accounting for Uncertainty and Parameter Variability.

    PubMed

    Mangado, Nerea; Pons-Prats, Jordi; Coma, Martí; Mistrík, Pavel; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Á

    2018-01-01

    Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties in the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework, spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that enables us to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response to seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values.
However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has a great potential to help in both surgical planning decisions and in the audiological setting process.
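    The Monte Carlo stage of such a framework can be sketched in a few lines. The sketch below is illustrative only: the outcome function is a hypothetical stand-in for the full finite element pipeline, and the parameter distributions are assumed values, not those of the study.

```python
import random
import statistics

def simulated_ci_outcome(resistivity, cochlear_length):
    """Hypothetical surrogate for the full CI simulation pipeline:
    higher bone resistivity and shorter cochleae degrade the outcome."""
    return 100.0 / (resistivity * 1e-3) + 5.0 * cochlear_length

def monte_carlo_ci(n_samples=10000, seed=42):
    """Propagate parameter uncertainty by Monte Carlo sampling and
    report the mean and an empirical 95% interval of the outcome."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_samples):
        resistivity = rng.gauss(10000.0, 2000.0)  # ohm*cm, assumed distribution
        length = rng.gauss(35.0, 2.0)             # mm, assumed distribution
        outcomes.append(simulated_ci_outcome(resistivity, length))
    outcomes.sort()
    lo = outcomes[int(0.025 * n_samples)]
    hi = outcomes[int(0.975 * n_samples)]
    return statistics.mean(outcomes), (lo, hi)
```

    Probabilistic Collocation replaces the brute-force sampling loop with a small set of carefully chosen quadrature points, which is why it needs far fewer pipeline evaluations than the loop above.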

  8. Evaluation of Kidney Stones with Reduced-Radiation Dose CT: Progress from 2011-2012 to 2015-2016-Not There Yet.

    PubMed

    Weisenthal, Karrin; Karthik, Priyadarshini; Shaw, Melissa; Sengupta, Debapriya; Bhargavan-Chatfield, Mythreyi; Burleson, Judy; Mustafa, Adel; Kalra, Mannudeep; Moore, Christopher

    2018-02-01

    Purpose To determine if the use of reduced-dose computed tomography (CT) for evaluation of kidney stones increased in 2015-2016 compared with that in 2011-2012, to determine variability in radiation exposure according to facility for this indication, and to establish a current average radiation dose for CT evaluation for kidney stones by querying a national dose registry. Materials and Methods This cross-sectional study was exempt from institutional review board approval. Data were obtained from the American College of Radiology dose registry for CT examinations submitted from July 2015 to June 2016. Study descriptors consistent with single-phase unenhanced CT for evaluation of kidney stones and associated RadLex® Playbook identifiers (RPIDs) were retrospectively identified. Facilities actively submitting data on kidney stone-specific CT examinations were included. Dose metrics including volumetric CT dose index, dose-length product, and size-specific dose estimate, when available, were reported, and a random effects model was run to account for clustering of CT examinations at facilities. A z-ratio was calculated to test for a significant difference between the proportion of reduced-radiation dose CT examinations (defined as those with a dose-length product of 200 mGy · cm or less) performed in 2015-2016 and the proportion performed in 2011-2012. Results Three hundred four study descriptors for kidney stone CT corresponding to data from 328 facilities that submitted 105 334 kidney stone CT examinations were identified. Reduced-dose CT examinations accounted for 8040 of 105 334 (7.6%) CT examinations, an increase of 5.6 percentage points from the 1010 of 49 903 (2%) examinations in 2011-2012 (P < .001). Mean overall dose-length product was 689 mGy · cm (95% confidence interval: 667, 712), decreased from the mean of 746 mGy · cm observed in 2011-2012. Median facility dose-length product varied up to sevenfold, from less than 200 mGy · cm to greater than 1600 mGy · cm.
Conclusion Use of reduced-radiation dose CT for evaluation of kidney stones has increased since 2011-2012, but remains low; variability of radiation dose according to facility continues to be wide. National mean CT radiation exposure for evaluation of renal colic during 2015-2016 decreased relative to 2011-2012 values, but remained well above what is reasonably achievable. © RSNA, 2017.

  9. Application of a range of turbulence energy models to the determination of M4 tidal current profiles

    NASA Astrophysics Data System (ADS)

    Xing, Jiuxing; Davies, Alan M.

    1996-04-01

    A fully nonlinear, three-dimensional hydrodynamic model of the Irish Sea, using a range of turbulence energy sub-models, is used to examine the influence of the turbulence closure method upon the vertical variation of the current profile of the fundamental and higher harmonics of the tide in the region. Computed tidal current profiles are compared with previous calculations using a spectral model with eddy viscosity related to the flow field. The model has a sufficiently fine grid to resolve the advection terms, in particular the advection of turbulence and momentum. Calculations show that the advection of turbulence energy does not have a significant influence upon the current profile of either the fundamental or higher harmonic of the tide, although the advection of momentum is important in the region of headlands. The simplification of the advective terms by only including them in their vertically integrated form does not appear to make a significant difference to current profiles, but does reduce the computational effort by a significant amount. Computed current profiles, both for the fundamental and the higher harmonic, determined with a prognostic equation for turbulence and an algebraic mixing length formula are as accurate as those determined with a two prognostic equation model (the so-called q2-q2l model), provided the mixing length is specified correctly. A simple, flow-dependent eddy viscosity with a parabolic variation of viscosity also performs equally well.
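    The parabolic eddy-viscosity profile mentioned above has a standard textbook form, nu_t = kappa * u* * z * (1 - z/h). The sketch below uses the generic von Kármán constant; the coefficients are illustrative assumptions, not the paper's calibration.

```python
def parabolic_eddy_viscosity(z, h, u_star, kappa=0.41):
    """Classic parabolic eddy-viscosity profile for depth h and friction
    velocity u*: zero at bed and surface, maximum at mid-depth."""
    return kappa * u_star * z * (1.0 - z / h)
```

    Despite its simplicity, such a profile reproduces the vertical current structure about as well as the two-equation closure in the cases reported.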

  10. Contributions of muscle imbalance and impaired growth to postural and osseous shoulder deformity following brachial plexus birth palsy: a computational simulation analysis.

    PubMed

    Cheng, Wei; Cornwall, Roger; Crouch, Dustin L; Li, Zhongyu; Saul, Katherine R

    2015-06-01

    Two potential mechanisms leading to postural and osseous shoulder deformity after brachial plexus birth palsy are muscle imbalance between functioning internal rotators and paralyzed external rotators and impaired longitudinal growth of paralyzed muscles. Our goal was to evaluate the combined and isolated effects of these 2 mechanisms on transverse plane shoulder forces using a computational model of C5-6 brachial plexus injury. We modeled a C5-6 injury using a computational musculoskeletal upper limb model. Muscles expected to be denervated by C5-6 injury were classified as affected, with the remaining shoulder muscles classified as unaffected. To model muscle imbalance, affected muscles were given no resting tone whereas unaffected muscles were given resting tone at 30% of maximal activation. To model impaired growth, affected muscles were reduced in length by 30% compared with normal whereas unaffected muscles remained normal in length. Four scenarios were simulated: normal, muscle imbalance only, impaired growth only, and both muscle imbalance and impaired growth. Passive shoulder rotation range of motion and glenohumeral joint reaction forces were evaluated to assess postural and osseous deformity. All impaired scenarios exhibited restricted range of motion and increased and posteriorly directed compressive glenohumeral joint forces. Individually, impaired muscle growth caused worse restriction in range of motion and higher and more posteriorly directed glenohumeral forces than did muscle imbalance. Combined muscle imbalance and impaired growth caused the most restricted joint range of motion and the highest joint reaction force of all scenarios. Both muscle imbalance and impaired longitudinal growth contributed to range of motion and force changes consistent with clinically observed deformity, although the most substantial effects resulted from impaired muscle growth. 
Simulations suggest that treatment strategies emphasizing treatment of impaired longitudinal growth are warranted for reducing deformity after brachial plexus birth palsy. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  11. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)

  12. Computer simulations of dendrimer-polyelectrolyte complexes.

    PubMed

    Pandav, Gunja; Ganesan, Venkat

    2014-08-28

    We carry out a systematic analysis of static properties of the clusters formed by complexation between charged dendrimers and linear polyelectrolyte (LPE) chains in a dilute solution under good solvent conditions. We use single chain in mean-field simulations and analyze the structure of the clusters through radial distribution functions of the dendrimer, cluster size, and charge distributions. The effects of LPE length, charge ratio between LPE and dendrimer, the influence of salt concentration, and the dendrimer generation number are examined. Systems with short LPEs showed a reduced propensity for aggregation with dendrimers, leading to formation of smaller clusters. In contrast, larger dendrimers and longer LPEs lead to larger clusters with significant bridging. Increasing salt concentration was seen to reduce aggregation between dendrimers as a result of screening of electrostatic interactions. Generally, maximum complexation was observed in systems with an equal amount of net dendrimer and LPE charges, whereas either excess LPE or dendrimer concentrations resulted in reduced clustering between dendrimers.

  13. RANS study of flow Characteristics Over flight deck of Simplified frigate Ship

    NASA Astrophysics Data System (ADS)

    Shukla, Shrish; Singh, Sidh Nath; Srinivasan, Balaji

    2014-11-01

    The combined operation of a ship and helicopter is ubiquitous in every naval organization. Helicopter landing and takeoff from a ship at sea results in very complex flow phenomena due to the presence of ship air wakes, strong velocity gradients and widely varying turbulence length scales. This complexity is increased by the addition of helicopter downwash during landing and takeoff. The resultant flow is therefore very complicated, and accurate prediction represents a computational challenge. We present Reynolds-averaged Navier-Stokes (RANS) simulations of turbulent flow over a simple frigate ship to gain insight into the flow phenomena over a flight deck. Flow conditions are analyzed numerically over the generic simplified frigate ship. Profiles of mean velocity across longitudinal and transverse planes have been analyzed along the ship. Further, we propose some design modifications in order to reduce pilot load and increase the ship helicopter operation limit (SHOL). Computational results for these modified designs are also presented and their efficacy in reducing the turbulence levels and recirculation zone in the ship air wakes is discussed.

  14. Application of Pinniped Vibrissae to Aeropropulsion

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Ameri, Ali; Poinsatte, Philip; Thurman, Douglas; Wroblewski, Adam; Snyder, Christopher

    2015-01-01

    Vibrissae of Phoca vitulina (Harbor Seal) and Mirounga angustirostris (Elephant Seal) possess undulations along their length. Harbor Seal vibrissae were shown to reduce vortex-induced vibrations and reduce drag compared to appropriately scaled cylinders and ellipses. Samples of Harbor Seal, Elephant Seal and California Sea Lion vibrissae were collected from the Marine Mammal Center in California. CT scanning, microscopy and 3D scanning techniques were utilized to characterize the whiskers. Computational fluid dynamics simulations of the whiskers were carried out to compare them to an ellipse and a cylinder. Leading edge parameters from the whiskers were used to create a 3D profile based on a modern power turbine blade. The NASA SW-2 facility was used to perform wind tunnel cascade testing on the 'Seal Blades'. Computational fluid dynamics simulations were used to study the effect of incidence angles from -37 to +10 degrees on the aerodynamic performance of the Seal Blade. The tests and simulations were conducted at a Reynolds number of 100,000. The Seal Blades showed consistent performance improvements over the baseline configuration. It was determined that a fuel burn reduction of approximately 5 percent could be achieved for a fixed-wing aircraft. Noise reduction potential is also explored.

  15. Computational Analysis of the Effect of Porosity on Shock Cell Strength at Cruise

    NASA Technical Reports Server (NTRS)

    Massey, Steven J.; Elmiligui, Alaa A.; Pao, S. Paul; Abdol-Hamid, Khaled S.; Hunter, Craig A.

    2006-01-01

    A computational flow field analysis is presented of the effect of core cowl porosity on shock cell strength for a modern separate flow nozzle at cruise conditions. The goal of this study was to identify the primary physical mechanisms by which the application of porosity can reduce shock cell strength and hence the broadband shock-associated noise. The flow is simulated by solving the asymptotically steady, compressible, Reynolds-averaged Navier-Stokes equations on a structured grid using an implicit, upwind, flux-difference splitting finite volume scheme. The standard two-equation k-epsilon turbulence model with a linear stress representation is used, with the addition of an eddy viscosity dependence on the total temperature gradient normalized by the local turbulence length scale. Specific issues addressed in this study were the optimal area required to weaken a shock impinging on the core cowl surface and the optimal level of porosity and placement of porous areas for reduction of the overall shock cell strength downstream. Two configurations of porosity were found to reduce downstream shock strength by approximately 50%.

  16. Neighbour lists for smoothed particle hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang

    2018-04-01

    The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
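    The uniform-grid idea behind both neighbour-list variants can be shown in a minimal 2D cell linked list, sketched below in Python for illustration (DualSPHysics itself is C++/CUDA; the function names here are hypothetical). Binning particles into cells of side h reduces the neighbour search to the particle's own cell and its 8 surrounding cells.

```python
import math
from collections import defaultdict

def build_cell_list(positions, h):
    """Bin particles into a uniform grid with cell size equal to the
    smoothing length h, so every neighbour within distance h of a
    particle lies in its own cell or one of the 8 adjacent cells (2D)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // h), int(y // h))].append(i)
    return cells

def neighbours(i, positions, cells, h):
    """Return indices of particles within distance h of particle i,
    scanning only the 3x3 block of cells around particle i."""
    xi, yi = positions[i]
    cx, cy = int(xi // h), int(yi // h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), ()):
                if j != i and math.dist(positions[i], positions[j]) <= h:
                    result.append(j)
    return result
```

    A Verlet list would materialize the output of `neighbours` once per particle and reuse it each iteration, trading the extra memory for fewer distance checks, which is the trade-off the paper quantifies on GPUs.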

  17. Experimental reduction of intromittent organ length reduces male reproductive success in a bug

    PubMed Central

    Dougherty, Liam R.; Rahman, Imran A.; Burdfield-Steel, Emily R.; Greenway, E. V. (Ginny); Shuker, David M.

    2015-01-01

    It is now clear in many species that male and female genital evolution has been shaped by sexual selection. However, it has historically been difficult to confirm correlations between morphology and fitness, as genital traits are complex and manipulation tends to impair function significantly. In this study, we investigate the functional morphology of the elongate male intromittent organ (or processus) of the seed bug Lygaeus simulans, in two ways. We first use micro-computed tomography (micro-CT) and flash-freezing to reconstruct in high resolution the interaction between the male intromittent organ and the female internal reproductive anatomy during mating. We successfully trace the path of the male processus inside the female reproductive tract. We then confirm that male processus length influences sperm transfer by experimental ablation and show that males with shortened processi have significantly reduced post-copulatory reproductive success. Importantly, male insemination function is not affected by this manipulation per se. We thus present rare, direct experimental evidence that an internal genital trait functions to increase reproductive success and show that, with appropriate staining, micro-CT is an excellent tool for investigating the functional morphology of insect genitalia during copulation. PMID:25972470

  18. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails of their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  19. Coupling of laser energy into plasma channels

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Giacone, R. E.; Bruhwiler, D. L.; Busby, R.; Cary, J. R.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2007-04-01

    Diffractive spreading of a laser pulse imposes severe limitations on the acceleration length and maximum electron energy in the laser wake field accelerator (LWFA). Optical guiding of a laser pulse via plasma channels can extend the laser-plasma interaction distance over many Rayleigh lengths. Energy efficient coupling of laser pulses into and through plasma channels is very important for optimal LWFA performance. Results from simulation parameter studies on channel guiding using the particle-in-cell (PIC) code VORPAL [C. Nieter and J. R. Cary, J. Comput. Phys. 196, 448 (2004)] are presented and discussed. The effects that density ramp length and the position of the laser pulse focus have on coupling into channels are considered. Moreover, the effect of laser energy leakage out of the channel domain and the effects of tunneling ionization of a neutral gas on the guided laser pulse are also investigated. Power spectral diagnostics were developed and used to separate pump depletion from energy leakage. The results of these simulations show that increasing the density ramp length decreases the efficiency of coupling a laser pulse to a channel and increases the energy loss when the pulse is vacuum focused at the channel entrance. In that case, large spot size oscillations result in increased energy leakage. To further analyze the coupling, a differential equation is derived for the laser spot size evolution in the simulated plasma density ramp and channel profiles. From the numerical solution of this equation, the optimal spot size and location for coupling into a plasma channel with a density ramp are determined. This result is confirmed by the PIC simulations. They show that specifying a vacuum focus location of the pulse in front of the top of the density ramp leads to an actual focus at the top of the ramp due to plasma focusing, resulting in reduced spot size oscillations.
In this case, the leakage is significantly reduced and is negligibly affected by ramp length, allowing for efficient use of channels with long ramps.

  20. Experimental investigation of cooling perimeter and disturbance length effect on stability of Nb3Sn cable-in-conduit conductors

    NASA Astrophysics Data System (ADS)

    Armstrong, J. R.

    1992-02-01

    The stability of three coils with similar parameters, apart from differing strand diameters, was investigated experimentally using inductive heaters to input disturbances. The stability of one of the coils was also tested by doubling the inductively heated disturbance length to 10 cm. By computationally deriving the approximate inductive heater input energy at 12 T, stability curves show fair agreement with zero-dimensional and one-dimensional computer predictions. Quench velocity and limiting currents also show good agreement with earlier work. Also, the stability measured on one of the coils below its limiting current by disturbing a 10 cm length of conductor was much less than the same sample's stability using a 5 cm disturbance length.

  1. Computer-implemented remote sensing techniques for measuring coastal productivity and nutrient transport systems

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1981-01-01

    An automatic technique has been developed to measure marsh plant production by inference from a species classification derived from Landsat MSS data. A separate computer technique has been developed to calculate the transport path length of detritus and nutrients from their point of origin in the marsh to the shoreline from Landsat data. A nutrient availability indicator, the ratio of production to transport path length, was derived for each marsh-identified Landsat cell. The use of a data base compatible with the Landsat format facilitated data handling and computations.

  2. Progress of projection computed tomography by upgrading of the beamline 37XU of SPring-8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terada, Yasuko, E-mail: yterada@spring8.or.jp; Suzuki, Yoshio; Uesugi, Kentaro

    2016-01-28

    Beamline 37XU at SPring-8 has been upgraded for nano-focusing applications. The length of the beamline has been extended to 80 m. This extended length gives the beamline advantages for experiments such as X-ray focusing, X-ray microscopic imaging and X-ray computed tomography. Projection computed tomography measurements were carried out at experimental hutch 3, located 80 m from the light source. CT images of a microcapsule have been successfully obtained over a wide X-ray energy range.

  3. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756
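    The GLASS overestimation and the bias correction can be illustrated with a toy coalescent simulation, assuming one lineage per taxon per locus and time in coalescent units. This is an illustrative sketch of the special case the abstract calls unbiased, not the authors' derivation in full generality.

```python
import random

def glass_estimate(tau, n_loci, rng):
    """GLASS pairwise divergence estimate: the minimum, over loci, of the
    first interspecific coalescence time.  With one lineage per taxon per
    locus, each coalescence waits an extra Exp(1) (coalescent units)
    beyond the true divergence time tau, so GLASS overestimates tau."""
    return min(tau + rng.expovariate(1.0) for _ in range(n_loci))

def iglass_estimate(tau_hat_glass, n_loci):
    """Bias-corrected estimate: subtract the expected excess, which for
    one lineage per taxon is E[min of L iid Exp(1)] = 1/L."""
    return tau_hat_glass - 1.0 / n_loci
```

    Averaged over many replicates, the raw GLASS estimate converges to tau + 1/L while the corrected estimate converges to tau, matching the claim that the bias vanishes only as the number of loci grows.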

  4. An efficient algorithm for computing fixed length attractors based on bounded model checking in synchronous Boolean networks with biochemical applications.

    PubMed

    Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N

    2015-04-28

    Genetic regulatory networks are the key to understanding biochemical systems. The state of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or the entire state space simultaneously; they cannot identify fixed length attractors directly, and their time complexity increases exponentially with the number and length of the attractors. This study used bounded model checking to quickly locate fixed length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. Comparison with the tool BooleNet in empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
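    For small networks, the fixed-length attractors the paper targets can be found by brute-force enumeration. The sketch below is an illustrative alternative to the authors' SAT-based bounded model checking encoding, not a reimplementation of it; it only scales to a handful of variables.

```python
from itertools import product

def find_fixed_length_attractors(update, n_vars, length):
    """Enumerate attractors (cycles) of exactly the given length in a
    synchronous Boolean network by exploring all 2^n states.
    `update` maps a state tuple to the next state tuple."""
    attractors = set()
    for state in product((0, 1), repeat=n_vars):
        trajectory = [state]
        seen = {state: 0}
        while True:
            nxt = update(trajectory[-1])
            if nxt in seen:  # trajectory has entered a cycle
                cycle = trajectory[seen[nxt]:]
                if len(cycle) == length:
                    attractors.add(frozenset(cycle))
                break
            seen[nxt] = len(trajectory)
            trajectory.append(nxt)
    return attractors
```

    A SAT-based version instead unrolls the update function `length` times, asserts that the final state equals the initial one, and lets the solver search for satisfying assignments, avoiding the exponential sweep over states.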

  5. Superstructures and multijunction cells for high efficiency energy conversion

    NASA Technical Reports Server (NTRS)

    Wagner, M.; Leburton, J. P.

    1985-01-01

    Potential applications of superlattices to photovoltaic structures are discussed. A single-bandgap, multijunction cell with selective electrodes for lateral transport of collected carriers is proposed. The concept is based on similar doping superlattice (NIPI) structures. Computer simulations show that by reducing bulk recombination losses, the spectral response of such cells is enhanced, particularly for poor quality materials with short diffusion lengths. Dark current contributions of additional junctions result in a trade-off between short-circuit current and open-circuit voltage as the number of layers is increased. One or two extra junctions appear to be optimal.

  6. Scattering matrix of arbitrary tight-binding Hamiltonians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramírez, C., E-mail: carlos@ciencias.unam.mx; Medina-Amayo, L.A.

    2017-03-15

    A novel efficient method to calculate the scattering matrix (SM) of arbitrary tight-binding Hamiltonians is proposed, including cases with multiterminal structures. In particular, the SM of two kinds of fundamental structures is given, which can be used to obtain the SM of bigger systems iteratively. Also, a procedure to obtain the SM of layer-composed periodic leads is described. This method allows renormalization approaches, which permit computations over macroscopic-length systems without introducing additional approximations. Finally, the transmission coefficient of a ring-shaped multiterminal system and the transmission function of a square-lattice nanoribbon with a reduced width region are calculated.

  7. A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map

    NASA Astrophysics Data System (ADS)

    Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng

    2017-06-01

    The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map realized on a finite-precision device (e.g. a computer) will suffer from dynamical degradation, which manifests as short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, which is competitive with recently proposed image encryption algorithms.
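    The perturbation idea can be sketched with a toy variant in which a logistic map drives a small additive perturbation of the Baker map's state. This is a simplified single-perturbation illustration (only the state is perturbed, not the parameters), and the parameter values and perturbation size are assumptions, not the paper's settings.

```python
def logistic(x, r=3.99):
    """Digital logistic map used as the auxiliary perturbation source."""
    return r * x * (1.0 - x)

def perturbed_baker(x, y, p, eps=1e-10):
    """One iteration of the Baker map on the unit square, followed by a
    tiny state perturbation drawn from the logistic driver p."""
    if x < 0.5:
        x, y = 2.0 * x, y / 2.0
    else:
        x, y = 2.0 * x - 1.0, y / 2.0 + 0.5
    p = logistic(p)            # advance the auxiliary logistic state
    x = (x + eps * p) % 1.0    # perturb the Baker state variables
    y = (y + eps * p) % 1.0
    return x, y, p
```

    Injecting fresh perturbations each step keeps finite-precision orbits from collapsing onto the short cycles that cause dynamical degradation; the paper's full scheme additionally perturbs the system parameters.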

  8. Statistical Mechanics Provides Novel Insights into Microtubule Stability and Mechanism of Shrinkage

    PubMed Central

    Jain, Ishutesh; Inamdar, Mandar M.; Padinhateeri, Ranjith

    2015-01-01

    Microtubules are nano-machines that grow and shrink stochastically, making use of the coupling between the chemical kinetics and the mechanics of their constituent protofilaments (PFs). We investigate the stability and shrinkage of microtubules taking into account inter-protofilament interactions and the bending interactions of intrinsically curved PFs. Computing the free energy as a function of PF tip position, we show that the competition between curvature energy, inter-PF interaction energy and entropy leads to a rich landscape with a series of minima that repeat over a length scale determined by the intrinsic curvature. Computing Langevin dynamics of the tip through the landscape and accounting for depolymerization, we calculate the average unzippering and shrinkage velocities of GDP protofilaments and compare them with the experimentally known results. Our analysis predicts that the strength of the inter-PF interaction (Ems) has to be comparable to the strength of the curvature energy (Emb), such that Ems − Emb ≈ 1 kBT, and questions the prevalent notion that unzippering results from the domination of the bending energy of curved GDP PFs. Our work demonstrates how the shape of the free energy landscape is crucial in explaining the mechanism of MT shrinkage, in which the unzippered PFs fluctuate among a set of partially peeled-off states and subunit dissociation reduces the length. PMID:25692909

  9. Stimulated Brillouin Scattering Phase Conjugation in Fiber Optic Waveguides

    DTIC Science & Technology

    2008-07-01

    … The discrepancy is reduced since the effective length of the interaction may be limited by the coherence length of the signal laser as in Eq. … In these cases, the coherence length of the pulsed laser typically limits the effective length of the Brillouin scattering interaction. Long-coherence-length lasers with long fiber SBS media have been used to reduce threshold energy, but as indicated at the end of Chapter 2, this has produced …

  10. Drag reduction in plane Couette flow of dilute polymer solutions

    NASA Astrophysics Data System (ADS)

    Liu, Nansheng; Teng, Hao; Lu, Xiyun; Khomami, Bamin

    2017-11-01

    Drag reduction (DR) in plane Couette flow (PCF) by the addition of flexible polymers has been studied by direct numerical simulation (DNS) in this work. Special attention is directed to the similarities and differences in DR features between PCF and plane Poiseuille flow (PPF), and to clarifying the effects of large-scale structures (LSSs) on the near-wall turbulence. It is demonstrated that in the near-wall region the drag-reduced PCF shares the typical DR features reported for the drag-reduced PPF (White & Mungal 2008; Graham 2014); in the core region, however, intriguing differences are found between these two drag-reduced shear flows of polymeric solutions. Specifically, in the core region of the drag-reduced PCF, the polymer chains are stretched substantially and absorb kinetic energy from the turbulent fluctuations. Correspondingly, peak values of the conformation tensor components Cyy and Czz occur in the core region. This finding is strikingly different from that of the drag-reduced PPF. For the drag-reduced PCF, the LSSs are found to have monotonically increasing effects on the near-wall flow as the Weissenberg number increases, while their spanwise length scale remains unchanged. This work is supported by the NSFC Grants 11272306 and 11472268 and the NSF Grant CBET0755269. This research was also supported in part by an allocation of advanced computational resources on DARTER by the National Institute for Computational Sciences (NICS).

  11. Apparatus and method for classifying fuel pellets for nuclear reactor

    DOEpatents

    Wilks, Robert S.; Sternheim, Eliezer; Breakey, Gerald A.; Sturges, Jr., Robert H.; Taleff, Alexander; Castner, Raymond P.

    1984-01-01

    Control for the operation of a mechanical handling and gauging system for nuclear fuel pellets. The pellets are inspected for diameter, length, surface flaws and weight in successive stations. The control includes a computer for commanding the operation of the system and its electronics and for storing and processing the complex data derived at the required high rate. In measuring the diameter, the computer enables the measurement of a calibration pellet, stores that calibration data, and computes and stores diameter-correction factors and their addresses along a pellet. To each diameter measurement a correction factor is applied at the appropriate address. The computer commands verification that all critical parts of the system and control are set for inspection and that each pellet is positioned for inspection. During each cycle of inspection, the measurement operation proceeds normally irrespective of whether a pellet is present in each station. If a pellet is not positioned in a station, a measurement is still recorded, but the recorded measurement indicates maloperation. In measuring diameter and length, a light pattern comprising successive shadows of slices (transverse for diameter, longitudinal for length) is projected on a photodiode array. The light pattern is scanned electronically by a train of pulses, which are counted during the scan of the lighted diodes. For evaluation of diameter, the maximum diameter count and the number of slices for which the diameter exceeds a predetermined minimum are determined. For acceptance, the maximum count must be less than a maximum level and the number of qualifying slices must exceed a set number. For evaluation of length, the maximum length is determined. For acceptance, the length must be within maximum and minimum limits.
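
    The accept/reject logic described for the diameter station can be sketched as follows. The function name, argument layout and all thresholds are hypothetical, reconstructed only from the description above:

```python
def evaluate_diameter(slices, correction, max_count, min_count, min_slices):
    """Sketch of the patent's diameter evaluation: 'slices' holds the
    per-slice shadowed-diode counts along the pellet, 'correction' the
    stored per-address calibration factors. Names and thresholds are
    illustrative, not taken from the patent."""
    # apply the stored correction factor at each address along the pellet
    corrected = [c * k for c, k in zip(slices, correction)]
    peak = max(corrected)
    # count slices whose corrected diameter exceeds the predetermined minimum
    wide_enough = sum(1 for c in corrected if c > min_count)
    # accept only if the peak stays under the limit and enough slices qualify
    return peak < max_count and wide_enough > min_slices

print(evaluate_diameter([100, 102, 101], [1.0, 1.0, 1.0],
                        max_count=110, min_count=95, min_slices=2))
```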

  12. Building A Community Focused Data and Modeling Collaborative platform with Hardware Virtualization Technology

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.

    2009-12-01

    As the length and diversity of the global Earth observation data record grow, modeling and analysis of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth growth beginning to flatten, transmission of these data from centralized archives presents an increasing challenge, and the costs associated with local storage and management of data and compute resources are often significant for individual research and application-development efforts. Sharing community-valued intermediary data sets, results and code from individual efforts with others outside directly funded collaborations can also be a challenge with respect to time, cost and expertise. We propose a modeling, data and knowledge center that houses NASA satellite data, climate data and ancillary data, where a focused community may come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis and compute environments that are customizable, archivable and transferable. Allowing users to instantiate such environments on large compute infrastructures directly connected to large data archives may significantly reduce the costs and time of scientific efforts by relieving users from redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also offers users indirect expert assistance through prefabricated compute environments, potentially reducing study "ramp up" times.

  13. History-dependence of muscle slack length following contraction and stretch in the human vastus lateralis.

    PubMed

    Stubbs, Peter W; Walsh, Lee D; D'Souza, Arkiev; Héroux, Martin E; Bolsterlee, Bart; Gandevia, Simon C; Herbert, Robert D

    2018-06-01

    In reduced muscle preparations, the slack length and passive stiffness of muscle fibres have been shown to be influenced by previous muscle contraction or stretch. In human muscles, such behaviours have been inferred from measures of muscle force, joint stiffness and reflex magnitudes and latencies. Using ultrasound imaging, we directly observed that isometric contraction of the vastus lateralis muscle at short lengths reduces the slack lengths of the muscle-tendon unit and muscle fascicles. The effect is apparent 60 s after the contraction. These observations imply that muscle contraction at short lengths causes the formation of bonds which reduce the effective length of structures that generate passive tension in muscles. In reduced muscle preparations, stretch and muscle contraction change the properties of relaxed muscle fibres. In humans, effects of stretch and contraction on properties of relaxed muscles have been inferred from measurements of time taken to develop force, joint stiffness and reflex latencies. The current study used ultrasound imaging to directly observe the effects of stretch and contraction on muscle-tendon slack length and fascicle slack length of the human vastus lateralis muscle in vivo. The muscle was conditioned by (a) strong isometric contractions at long muscle-tendon lengths, (b) strong isometric contractions at short muscle-tendon lengths, (c) weak isometric contractions at long muscle-tendon lengths and (d) slow stretches. One minute after conditioning, ultrasound images were acquired from the relaxed muscle as it was slowly lengthened through its physiological range. The ultrasound image sequences were used to identify muscle-tendon slack angles and fascicle slack lengths. 
Contraction at short muscle-tendon lengths caused a mean 13.5 degree (95% CI 11.8-15.0 degree) shift in the muscle-tendon slack angle towards shorter muscle-tendon lengths, and a mean 5 mm (95% CI 2-8 mm) reduction in fascicle slack length, compared to the other conditions. A supplementary experiment showed the effect could be demonstrated if the muscle was conditioned by contraction at short lengths but not if the relaxed muscle was held at short lengths, confirming the role of muscle contraction. These observations imply that muscle contraction at short lengths causes the formation of bonds which reduce the effective length of structures that generate passive tension in muscles. © 2018 The Authors. The Journal of Physiology © 2018 The Physiological Society.

  14. Computer-delivered interventions for reducing alcohol consumption: meta-analysis and meta-regression using behaviour change techniques and theory.

    PubMed

    Black, Nicola; Mullan, Barbara; Sharpe, Louise

    2016-09-01

    The aim was to examine the effectiveness of behaviour change techniques (BCTs), theory and other characteristics in increasing the effectiveness of computer-delivered interventions (CDIs) to reduce alcohol consumption. Included were randomised studies with a primary aim of reducing alcohol consumption, which compared self-directed CDIs to assessment-only control groups. CDIs were coded for the use of 42 BCTs from an alcohol-specific taxonomy, the use of theory according to a theory coding scheme, and general characteristics such as the length of the CDI. Effectiveness of CDIs was assessed using random-effects meta-analysis, and the association between the moderators and effect size was assessed using univariate and multivariate meta-regression. Ninety-three CDIs were included in at least one analysis and produced small, significant effects on five outcomes (d+ = 0.07-0.15). Larger effects occurred with some personal contact, provision of normative information or feedback on performance, prompting commitment or goal review, the social norms approach, and in samples with more women. Smaller effects occurred when information on the consequences of alcohol consumption was provided. These findings can inform both intervention and theory development: intervention developers should focus on including specific, effective techniques rather than many techniques or more elaborate approaches.
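
    The random-effects pooling step named above can be sketched with the DerSimonian-Laird estimator, the standard textbook method for this design; the three trial effect sizes and variances below are invented for illustration, not data from the review:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes:
    a textbook sketch of the meta-analytic model named above, not the
    authors' code. Returns the pooled effect and its standard error."""
    w = [1 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_re = [1 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re))

# three hypothetical CDI trials with small effects on consumption
d, se = dersimonian_laird([0.10, 0.15, 0.05], [0.01, 0.02, 0.015])
print(round(d, 3), round(se, 3))
```

    When the heterogeneity statistic Q falls below its degrees of freedom, as in this toy example, the between-study variance estimate is truncated at zero and the pooled estimate coincides with the fixed-effect one.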

  15. Advancing the efficiency and efficacy of patient reported outcomes with multivariate computer adaptive testing.

    PubMed

    Morris, Scott; Bass, Mike; Lee, Mirinae; Neapolitan, Richard E

    2017-09-01

    The Patient Reported Outcomes Measurement Information System (PROMIS) initiative developed an array of patient-reported outcome (PRO) measures. To reduce the number of questions administered, PROMIS utilizes unidimensional item response theory and unidimensional computer adaptive testing (UCAT), meaning a separate set of questions is administered for each measured trait. Multidimensional item response theory (MIRT) and multidimensional computer adaptive testing (MCAT) assess correlated traits simultaneously. The objective was to investigate the extent to which MCAT reduces patient burden relative to UCAT in the case of PROs. One MIRT and three unidimensional item response theory models were developed using the related traits anxiety, depression, and anger. Using these models, MCAT and UCAT performance was compared on simulated individuals. Surprisingly, the root mean squared error for both methods increased with the number of items. These results were driven by large errors for individuals with low trait levels. A second analysis focused on individuals aligned with item content. For these individuals, both MCAT and UCAT accuracies improved with additional items; furthermore, MCAT reduced the test length by 50%. For the PROMIS Emotional Distress banks, neither UCAT nor MCAT provided accurate estimates for individuals at low trait levels: because the items in these banks were designed to detect clinical levels of distress, there is little information for individuals with low trait values. However, trait estimates for individuals targeted by the banks were accurate, and MCAT asked substantially fewer questions. By reducing the number of items administered, MCAT can allow clinicians and researchers to assess a wider range of PROs with less patient burden. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
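
    The unidimensional CAT item-selection rule that UCAT refers to is commonly maximum Fisher information under a 2PL model; a minimal sketch follows, with an invented three-item bank (the PROMIS banks and selection details are not reproduced here):

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function: discrimination a, difficulty b
    return 1 / (1 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    # item information a^2 * p * (1 - p) under the 2PL model
    p = p_correct(theta, a, b)
    return a * a * p * (1 - p)

def next_item(theta, bank, asked):
    """Pick the unasked item with maximal Fisher information at the current
    trait estimate -- the usual unidimensional CAT rule. The item bank here
    is invented for illustration."""
    candidates = [i for i in range(len(bank)) if i not in asked]
    return max(candidates, key=lambda i: fisher_info(theta, *bank[i]))

bank = [(1.2, -1.0), (1.5, 0.0), (0.8, 2.0)]   # (discrimination, difficulty)
print(next_item(0.1, bank, set()))             # most informative item near theta = 0.1
```

    The abstract's low-trait-level failure mode is visible in this picture: when every item's difficulty sits far above the examinee's trait level, `fisher_info` is small for all items and no selection rule can recover much precision.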

  16. Development and Psychometric Evaluation of a New Instrument for Measuring Sleep Length and Television and Computer Habits of Swedish School-Age Children

    ERIC Educational Resources Information Center

    Garmy, Pernilla; Jakobsson, Ulf; Nyberg, Per

    2012-01-01

    The aim was to develop a new instrument for measuring length of sleep as well as television and computer habits in school-age children. A questionnaire was constructed for use when children visit the school health care unit. Three aspects of the validity of the questionnaire were examined: its face validity, content validity, and construct…

  17. Design Matters: The Impact of CAPI on Interview Length

    ERIC Educational Resources Information Center

    Watson, Nicole; Wilkins, Roger

    2015-01-01

    Computer-assisted personal interviewing (CAPI) offers many attractive benefits over paper-and-pencil interviewing. There is, however, mixed evidence on the impact of CAPI on interview "length," an important survey outcome in the context of length limits imposed by survey budgets and concerns over respondent burden. In this article,…

  18. Walking velocity and step length adjustments affect knee joint contact forces in healthy weight and obese adults.

    PubMed

    Milner, Clare E; Meardon, Stacey A; Hawkins, Jillian L; Willson, John D

    2018-04-28

    Knee osteoarthritis is a major public health problem, and adults with obesity are particularly at risk. One approach to alleviating this problem is to reduce the mechanical load at the joint during daily activity. Adjusting temporospatial parameters of walking could mitigate cumulative knee joint mechanical loads. The purpose of this study was to determine how adjustments to velocity and step length affect knee joint loading in healthy-weight adults and adults with obesity. We collected three-dimensional gait analysis data on 10 adults with a normal body mass index and 10 adults with obesity during overground walking in nine different conditions: in addition to preferred velocity and step length, we tested combinations of 15% increased and decreased velocity and step length. Peak tibiofemoral joint impulse and knee adduction angular impulse were reduced in the decreased step length conditions in both healthy-weight adults (main effect) and those with obesity (interaction effect). Peak knee adduction moment was also reduced with decreased step length, and with decreased velocity, in both groups. We conclude from these results that adopting shorter step lengths during daily activity and when walking for exercise can reduce the mechanical stimuli associated with articular cartilage degenerative processes in adults with and without obesity. Thus, walking with a reduced step length may lower knee joint loading and benefit adults at risk for disability due to knee osteoarthritis. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 9999:XX-XX, 2018.
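
    The "knee adduction angular impulse" outcome above is the time integral of the adduction moment over stance, which a trapezoidal rule computes directly. The sample waveform below is synthetic, purely for illustration:

```python
def angular_impulse(moment, dt):
    """Trapezoidal integral of a joint-moment time series (N*m) sampled at
    interval dt (s), giving the angular impulse in N*m*s."""
    return sum((moment[i] + moment[i + 1]) / 2 * dt
               for i in range(len(moment) - 1))

# synthetic adduction-moment curve over stance, sampled at 100 Hz
moment = [0.0, 10.0, 20.0, 15.0, 5.0, 0.0]
print(angular_impulse(moment, 0.01))
```

    Shorter steps at a given speed shrink both the peak of this curve and the stance duration per stride, which is why both the peak moment and the angular impulse fall in the reduced-step-length conditions.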

  19. Using Video Interaction Guidance to Develop Intrapersonal and Interpersonal Skills in Professional Training for Educational Psychologists

    ERIC Educational Resources Information Center

    Hayes, Ben; Dewey, Jessica; Sancho, Michelle

    2014-01-01

    In this study we assessed the effects of paragraph length on the reading speed and comprehension of students. Students were randomly assigned to one of three groups: short paragraph length (SPL), medium paragraph length (MPL), or long paragraph length (LPL). Students read a 1423 word text on a computer screen formatted to align with their group…

  20. Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2017-10-01

    Current modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding: the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the meter range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for pulse propagation. Then only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)². This makes wide-ranging parameter studies practical and suits the regime λ0 << λp. We present the 3D version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. Further, we discuss and characterize the validity of the PGC solver for injection schemes at plasma scale lengths, such as down-ramp injection, magnetic injection and ionization injection, through parametric studies, full PIC simulations and theoretical scalings. This work was partially supported by Fundacao para a Ciencia e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.
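
    The quoted speedup (λp/λ0)² is easy to evaluate for typical LWFA numbers; the density and wavelength below are illustrative choices, not parameters from the abstract:

```python
import math

def plasma_skin_depth_um(n_e_cm3):
    # collisionless skin depth c/omega_p in micrometres, with
    # omega_p = sqrt(n e^2 / (m_e eps_0)); SI constants rounded
    omega_p = math.sqrt(n_e_cm3 * 1e6 * (1.602e-19) ** 2 /
                        (9.109e-31 * 8.854e-12))
    return 3e8 / omega_p * 1e6

# speedup (lambda_p / lambda_0)^2 for a 0.8 um laser in a 1e18 cm^-3
# plasma (illustrative numbers; the abstract denotes the skin depth by lambda_p)
lam0 = 0.8
lam_p = plasma_skin_depth_um(1e18)
print(round((lam_p / lam0) ** 2))
```

    At this density the skin depth is a few micrometres, so the envelope treatment buys one to two orders of magnitude in grid resolution per dimension.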

  1. Microscopic theory of topologically entangled fluids of rigid macromolecules

    NASA Astrophysics Data System (ADS)

    Sussman, Daniel M.; Schweizer, Kenneth S.

    2011-06-01

    We present a first-principles theory for the slow dynamics of a fluid of entangling rigid crosses of zero excluded volume based on a generalization of the dynamic mean-field approach of Szamel for infinitely thin nonrotating rods. The latter theory exactly includes topological constraints at the two-body collision level and self-consistently renormalizes an effective diffusion tensor to account for many-body effects. Remarkably, it predicts scaling laws consistent with the phenomenological reptation-tube predictions of Doi and Edwards for the long-time diffusion and the localization length in the heavily entangled limit. We generalize this approach to a different macromolecular architecture, infinitely thin three-dimensional crosses, and also extend the range of densities over which a dynamic localization length can be calculated for rods. Ideal gases of nonrotating crosses have recently received attention in computer simulations and are relevant as a simple model of both a strong-glass former and entangling star-branched polymers. Comparisons of our theory with these simulations reveal reasonable agreement for the magnitude and reduced density dependence of the localization length and also the self-diffusion constant if the consequences of local density fluctuations are taken into account.

  2. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    NASA Astrophysics Data System (ADS)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is attracting attention as a means to reduce human losses from devastating tsunami inundation. We developed a forecast method that combines hierarchical Bayesian inversion against a pre-computed database with rapid post-computation of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai trough and the historical Genroku (1703) and Enpo (1677) earthquakes. In general, a dense observation array near the tsunami source improves both the accuracy and the rapidness of tsunami forecasts. To examine the effect of observation time length, we used four data windows of 5, 10, 20 and 45 minutes after earthquake occurrence. Prediction accuracy was evaluated against the simulated tsunami inundation areas around Tokyo Bay for each target earthquake; here, an accurate prediction means the simulated values fall within the 95% credible intervals of the prediction. The shortest window yielding an accurate prediction varied with the target earthquake: in the Enpo case, a 5-minute observation suffices for Tokyo Bay, but 10 minutes and 45 minutes are needed for the Nankai trough and Genroku cases, respectively. The difference in the shortest sufficient window shows a strong relationship with the relative distance between the tsunami source and the observation arrays. In the Enpo case, offshore observation points are densely distributed even within the source region, so an accurate prediction can be achieved within 5 minutes, which is useful for early warnings. Even in the worst case, Genroku, where fewer observation points are available near the source, an accurate prediction can be obtained within 45 minutes. This information can help outline the hazard at an early stage of the response.

  3. SU-F-T-65: Automatic Treatment Planning for High-Dose Rate (HDR) Brachytherapy with a Vaginal Cylinder Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Y; Tan, J; Jiang, S

    Purpose: High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually. Yet it is highly desirable to perform computerized, automated planning to improve treatment planning efficiency, eliminate human errors, and reduce plan quality variation. The goal of this research is to develop an automatic treatment planning tool for HDR brachytherapy with a cylinder applicator for vaginal cancer. Methods: After the cylinder applicator was inserted into the patient, a CT scan was acquired and loaded into in-house treatment planning software. The cylinder applicator was automatically segmented using image-processing techniques. The CTV was generated based on the user-specified treatment depth and length. The locations of relevant points (apex point, prescription point, and vaginal surface point), the central applicator channel coordinates, and the dwell positions were determined from their geometric relations with the applicator. Dwell times were computed through an inverse optimization process. The planning information was written into DICOM-RT plan and structure files to transfer the automatically generated plan to a commercial treatment planning system for plan verification and delivery. Results: We tested the system retrospectively on nine patients treated with a vaginal cylinder applicator. These cases were selected with different treatment prescriptions, lengths, depths, and cylinder diameters to represent a large patient population. Our system generated treatment plans of clinically acceptable quality for these cases, with computation times of 3-6 min. Conclusion: We have developed a system to perform automated treatment planning for HDR brachytherapy with a cylinder applicator. This novel system greatly improves treatment planning efficiency and reduces plan quality variation. It also serves as a testbed demonstrating the feasibility of automatic HDR treatment planning for more complicated cases.

  4. Reduced step length reduces knee joint contact forces during running following anterior cruciate ligament reconstruction but does not alter inter-limb asymmetry.

    PubMed

    Bowersock, Collin D; Willy, Richard W; DeVita, Paul; Willson, John D

    2017-03-01

    Anterior cruciate ligament reconstruction is associated with early-onset knee osteoarthritis. Running is a typical activity following this surgery, but elevated knee joint contact forces are thought to contribute to osteoarthritis degenerative processes. It is therefore clinically relevant to identify interventions that reduce contact forces during running among individuals after anterior cruciate ligament reconstruction. The primary purpose of this study was to evaluate the effect of reducing step length during running on patellofemoral and tibiofemoral joint contact forces among people with a history of anterior cruciate ligament reconstruction. Inter-limb differences in knee joint contact forces during running were also examined. 18 individuals, on average 54.8 months after unilateral anterior cruciate ligament reconstruction, ran in 3 step length conditions (preferred, -5%, -10%). Bilateral patellofemoral, tibiofemoral, and medial tibiofemoral compartment peak force, loading rate, impulse, and impulse per kilometer were evaluated between step length conditions and limbs using separate 2-factor analyses of variance. Reducing step length 5% decreased patellofemoral, tibiofemoral, and medial tibiofemoral compartment peak force, impulse, and impulse per kilometer bilaterally. A 10% step length reduction further decreased peak forces and force impulses, but did not further reduce force impulses per kilometer. Tibiofemoral joint impulse, impulse per kilometer, and patellofemoral joint loading rate were lower in the previously injured limb compared to the contralateral limb. Running with a shorter step length is a feasible clinical intervention to reduce knee joint contact forces during running among people with a history of anterior cruciate ligament reconstruction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy

    NASA Astrophysics Data System (ADS)

    Ajdari, Ali; Ghate, Archis; Kim, Minsun

    2018-04-01

    Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.
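
    The structure of the treatment-length decision, choosing the number of remaining sessions that maximizes tumor cell kill under a biologically-effective-dose (BED) cap on an organ-at-risk, can be illustrated with a one-dimensional linear-quadratic toy model. Everything here (parameter values, the single-OAR constraint, the fixed OAR-to-tumor dose ratio) is an invented simplification of the convex programs described above:

```python
import math

def bed(n, d, alpha_beta):
    # biologically effective dose of n fractions of d Gy (LQ model)
    return n * d * (1 + d / alpha_beta)

def best_treatment_length(n_max, oar_bed_limit, alpha, beta, alpha_beta_oar,
                          oar_dose_ratio=0.5):
    """Toy adaptive treatment-length choice: for each candidate number of
    remaining sessions n, take the largest per-session tumor dose d whose
    OAR BED stays under the limit, and keep the n minimizing LQ tumor cell
    survival. All parameter values are illustrative."""
    best = None
    for n in range(1, n_max + 1):
        r = oar_dose_ratio
        # solve n*(r*d)*(1 + r*d/ab_oar) = limit, a quadratic in d
        a_q = n * r * r / alpha_beta_oar
        d = (-n * r + math.sqrt((n * r) ** 2 + 4 * a_q * oar_bed_limit)) / (2 * a_q)
        log_survival = -n * (alpha * d + beta * d * d)   # more negative = more kill
        if best is None or log_survival < best[1]:
            best = (n, log_survival)
    return best[0]

print(best_treatment_length(35, 100.0, 0.35, 0.035, 3.0))
```

    With these illustrative numbers the OAR's effective fractionation sensitivity favors spreading the dose out, so the longest allowed course wins; raising the OAR dose ratio or its α/β flips the answer toward hypofractionation, mirroring why a one-size-fits-all course length can be suboptimal.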

  6. Internode length is reduced during myelination and remyelination by neurofilament medium phosphorylation in motor axons.

    PubMed

    Villalón, Eric; Barry, Devin M; Byers, Nathan; Frizzi, Katie; Jones, Maria R; Landayan, Dan S; Dale, Jeffrey M; Downer, Natalie L; Calcutt, Nigel A; Garcia, Michael L

    2018-05-14

    The distance between nodes of Ranvier, referred to as the internode length, positively correlates with axon diameter and is optimized during development to ensure maximal neuronal conduction velocity. Following myelin loss, internode length is reestablished through remyelination; however, remyelination results in short internode lengths and reduced conduction rates. We analyzed the potential role of neurofilament phosphorylation in regulating internode length during remyelination and myelination. Following ethidium bromide-induced demyelination, levels of neurofilament medium (NF-M) and heavy (NF-H) phosphorylation were unaffected. Preventing NF-M lysine-serine-proline (KSP) repeat phosphorylation increased internode length by 30% after remyelination. To further analyze the role of NF-M phosphorylation in regulating internode length, gene replacement was used to produce mice in which all KSP serine residues were replaced with glutamate to mimic constitutive phosphorylation. Mimicking constitutive KSP phosphorylation reduced internode length by 16% during myelination and motor nerve conduction velocity by ~27% without altering sensory nerve structure or function. Our results suggest that NF-M KSP phosphorylation is part of a cooperative mechanism between axons and Schwann cells that together determine internode length, and that motor and sensory axons utilize different mechanisms to establish internode length. Copyright © 2018. Published by Elsevier Inc.

  7. Lower airway dimensions in pediatric patients-A computed tomography study.

    PubMed

    Szelloe, Patricia; Weiss, Markus; Schraner, Thomas; Dave, Mital H

    2017-10-01

    The aim of this study was to obtain lower airway dimensions in children by means of computed tomography (CT). Chest CT scans from 195 pediatric patients (118 boys/77 girls) aged 0.04-15.99 years were analyzed. Tracheal and bronchial lengths, anterior-posterior and lateral diameters, as well as cross-sectional area were assessed at the following levels: mid trachea, right proximal and distal bronchus, proximal bronchus intermedius, and left proximal and distal bronchus. Mediastinal angles of tracheal bifurcation were measured. Data were analyzed by means of linear and polynomial regression plots. The strongest correlations were found between tracheal and bronchial diameters and age as well as between tracheal and bronchial lengths and body length. All measured airway parameters correlated poorly to body weight. Bronchial angles revealed no association with patient's age, body length, or weight. This comprehensive anatomical database of lower airway dimensions demonstrates that tracheal and bronchial diameters correlate better to age, and that tracheal and bronchial length correlate better to body length. All measured airway parameters correlated poorly to body weight. © 2017 John Wiley & Sons Ltd.

  8. Comparison of sound power radiation from isolated airfoils and cascades in a turbulent flow.

    PubMed

    Blandeau, Vincent P; Joseph, Phillip F; Jenkins, Gareth; Powles, Christopher J

    2011-06-01

    An analytical model of the sound power radiated from a flat plate airfoil of infinite span in a 2D turbulent flow is presented. The effects of stagger angle on the radiated sound power are included so that the sound power radiated upstream and downstream relative to the fan axis can be predicted. Closed-form asymptotic expressions, valid at low and high frequencies, are provided for the upstream, downstream, and total sound power. A study of the effects of chord length on the total sound power at all reduced frequencies is presented. Excellent agreement for frequencies above a critical frequency is shown between the fast analytical isolated airfoil model presented in this paper and an existing, computationally demanding, cascade model, in which the unsteady loading of the cascade is computed numerically. Reasonable agreement is also observed at low frequencies for low solidity cascade configurations. © 2011 Acoustical Society of America

  9. High temperature phonon dispersion in graphene using classical molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anees, P., E-mail: anees@igcar.gov.in; Panigrahi, B. K.; Valsakumar, M. C., E-mail: anees@igcar.gov.in

    2014-04-24

    Phonon dispersion and phonon density of states of graphene are calculated using classical molecular dynamics simulations. In this method, the dynamical matrix is constructed based on linear response theory by computing the displacement of atoms during the simulations. The computed phonon dispersions show excellent agreement with experiments. Simulations were performed in both NVT and NPT ensembles at 300 K, and the LO/TO modes were found to harden at the Γ point. The NPT ensemble simulations capture the anharmonicity of the crystal accurately, and the hardening of the LO/TO modes is more pronounced. We also found that at 300 K the C-C bond length reduces below the equilibrium value and the ZA bending mode frequency becomes imaginary close to Γ along the K-Γ direction, which indicates instability of flat 2D graphene sheets.
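
    As a hedged illustration of the dispersion-from-dynamical-matrix idea, the sketch below uses a 1D monatomic chain with an assumed spring constant rather than graphene's MD-derived dynamical matrix:

```python
import numpy as np

# Minimal sketch of obtaining a phonon dispersion from a dynamical matrix,
# for a 1D monatomic chain with nearest-neighbor springs (not graphene).
# In the record, the dynamical matrix is built from MD displacements; here
# it is analytic: D(q) = (2k/m)(1 - cos qa), omega(q) = sqrt(D(q)).
k = 10.0   # spring constant (arbitrary units) -- assumed value
m = 1.0    # atomic mass
a = 1.0    # lattice constant

q = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
D = (2.0 * k / m) * (1.0 - np.cos(q * a))     # 1x1 dynamical "matrix"
omega = np.sqrt(D)                            # phonon frequencies

# Acoustic branch: zero at the zone center, maximal (2*sqrt(k/m)) at the edge.
print(f"omega at q=0:       {omega[100]:.4f}")
print(f"omega at zone edge: {omega[-1]:.4f}")
```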

  10. One-dimensional nonlinear theory for rectangular helix traveling-wave tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Chengfang, E-mail: fchffchf@126.com; Zhao, Bo; Yang, Yudong

    A 1-D nonlinear theory of a rectangular helix traveling-wave tube (TWT) interacting with a ribbon beam is presented in this paper. The RF field is modeled by a transmission line equivalent circuit, the ribbon beam is divided into a sequence of thin rectangular electron discs with the same cross section as the beam, and the charges are assumed to be uniformly distributed over these discs. A method of computing the space-charge field by solving Green's function in the Cartesian coordinate system is then fully described. Nonlinear partial differential equations for the field amplitudes and Lorentz force equations for the particles are solved numerically using the fourth-order Runge-Kutta technique. The tube's gain, output power, and efficiency are computed. The results show that increasing the cross section of the ribbon beam improves a rectangular helix TWT's efficiency and reduces the saturated length.
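
    The numerical integration step named in the abstract, the classical fourth-order Runge-Kutta technique, can be sketched generically; here it is applied to the test equation y' = -y rather than to the TWT field and force equations:

```python
import math

# Generic classical fourth-order Runge-Kutta step for y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)
    k3 = f(t + h / 2.0, y + h * k2 / 2.0)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Test equation y' = -y, y(0) = 1, exact solution y(t) = e^{-t}.
f = lambda t, y: -y
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):          # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h

print(f"RK4 result: {y:.10f}  exact: {math.exp(-1.0):.10f}")
```

    In the paper's setting, y would be the vector of field amplitudes and particle phase-space coordinates advanced along the tube axis.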

  11. Slack length reduces the contractile phenotype of the Swine carotid artery.

    PubMed

    Rembold, Christopher M; Garvey, Sean M; Tejani, Ankit D

    2013-01-01

    Contraction is the primary function of adult arterial smooth muscle. However, in response to vessel injury or inflammation, arterial smooth muscle is able to phenotypically modulate from the contractile state to several 'synthetic' states characterized by proliferation, migration and/or increased cytokine secretion. We examined the effect of tissue length (L) on the phenotype of intact, isometrically held, initially contractile swine carotid artery tissues. Tissues were studied (1) without prolonged incubation at the optimal length for force generation (1.0 Lo, control), (2) with prolonged incubation for 17 h at 1.0 Lo, or (3) with prolonged incubation at slack length (0.6 Lo) for 16 h and then restoration to 1.0 Lo for 1 h. Prolonged incubation at 1.0 Lo minimally reduced the contractile force without substantially altering the mediators of contraction (crossbridge phosphorylation, shortening velocity or stimulated actin polymerization). Prolonged incubation of tissues at slack length (0.6 Lo), despite return of length to 1.0 Lo, substantially reduced contractile force, reduced crossbridge phosphorylation, nearly abolished crossbridge cycling (shortening velocity) and abolished stimulated actin polymerization. These data suggest that (1) slack length treatment significantly alters the contractile phenotype of arterial tissue, and (2) slack length treatment is a model to study acute phenotypic modulation of intact arterial smooth muscle. Copyright © 2013 S. Karger AG, Basel.

  12. Better Diet Quality during Pregnancy Is Associated with a Reduced Likelihood of an Infant Born Small for Gestational Age: An Analysis of the Prospective New Hampshire Birth Cohort Study.

    PubMed

    Emond, Jennifer A; Karagas, Margaret R; Baker, Emily R; Gilbert-Diamond, Diane

    2018-01-01

    Birth weight has a U-shaped relation with chronic disease. Diet quality during pregnancy may impact fetal growth and infant birth weight, yet findings are inconclusive. We examined the relation between maternal diet quality during pregnancy and infant birth size among women enrolled in a prospective birth cohort. Women 18-45 y old with a singleton pregnancy were recruited at 24-28 wk of gestation from prenatal clinics in New Hampshire. Women completed a validated food frequency questionnaire at enrollment. Diet quality was computed as adherence to the Alternative Healthy Eating Index. Infant birth outcomes (sex, head circumference, weight, and length) were extracted from medical records. Weight-for-length z scores, low birth weight, macrosomia, and size for gestational age [small for gestational age (SGA) or large for gestational age (LGA)] were computed. Multivariable regression models fit each outcome on quartiles of diet quality, adjusted for covariates. Models were computed overall and stratified by smoking status. Analyses included 862 women and infants with complete data. Lower diet quality was associated with lower maternal education, being a smoker, prepregnancy obesity status, and lack of exercise during pregnancy. Overall, 3.4% of infants were born with a low birth weight, 12.1% with macrosomia, 4.6% were SGA, and 8.7% were LGA. In an adjusted model, increased diet quality appeared linearly associated with a reduced likelihood of SGA (P-trend = 0.03), although each quartile comparison did not reach statistical significance. Specifically, ORs for SGA were 0.89 (95% CI: 0.37, 2.15), 0.73 (95% CI: 0.28, 1.89), and 0.35 (95% CI: 0.11, 1.08) for each increasing quartile of diet quality compared to the lowest quartile. Similar trends for SGA were observed among non-smokers (n = 756; P-trend = 0.07). 
Also among non-smokers, increased diet quality was associated with lower infant birth weight (P-trend = 0.03) and a suggested reduction in macrosomia (P-trend = 0.07). Increased diet quality during pregnancy was related to a reduced risk of SGA in this cohort of pregnant women from New Hampshire. Additional studies are needed to elucidate the relation between maternal diet quality and macrosomia. © 2018 American Society for Nutrition. All rights reserved.

  13. Slicing for Biology.

    ERIC Educational Resources Information Center

    Ekstrom, James

    2001-01-01

    Advocates using computer imaging technology to assist students in doing projects in which determining density is important. Students can study quantitative comparisons of masses, lengths, and widths using computer software. Includes figures displaying computer images of shells, yeast cultures, and the Aral Sea. (SAH)

  14. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science, and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks to minimize the execution time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem through basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem within the proper length range in less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
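
    A purely classical sketch of the underlying optimization problem (not the DNA-based procedure): exact search over all m^n assignments is exponential, which motivates the massively parallel approach, while a greedy heuristic runs fast. The job durations and machine count below are illustrative:

```python
from itertools import product

def makespan(jobs, assignment, m):
    """Latest finish time when job i runs on machine assignment[i]."""
    loads = [0] * m
    for job, machine in zip(jobs, assignment):
        loads[machine] += job
    return max(loads)

def exact_makespan(jobs, m):
    """Brute force over all m^n assignments -- exponential, viable only for tiny n."""
    return min(makespan(jobs, a, m) for a in product(range(m), repeat=len(jobs)))

def lpt_makespan(jobs, m):
    """Longest-processing-time-first greedy heuristic (fast, near-optimal)."""
    loads = [0] * m
    for job in sorted(jobs, reverse=True):
        loads[loads.index(min(loads))] += job
    return max(loads)

jobs = [7, 5, 4, 3, 3, 2]   # illustrative job durations
print("exact makespan :", exact_makespan(jobs, 2))
print("greedy makespan:", lpt_makespan(jobs, 2))
```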

  15. Moment method analysis of linearly tapered slot antennas: Low loss components for switched beam radiometers

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan; Trew, Robert J.; Kauffman, J. Frank

    1992-01-01

    A Moment Method Model for the radiation pattern characterization of single Linearly Tapered Slot Antennas (LTSA) in air or on a dielectric substrate is developed. This characterization consists of: (1) finding the radiated far-fields of the antenna; (2) determining the E-Plane and H-Plane beamwidths and sidelobe levels; and (3) determining the D-Plane beamwidth and cross polarization levels, as antenna parameters length, height, taper angle, substrate thickness, and the relative substrate permittivity vary. The LTSA geometry does not lend itself to analytical solution with the given parameter ranges. Therefore, a computer modeling scheme and a code are necessary to analyze the problem. This necessity imposes some further objectives or requirements on the solution method (modeling) and tool (computer code). These may be listed as follows: (1) a good approximation to the real antenna geometry; and (2) feasible computer storage and time requirements. According to these requirements, the work is concentrated on the development of efficient modeling schemes for these type of problems and on reducing the central processing unit (CPU) time required from the computer code. A Method of Moments (MoM) code is developed for the analysis of LTSA's within the parameter ranges given.

  16. Computer-assisted total hip arthroplasty: coding the next generation of navigation systems for orthopedic surgery.

    PubMed

    Renkawitz, Tobias; Tingart, Markus; Grifka, Joachim; Sendtner, Ernst; Kalteis, Thomas

    2009-09-01

    This article outlines the scientific basis and state-of-the-art application of computer-assisted orthopedic surgery in total hip arthroplasty (THA) and provides a future perspective on this technology. Computer-assisted orthopedic surgery in primary THA has the potential to couple 3D simulations with real-time evaluations of surgical performance, which has brought these developments from the research laboratory all the way to clinical use. Imageless navigation systems, which require no additional pre- or intraoperative image acquisition, have been shown to significantly reduce the variability in positioning the acetabular component and to measure leg length and offset changes precisely during THA. More recently, computer-assisted orthopedic surgery systems have opened a new frontier for accurate surgical practice in minimally invasive, tissue-preserving THA. The future generation of imageless navigation systems will switch from simple measurement tasks to real navigation tools. These software algorithms will treat the cup and stem as components of a coupled biomechanical system, guiding the orthopedic surgeon toward an optimized complementary component orientation rather than isolated intraoperative target values, and are expected to have a high impact on clinical practice and postoperative functionality in modern THA.

  17. Analytic energy gradients for orbital-optimized MP3 and MP2.5 with the density-fitting approximation: An efficient implementation.

    PubMed

    Bozkaya, Uğur

    2018-03-15

    Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 and their standard versions with the density-fitting approximation, denoted DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare the computational cost with the conventional MP3, MP2.5, OMP3, and OMP2.5. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as computations of particle density and generalized Fock matrices (PDMs and GFM), solution of the Z-vector equation, back-transformations of PDMs and GFM, and evaluation of analytic gradients in the atomic orbital basis. Further, application results show that errors introduced by the DF approach are negligible. Mean absolute errors for bond lengths of a molecular set, with the cc-pCVQZ basis set, are 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.
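
    The density-fitting idea can be illustrated as a low-rank factorization: a four-index integral tensor, reshaped as a matrix over pair indices, is approximated by a product of three-index factors over a smaller auxiliary dimension. The sketch below uses a synthetic positive-semidefinite matrix with a decaying spectrum, not real electron-repulsion integrals:

```python
import numpy as np

# Density fitting rewrites four-index integrals (pq|rs) as sums over a small
# auxiliary index Q: (pq|rs) ~ sum_Q B[pq,Q] * B[rs,Q]. Generic low-rank
# sketch with a synthetic PSD matrix -- NOT a quantum-chemistry integral code.
rng = np.random.default_rng(1)
n = 6                                  # toy orbital basis; pair index runs over n*n = 36
npair = n * n
Qmat, _ = np.linalg.qr(rng.normal(size=(npair, npair)))
w_true = 2.0 ** -np.arange(npair)      # rapidly decaying spectrum (assumed)
V = (Qmat * w_true) @ Qmat.T           # synthetic PSD "(pq|rs)" matrix, 36 x 36

# Eigendecompose and keep only the naux largest modes -> fitted factors B.
w, U = np.linalg.eigh(V)
order = np.argsort(w)[::-1]
naux = 12                              # auxiliary dimension much smaller than n^2
B = U[:, order[:naux]] * np.sqrt(np.clip(w[order[:naux]], 0.0, None))

V_df = B @ B.T                         # density-fitted reconstruction
err = np.linalg.norm(V - V_df)
print(f"Frobenius error with {naux}/{npair} auxiliary functions: {err:.2e}")
```

    Storage drops from npair² to npair × naux entries, which is the source of the cost reductions the abstract reports.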

  18. Evidence of MAOA genotype involvement in spatial ability in males

    PubMed Central

    Mueller, Sven C.; Cornwell, Brian R.; Grillon, Christian; MacIntyre, Jessica; Gorodetsky, Elena; Goldman, David; Pine, Daniel S.; Ernst, Monique

    2014-01-01

    Although the Monoamine Oxidase-A (MAOA) gene has been linked to spatial learning and memory in animal models, convincing evidence in humans is lacking. Performance on an ecologically-valid, virtual computer-based equivalent of the Morris Water Maze task was compared between 28 healthy males with the low MAOA transcriptional activity and 41 healthy age- and IQ-matched males with the high MAOA transcriptional activity. The results revealed consistently better performance (reduced heading error, shorter path length, and reduced failed trials) for the high MAOA activity individuals relative to the low activity individuals. By comparison, groups did not differ on pre-task variables or strategic measures such as first-move latency. The results provide novel evidence of MAOA gene involvement in human spatial navigation using a virtual analogue of the Morris Water Maze task. PMID:24671068

  19. Reducing the length of postnatal hospital stay: implications for cost and quality of care.

    PubMed

    Bowers, John; Cheyne, Helen

    2016-01-15

    UK health services are under pressure to make cost savings while maintaining quality of care. Typically reducing the length of time patients stay in hospital and increasing bed occupancy are advocated to achieve service efficiency. Around 800,000 women give birth in the UK each year making maternity care a high volume, high cost service. Although average length of stay on the postnatal ward has fallen substantially over the years there is pressure to make still further reductions. This paper explores and discusses the possible cost savings of further reductions in length of stay, the consequences for postnatal services in the community, and the impact on quality of care. We draw on a range of pre-existing data sources including, national level routinely collected data, workforce planning data and data from national surveys of women's experience. Simulation and a financial model were used to estimate excess demand, work intensity and bed occupancy to explore the quantitative, organisational consequences of reducing the length of stay. These data are discussed in relation to findings of national surveys to draw inferences about potential impacts on cost and quality of care. Reducing the length of time women spend in hospital after birth implies that staff and bed numbers can be reduced. However, the cost savings may be reduced if quality and access to services are maintained. Admission and discharge procedures are relatively fixed and involve high cost, trained staff time. Furthermore, it is important to retain a sufficient bed contingency capacity to ensure a reasonable level of service. If quality of care is maintained, staffing and bed capacity cannot be simply reduced proportionately: reducing average length of stay on a typical postnatal ward by six hours or 17% would reduce costs by just 8%. This might still be a significant saving over a high volume service however, earlier discharge results in more women and babies with significant care needs at home. 
    Quality and safety of care would also require corresponding increases in community-based postnatal care. Simply reducing staffing in proportion to the length of stay increases the workload for each staff member, resulting in poorer quality of care and increased staff stress. Many policy debates, such as that about the length of postnatal hospital stay, demand consideration of multiple dimensions. This paper demonstrates how diverse data sources and techniques can be integrated to provide a more holistic analysis. Our study suggests that while earlier discharge from the postnatal ward may be achievable, it may not generate all of the anticipated cost savings. Some useful savings may be realised, but if staff and bed capacity are simply reduced in proportion to the length of stay, care quality may be compromised.
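
    The headline arithmetic (a 17% cut in length of stay yielding only an 8% saving) follows from splitting ward costs into fixed and bed-hour-dependent parts. The cost split and baseline stay below are assumptions chosen to reproduce the reported figures, not data from the study:

```python
# Hedged illustration of the paper's arithmetic: if roughly half of ward costs
# scale with occupied-bed hours and the rest (admission/discharge procedures,
# contingency capacity) are fixed, a 17% cut in length of stay yields a much
# smaller total saving. Both numbers below are assumptions, not study data.
baseline_los_hours = 36.0    # assumed average postnatal stay
variable_share = 0.47        # assumed share of cost scaling with bed-hours

def total_cost(los_hours):
    """Normalized cost model: baseline cost = 1.0."""
    fixed = 1.0 - variable_share
    variable = variable_share * (los_hours / baseline_los_hours)
    return fixed + variable

reduced = total_cost(baseline_los_hours * (1.0 - 0.17))
print(f"LOS cut: 17%, cost saving: {100 * (1.0 - reduced):.1f}%")
```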

  20. Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.

    PubMed

    Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y

    2011-05-01

    This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h during, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km·h⁻¹, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior changed significantly over the 24TR, with a 4.9% higher step frequency on average (driven by a significantly shorter contact time, -4.5%), a lower maximal GRF (by 4.4% on average), a 13.0% lower leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and may contribute to limiting the potentially harmful consequences of such a long-duration run on the subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running. © 2011 by the American College of Sports Medicine
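
    The spring-mass quantities above follow from a few step parameters; a minimal sketch using the standard definitions (step frequency from contact plus aerial time, vertical stiffness as peak GRF over downward CoM displacement), with placeholder values rather than the study's data:

```python
# Standard spring-mass-model computations for running. The input values are
# plausible placeholders, not measurements from the study.
contact_time = 0.30   # s, ground contact
aerial_time = 0.11    # s, flight
f_max = 1600.0        # N, peak vertical ground reaction force
delta_y = 0.055       # m, downward displacement of the centre of mass

step_frequency = 1.0 / (contact_time + aerial_time)   # Hz
k_vert = f_max / delta_y                              # N/m, vertical stiffness

print(f"step frequency:     {step_frequency:.2f} Hz")
print(f"vertical stiffness: {k_vert / 1000:.1f} kN/m")
```

    The study's observed shift (shorter contact, smaller displacement) raises both quantities, consistent with the reported stiffness increase.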

  1. Free-Space Quantum Signatures Using Heterodyne Measurements

    NASA Astrophysics Data System (ADS)

    Croal, Callum; Peuntinger, Christian; Heim, Bettina; Khan, Imran; Marquardt, Christoph; Leuchs, Gerd; Wallden, Petros; Andersson, Erika; Korolkova, Natalia

    2016-09-01

    Digital signatures guarantee the authorship of electronic communications. Currently used "classical" signature schemes rely on unproven computational assumptions for security, while quantum signatures rely only on the laws of quantum mechanics to sign a classical message. Previous quantum signature schemes have used unambiguous quantum measurements. Such measurements, however, sometimes give no result, reducing the efficiency of the protocol. Here, we instead use heterodyne detection, which always gives a result, although there is always some uncertainty. We experimentally demonstrate feasibility in a real environment by distributing signature states through a noisy 1.6 km free-space channel. Our results show that continuous-variable heterodyne detection improves the signature rate for this type of scheme and therefore represents an interesting direction in the search for practical quantum signature schemes. For transmission values ranging from 100% to 10%, but otherwise assuming an ideal implementation with no other imperfections, the signature length is shorter by a factor of 2 to 10. As compared with previous relevant experimental realizations, the signature length in this implementation is several orders of magnitude shorter.

  2. Effective side length formula for resonant frequency of equilateral triangular microstrip antenna

    NASA Astrophysics Data System (ADS)

    Guney, Kerim; Kurt, Erhan

    2016-02-01

    A novel and accurate expression for the effective side length (ESL) of the equilateral triangular microstrip antenna (ETMA) is obtained by employing the differential evolution algorithm. This formula allows antenna engineers to accurately calculate the ESL of the ETMA. The computed resonant frequencies (RFs) show very good agreement with the experimental RFs when this ESL formula is used to compute the RFs of the first five modes.
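
    For context, a commonly cited textbook expression relates the ETMA resonant frequency to an effective side length; this is not the paper's differential-evolution formula, and the example values below are assumptions:

```python
from math import sqrt

C = 299792458.0  # speed of light, m/s

def etma_resonant_frequency(a_eff, eps_r, m=1, n=0):
    """Textbook resonant frequency of an equilateral triangular patch,
    f = 2c / (3 * a_eff * sqrt(eps_r)) * sqrt(m^2 + m*n + n^2),
    with a_eff the effective side length in metres."""
    return 2.0 * C / (3.0 * a_eff * sqrt(eps_r)) * sqrt(m * m + m * n + n * n)

# Example (assumed values): 10 cm effective side, eps_r = 2.32, dominant TM10 mode.
f10 = etma_resonant_frequency(0.10, 2.32)
print(f"TM10 resonance: {f10 / 1e9:.3f} GHz")
```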

  3. Computer game as a tool for training the identification of phonemic length.

    PubMed

    Pennala, Riitta; Richardson, Ulla; Ylinen, Sari; Lyytinen, Heikki; Martin, Maisa

    2014-12-01

    Computer-assisted training of Finnish phonemic length was conducted with 7-year-old Russian-speaking second-language learners of Finnish. Phonemic length plays a different role in these two languages. The training included game activities with two- and three-syllable word and pseudo-word minimal pairs with prototypical vowel durations. The lowest accuracy scores were recorded for two-syllable words. Accuracy scores were higher for the minimal pairs with larger rather than smaller differences in duration. Accuracy scores were lower for long duration than for short duration. The ability to identify quantity degree was generalized to stimuli used in the identification test in two of the children. Ideas for improving the game are introduced.

  4. Adaptive frequency-domain equalization for the transmission of the fundamental mode in a few-mode fiber.

    PubMed

    Bai, Neng; Xia, Cen; Li, Guifang

    2012-10-08

    We propose and experimentally demonstrate single-carrier adaptive frequency-domain equalization (SC-FDE) to mitigate multipath interference (MPI) for the transmission of the fundamental mode in a few-mode fiber. The FDE approach reduces computational complexity significantly compared to the time-domain equalization (TDE) approach while maintaining the same performance. Both FDE and TDE methods are evaluated by simulating long-haul fundamental-mode transmission using a few-mode fiber. For the fundamental mode operation, the required tap length of the equalizer depends on the differential mode group delay (DMGD) of a single span rather than DMGD of the entire link.
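
    The complexity advantage of FDE over TDE can be sketched with standard operation counts (overlap-save processing with FFT size 2N and a precomputed filter spectrum); these are textbook estimates, not measurements of the authors' implementation:

```python
from math import log2

def tde_mults_per_sample(n_taps):
    """Direct FIR convolution: one multiply per tap per output sample."""
    return float(n_taps)

def fde_mults_per_sample(n_taps):
    """Overlap-save FDE with FFT size M = 2N and a precomputed filter
    spectrum: per block of N outputs, one forward FFT, one inverse FFT,
    and M pointwise products; each complex FFT costs ~(M/2) log2(M)
    multiplies. All counts are standard textbook estimates."""
    m = 2 * n_taps
    fft_cost = (m / 2) * log2(m)
    per_block = 2 * fft_cost + m
    return per_block / n_taps

for taps in (64, 1024, 16384):   # long equalizers, e.g. for large accumulated DMGD
    print(f"{taps:6d} taps: TDE {tde_mults_per_sample(taps):8.0f}  "
          f"FDE {fde_mults_per_sample(taps):6.1f} mults/sample")
```

    The gap widens with tap count, which is why FDE pays off for the long equalizers that large DMGD demands.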

  5. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    NASA Astrophysics Data System (ADS)

    Stupakov, Gennady; Zhou, Demin

    2016-04-01

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  6. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stupakov, Gennady; Zhou, Demin

    2016-04-21

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  7. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it retains the explicit computational cycle while conserving energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM), called the Energy Conserving Semi-Implicit Method (ECSIM). • The novelty of the new method is that, unlike any of its predecessors, it retains the explicit computational cycle while conserving energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length.
• These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.

  8. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    PubMed

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can also induce transient improvements to gait. The approach adopted here is to use computationally-generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult which in turn were used to synthesize realistic gravel-footstep sounds that represented different spatio-temporal parameters of gait, such as step duration and step length. The second method described involves a novel method of sonifying, in real time, the swing phase of gait using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step length variability. Potential future directions for how these sound approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  9. Reduced-order prediction of rogue waves in two-dimensional deep-water waves

    NASA Astrophysics Data System (ADS)

    Farazmand, Mohammad; Sapsis, Themistoklis P.

    2017-07-01

    We consider the problem of large wave prediction in two-dimensional water waves. Such waves form due to the synergistic effect of dispersive mixing of smaller wave groups and the action of localized nonlinear wave interactions that leads to focusing. Instead of a direct simulation approach, we rely on the decomposition of the wave field into a discrete set of localized wave groups with optimal length scales and amplitudes. Due to the short-term character of the prediction, these wave groups do not interact and therefore their dynamics can be characterized individually. Using direct numerical simulations of the governing envelope equations we precompute the expected maximum elevation for each of those wave groups. The combination of the wave field decomposition algorithm, which provides information about the statistics of the system, and the precomputed map for the expected wave group elevation, which encodes dynamical information, allows (i) an understanding of how the probability of occurrence of rogue waves changes as the spectrum parameters vary, (ii) the computation of a critical length scale characterizing wave groups with high probability of evolving to rogue waves, and (iii) the formulation of a robust and parsimonious reduced-order prediction scheme for large waves. We assess the validity of this scheme in several cases of ocean wave spectra.

  10. Rocket Combustion Modelling Test Case RCM-3. Numerical Calculation of MASCOTTE 60 bar Case with THESEE

    DTIC Science & Technology

    2001-03-01

    flame length is about 230 mm. Figure 10 shows three characteristic structures of a cryogenic flame: * A first expansion cone of length L1 = 15xDlox... correctly represented. However, the computed flame length is longer than the experimental data. This phenomenon is due to the droplet injection

  11. Modeling wildland fire containment with uncertain flame length and fireline width

    Treesearch

    Romain Mees; David Strauss; Richard Chase

    1993-01-01

    We describe a mathematical model for the probability that a fireline succeeds in containing a fire. The probability increases as the fireline width increases, and also as the fire's flame length decreases. More interestingly, uncertainties in width and flame length affect the computed containment probabilities, and can thus indirectly affect the optimum allocation...
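
    A hedged Monte Carlo sketch of the qualitative behavior described (containment probability rising with fireline width and falling with flame length, with uncertainty in both): the success criterion and distributions below are illustrative assumptions, not the authors' model:

```python
import random

# Monte Carlo sketch of containment probability under uncertain fireline
# width and flame length. The Gaussian uncertainty and the width > 1.5*flame
# success rule are illustrative assumptions, not the paper's model.
def containment_probability(mean_width, mean_flame, sd=0.3, trials=20000, seed=7):
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        width = max(0.0, rng.gauss(mean_width, sd))   # fireline width, m
        flame = max(0.0, rng.gauss(mean_flame, sd))   # flame length, m
        if width > 1.5 * flame:                       # assumed containment criterion
            successes += 1
    return successes / trials

p_wide = containment_probability(mean_width=4.0, mean_flame=2.0)
p_narrow = containment_probability(mean_width=2.5, mean_flame=2.0)
print(f"wide line:   {p_wide:.3f}")
print(f"narrow line: {p_narrow:.3f}")
```

    Varying `sd` shows how uncertainty alone shifts the computed probability, which is the model's more interesting point.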

  12. Application of Pinniped Vibrissae to Aeropropulsion

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram (Principal Investigator); Ameri, Ali; Poinsatte, Phil; Thurman, Doug; Wroblewski, Adam; Snyder, Chris

    2015-01-01

    Vibrissae of Phoca vitulina (harbor seal) and Mirounga angustirostris (elephant seal) possess undulations along their length. Harbor seal vibrissae were shown to reduce vortex-induced vibrations and reduce drag compared to appropriately scaled cylinders and ellipses. Samples of harbor seal, elephant seal, and California sea lion vibrissae were collected from the Marine Mammal Center in California. CT scanning, microscopy, and 3D scanning techniques were utilized to characterize the whiskers. Computational fluid dynamics simulations of the whiskers were carried out to compare them to an ellipse and a cylinder. Leading edge parameters from the whiskers were used to create a 3D profile based on a modern power turbine blade. The NASA SW-2 facility was used to perform wind tunnel cascade testing on the 'Seal Blades'. Computational fluid dynamics simulations were used to study the effect of incidence angles from -37 to +10 degrees on the aerodynamic performance of the Seal Blade. The tests and simulations were conducted at a Reynolds number of 100,000. The Seal Blades showed consistent performance improvements over the baseline configuration. It was determined that a fuel burn reduction of approximately 5 percent could be achieved for a fixed-wing aircraft. Noise reduction potential is also explored.

  13. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
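
    The core ingredient of such a spectral approach can be illustrated with a short sketch (a hedged illustration only: the model function `u`, the node count, and the truncation degree below are arbitrary stand-ins, not values from the paper). An expensive model evaluated at a handful of Chebyshev nodes yields a semi-analytic surrogate that is cheap to sample for uncertainty propagation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def u(p):
    # Stand-in for an expensive simulation output over a parameter p in [-1, 1]
    return np.exp(-p) * np.sin(3 * p)

# Sample the model at Chebyshev-Gauss nodes and fit a truncated series
nodes = np.cos(np.pi * (np.arange(33) + 0.5) / 33)
coeffs = C.chebfit(nodes, u(nodes), deg=16)

# The surrogate depends explicitly (and cheaply) on the input parameter,
# so uncertainty in p can be propagated by plain Monte Carlo sampling:
p_samples = np.random.default_rng(0).uniform(-1.0, 1.0, 10_000)
surrogate = C.chebval(p_samples, coeffs)
max_err = np.max(np.abs(surrogate - u(p_samples)))
print(max_err)  # spectral accuracy: the surrogate is essentially exact here
```

    The single fit plays the role of the paper's "single reduced-cost simulation": once the coefficients are known, statistics of the output under input uncertainty come almost for free.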

  14. Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hales, J. D.; Tonks, M. R.; Chockalingam, K.

    2015-03-01

    Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.

  15. Laboratory modeling and analysis of aircraft-lightning interactions

    NASA Technical Reports Server (NTRS)

    Turner, C. D.; Trost, T. F.

    1982-01-01

    Modeling studies of the interaction of a delta-wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, is subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements are made of the resulting transient electromagnetic fields using time-derivative sensors. The sensor outputs are sampled and digitized by computer. The noise level is reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields includes Fourier transformation and the computation of transfer functions for the model. Prony analysis is also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those for in-flight direct-strike data usually show lower damping in the in-flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.

  16. Unilateral total hip replacement patients with symptomatic leg length inequality have abnormal hip biomechanics during walking.

    PubMed

    Li, Junyan; McWilliams, Anthony B; Jin, Zhongmin; Fisher, John; Stone, Martin H; Redmond, Anthony C; Stewart, Todd D

    2015-06-01

    Symptomatic leg length inequality accounts for 8.7% of total hip replacement related claims made against the UK National Health Service Litigation Authority. It has not been established whether symptomatic leg length inequality patients following total hip replacement have abnormal hip kinetics during gait. Hip kinetics during gait in 15 unilateral total hip replacement patients with symptomatic leg length inequality were determined through multibody dynamics and compared to those of 15 healthy controls with native hips and 15 'successful' asymptomatic unilateral total hip replacement patients. More significant differences from normal were found in symptomatic leg length inequality patients than in asymptomatic total hip replacement patients. The leg length inequality patients had altered function, defined by lower gait velocity, reduced stride length, reduced ground reaction force, decreased hip range of motion, reduced hip moment and a less dynamic hip force, with a 24% lower heel-strike peak, 66% higher mid-stance trough and 37% lower toe-off peak. Greater asymmetry in hip contact force was also observed in leg length inequality patients. These gait adaptations may affect the function of the implant and other healthy joints in symptomatic leg length inequality patients. This study provides important information for the musculoskeletal function and rehabilitation of symptomatic leg length inequality patients. Copyright © 2015. Published by Elsevier Ltd.

  17. Optimizing X-ray mirror thermal performance using matched profile cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Cocco, Daniele; Kelez, Nicholas

    2015-08-07

    To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.

  18. Orbitals for classical arbitrary anisotropic colloidal potentials

    NASA Astrophysics Data System (ADS)

    Girard, Martin; Nguyen, Trung Dac; de la Cruz, Monica Olvera

    2017-11-01

    Coarse-grained potentials are ubiquitous in mesoscale simulations. While various methods to compute effective interactions for spherically symmetric particles exist, anisotropic interactions are seldom used, due to their complexity. Here we describe a general formulation, based on a spatial decomposition of the density fields around the particles, akin to atomic orbitals. We show that anisotropic potentials can be efficiently computed in numerical simulations using Fourier-based methods. We validate the field formulation and characterize its computational efficiency with a system of colloids that have Gaussian surface charge distributions. We also investigate the phase behavior of charged Janus colloids immersed in screened media, with screening lengths comparable to the colloid size. The system shows rich behaviors, exhibiting vapor, liquid, gel, and crystalline morphologies, depending on temperature and screening length. The crystalline phase only appears for symmetric Janus particles. For very short screening lengths, the system undergoes a direct transition from a vapor to a crystal on cooling; while, for longer screening lengths, a vapor-liquid-crystal transition is observed. The proposed formulation can be extended to model force fields that are time or orientation dependent, such as those in systems of polymer-grafted particles and magnetic colloids.

  19. Record length requirement of long-range dependent teletraffic

    NASA Astrophysics Data System (ADS)

    Li, Ming

    2017-04-01

    This article contributes in two main respects. First, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). Second, it proposes two formulas for computing the upper bound of the variance of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). Together, these may constitute a reference guideline for the record length required for traffic with LRD. In addition, the record length required for the correlation periodogram measurement of traffic with either the Schuster-type or the Bartlett-type periodogram is studied, and the present results show that both types of periodogram may be used for the correlation measurement of traffic with a pre-desired variance bound on the correlation estimation. Moreover, real traffic traces from the Internet archive of the Association for Computing Machinery's Special Interest Group on Data Communication (ACM SIGCOMM) are analyzed in a case study.
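
    The fGn correlation structure at issue here can be written down directly. A minimal sketch (this is the standard fGn autocovariance; the Hurst values chosen below are purely illustrative):

```python
import numpy as np

def fgn_acf(k, H):
    """Autocorrelation of fractional Gaussian noise at lag k (0 < H < 1).
    For H > 0.5 the tail decays like k^(2H-2), so the correlations are
    non-summable: the defining property of long-range dependence (LRD)."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

lags = np.arange(6)
print(fgn_acf(lags, H=0.8))  # slow power-law decay: LRD traffic
print(fgn_acf(lags, H=0.5))  # H = 0.5 reduces to white noise (zero beyond lag 0)
```

    The slower this function decays, the longer the record needed before an empirical correlogram stabilizes, which is exactly the record-length question the article quantifies.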

  20. Interactions between pool geometry and hydraulics

    USGS Publications Warehouse

    Thompson, Douglas M.; Nelson, Jonathan M.; Wohl, Ellen E.

    1998-01-01

    An experimental and computational research approach was used to determine interactions between pool geometry and hydraulics. A 20-m-long, 1.8-m-wide flume was used to investigate the effect of four different geometric aspects of pool shape on flow velocity. Plywood sections were used to systematically alter constriction width, pool depth, pool length, and pool exit-slope gradient, each at two separate levels. Using the resulting 16 unique geometries with measured pool velocities in four-way factorial analyses produced an empirical assessment of the role of the four geometric aspects on the pool flow patterns and hence the stability of the pool. To complement the conclusions of these analyses, a two-dimensional computational flow model was used to investigate the relationships between pool geometry and flow patterns over a wider range of conditions. Both experimental and computational results show that constriction and depth effects dominate in the jet section of the pool and that pool length exhibits an increasing effect within the recirculating-eddy system. The pool exit slope appears to force flow reattachment. Pool length controls recirculating-eddy length and vena contracta strength. In turn, the vena contracta and recirculating eddy control velocities throughout the pool.

  1. Numerical prediction of algae cell mixing feature in raceway ponds using particle tracing methods.

    PubMed

    Ali, Haider; Cheema, Taqi A; Yoon, Ho-Sung; Do, Younghae; Park, Cheol W

    2015-02-01

    In the present study, a novel technique, which involves numerical computation of the mixing length of algae particles in raceway ponds, was used to evaluate the mixing process. A value of mixing length higher than the maximum streamwise distance (MSD) of algae cells indicates that the cells experienced adequate turbulent mixing in the pond. A coupling methodology was adapted in this study to map the pulsating effects of a 2D paddle wheel onto a 3D raceway pond. The turbulent mixing was examined based on computations of mixing length, residence time, and algae cell distribution in the pond. The results revealed that the particle tracing methodology is an improved approach that defines the mixing phenomenon more effectively. Moreover, the algae cell distribution aided in identifying the degree of mixing in terms of mixing length and residence time. © 2014 Wiley Periodicals, Inc.

  2. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, installation and alignment must be performed twice during the test procedure. The large-aperture test system for infrared and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, which also improves the image quality of the large-field-of-view collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, so it has good market prospects.

  3. Method and System for Determining Relative Displacement and Heading for Navigation

    NASA Technical Reports Server (NTRS)

    Sheikh, Suneel Ismail (Inventor); Pines, Darryll J. (Inventor); Conroy, Joseph Kim (Inventor); Spiridonov, Timofey N. (Inventor)

    2015-01-01

    A system and method for determining a location of a mobile object is provided. The system determines the location of the mobile object by determining distances between a plurality of sensors provided on a first and second movable parts of the mobile object. A stride length, heading, and separation distance between the first and second movable parts are computed based on the determined distances and the location of the mobile object is determined based on the computed stride length, heading, and separation distance.
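
    The geometry behind such a computation can be sketched in two dimensions (an illustrative simplification, not the patented method: two sensors a known baseline apart on one movable part, and one range-measured sensor on the other):

```python
import math

def relative_position(d1, d2, b):
    """Locate a sensor on the trailing part relative to two sensors on the
    leading part, separated by a known baseline b, from the two measured
    ranges d1 and d2 (planar trilateration; a hedged 2D simplification).
    Returns (separation_distance, heading_rad) from the first sensor."""
    x = (d1**2 - d2**2 + b**2) / (2 * b)          # along-baseline coordinate
    y = math.sqrt(max(d1**2 - x**2, 0.0))          # off-baseline coordinate
    return math.hypot(x, y), math.atan2(y, x)

# Example: baseline 0.2 m, measured ranges 0.75 m and 0.80 m
stride, heading = relative_position(0.75, 0.80, 0.20)
print(stride, heading)
```

    Repeating this at each step yields the stride length and heading from which the dead-reckoned position is accumulated.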

  4. Software For Computer-Security Audits

    NASA Technical Reports Server (NTRS)

    Arndt, Kate; Lonsford, Emily

    1994-01-01

    Information relevant to potential breaches of security gathered efficiently. Automated Auditing Tools for VAX/VMS program includes following automated software tools performing noted tasks: Privileged ID Identification, program identifies users and their privileges to circumvent existing computer security measures; Critical File Protection, critical files not properly protected identified; Inactive ID Identification, identifications of users no longer in use found; Password Lifetime Review, maximum lifetimes of passwords of all identifications determined; and Password Length Review, minimum allowed length of passwords of all identifications determined. Written in DEC VAX DCL language.
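
    The Password Length Review check amounts to comparing each identification's minimum allowed password length against a site policy. A modern sketch (in Python rather than the original DEC VAX DCL; the policy value and account data below are invented for illustration):

```python
MIN_LENGTH_POLICY = 12  # assumed site policy, not from the original tools

def password_length_review(accounts):
    """Flag identifications whose minimum allowed password length falls
    below policy, mirroring the 'Password Length Review' audit task.
    `accounts` maps identification name -> minimum allowed length."""
    return [name for name, min_len in accounts.items()
            if min_len < MIN_LENGTH_POLICY]

print(password_length_review({"SYSTEM": 8, "OPS": 14, "GUEST": 6}))
# → ['SYSTEM', 'GUEST']
```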

  5. A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.

    PubMed

    Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi

    2015-12-01

    Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by key-value pairs. Further, in view of the dynamic characteristics of the changing data, we give the concept of influence degree to measure the coincidence of the current BN with new data, and then propose the corresponding two-pass MapReduce-based algorithms for incremental learning of BNs. Experimental results show the efficiency, scalability, and effectiveness of our methods.
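
    The per-family MDL score that the MapReduce passes compute can be sketched on a single machine (a hedged illustration: the variable arities and toy sample below are invented, and the distributed counting is replaced by an in-memory `Counter`):

```python
import math
from collections import Counter

def mdl_score(data, child, parents, arity):
    """MDL score of one node given a candidate parent set, from counts over
    the sample: maximum log-likelihood minus (log N / 2) * #free parameters.
    Higher is better; the penalty discourages overly dense structures."""
    n = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_counts[pa])
                 for (pa, _), c in joint.items())
    n_params = (arity[child] - 1) * math.prod(arity[p] for p in parents)
    return loglik - 0.5 * math.log(n) * n_params

# Toy binary data (X0, X1, X2): X0 determines X1, X2 is noise
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1)]
arity = {0: 2, 1: 2, 2: 2}
print(mdl_score(data, child=1, parents=(0,), arity=arity) >
      mdl_score(data, child=1, parents=(2,), arity=arity))  # → True
```

    A hill-climbing search then repeatedly applies the edge change whose family rescoring most improves this total, which is the step the paper parallelizes.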

  6. A validation of the fibre orientation and fibre length attrition prediction for long fibre-reinforced thermoplastics

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Weber, M.; van Haag, J.; Schöngart, M.

    2015-05-01

    To improve the mechanical performance of polymeric parts, fibre reinforcement has become established in industrial applications during the last decades. Alongside the widely used Short Fibre-reinforced Thermoplastics (SFT), the use of Long Fibre-reinforced Thermoplastics (LFT) is growing steadily. Especially for non-polar polymeric matrices like polypropylene (PP), longer fibres can significantly improve the mechanical performance. As with every kind of discontinuous fibre reinforcement, the fibre orientation distribution (FOD) has a high impact on the mechanical properties. In contrast to SFT, where the local fibre length distribution (FLD) can often be neglected, for LFT the FLD has a high impact on the material's properties and has to be taken into account in equal measure to the FOD. Recently, numerical models have become available in commercial filling simulation software that allow predicting both the local FOD and FLD in LFT parts. The aim of this paper is to compare (i) the FOD results and (ii) the FLD results from the available orientation and fibre length attrition models with those obtained from experimental data. The investigations are conducted on different injection-moulded specimens made from long glass fibre-reinforced PP. In order to determine the FOD, selected part sections are examined by means of Computed Tomography (CT) analyses. The fully three-dimensional measurement of the FOD is then performed by digital image processing using grey-scale correlation. The FLD results are also obtained by digital image processing after a thermal pyrolytic separation of the polymeric matrix from the fibres. Further, the FOD and FLD are predicted using a reduced strain closure (RSC) model, an anisotropic rotary diffusion-reduced strain closure (ARD-RSC) model, and the Phelps-Tucker fibre length attrition model implemented in the commercial filling software Moldflow, Autodesk Inc., San Rafael, CA, USA.

  7. A validation of the fibre orientation and fibre length attrition prediction for long fibre-reinforced thermoplastics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopmann, Ch.; Weber, M.; Haag, J. van

    2015-05-22

    To improve the mechanical performance of polymeric parts, fibre reinforcement has become established in industrial applications during the last decades. Alongside the widely used Short Fibre-reinforced Thermoplastics (SFT), the use of Long Fibre-reinforced Thermoplastics (LFT) is growing steadily. Especially for non-polar polymeric matrices like polypropylene (PP), longer fibres can significantly improve the mechanical performance. As with every kind of discontinuous fibre reinforcement, the fibre orientation distribution (FOD) has a high impact on the mechanical properties. In contrast to SFT, where the local fibre length distribution (FLD) can often be neglected, for LFT the FLD has a high impact on the material's properties and has to be taken into account in equal measure to the FOD. Recently, numerical models have become available in commercial filling simulation software that allow predicting both the local FOD and FLD in LFT parts. The aim of this paper is to compare (i) the FOD results and (ii) the FLD results from the available orientation and fibre length attrition models with those obtained from experimental data. The investigations are conducted on different injection-moulded specimens made from long glass fibre-reinforced PP. In order to determine the FOD, selected part sections are examined by means of Computed Tomography (CT) analyses. The fully three-dimensional measurement of the FOD is then performed by digital image processing using grey-scale correlation. The FLD results are also obtained by digital image processing after a thermal pyrolytic separation of the polymeric matrix from the fibres. Further, the FOD and FLD are predicted using a reduced strain closure (RSC) model, an anisotropic rotary diffusion-reduced strain closure (ARD-RSC) model, and the Phelps-Tucker fibre length attrition model implemented in the commercial filling software Moldflow, Autodesk Inc., San Rafael, CA, USA.

  8. Analytic Energy Gradients for Variational Two-Electron Reduced-Density-Matrix-Driven Complete Active Space Self-Consistent Field Theory.

    PubMed

    Maradzike, Elvis; Gidofalvi, Gergely; Turney, Justin M; Schaefer, Henry F; DePrince, A Eugene

    2017-09-12

    Analytic energy gradients are presented for a variational two-electron reduced-density-matrix (2-RDM)-driven complete active space self-consistent field (CASSCF) method. The active-space 2-RDM is determined using a semidefinite programming (SDP) algorithm built upon an augmented Lagrangian formalism. Expressions for analytic gradients are simplified by the fact that the Lagrangian is stationary with respect to variations in both the primal and the dual solutions to the SDP problem. Orbital response contributions to the gradient are identical to those that arise in conventional CASSCF methods in which the electronic structure of the active space is described by a full configuration interaction (CI) wave function. We explore the relative performance of variational 2-RDM (v2RDM)- and CI-driven CASSCF for the equilibrium geometries of 20 small molecules. When enforcing two-particle N-representability conditions, full-valence v2RDM-CASSCF-optimized bond lengths display a mean unsigned error of 0.0060 Å and a maximum unsigned error of 0.0265 Å, relative to those obtained from full-valence CI-CASSCF. When enforcing partial three-particle N-representability conditions, the mean and maximum unsigned errors are reduced to only 0.0006 and 0.0054 Å, respectively. For these same molecules, full-valence v2RDM-CASSCF bond lengths computed in the cc-pVQZ basis set deviate from experimentally determined ones on average by 0.017 and 0.011 Å when enforcing two- and three-particle conditions, respectively, whereas CI-CASSCF displays an average deviation of 0.010 Å. The v2RDM-CASSCF approach with two-particle conditions is also applied to the equilibrium geometry of pentacene; optimized bond lengths deviate from those derived from experiment, on average, by 0.015 Å when using a cc-pVDZ basis set and a (22e,22o) active space.

  9. A nomograph for the computation of the growth of fish from scale measurements

    USGS Publications Warehouse

    Hile, Ralph

    1950-01-01

    Directions are given for the construction and operation of a nomograph that can be employed for the computation of the growth of fish from scale measurements regardless of the nature of the body-scale relationship, so long as that relationship is known. The essential feature of the nomograph that makes rapid calculations possible is a ruler on which the graduations are in terms of length, with the distance of each length graduation from the 0 graduation equal to the corresponding theoretical scale measurement. The chief advantage of the nomograph lies in the fact that the calculation of the lengths for all years of life of an individual fish requires only one setting of the single movable part.
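
    The calculation the nomograph mechanizes is straightforward to state for the common linear body-scale relationship (a hedged example: the Fraser-Lee intercept form used here is one standard choice, and all numbers below are invented):

```python
def back_calculated_length(L_capture, S_capture, S_annulus, c=0.0):
    """Length of a fish when the annulus at scale radius S_annulus formed,
    given length L_capture and scale radius S_capture at capture, assuming
    a linear body-scale relationship with intercept c (Fraser-Lee form;
    c = 0 gives the simple direct-proportion rule)."""
    return c + (L_capture - c) * S_annulus / S_capture

# A fish 300 mm long at capture with scale radius 4.0 units, annuli at
# radii 1.0, 2.5 and 3.5 units, body-scale intercept 25 mm:
for s in (1.0, 2.5, 3.5):
    print(back_calculated_length(300, 4.0, s, c=25))
```

    One "setting of the movable part" corresponds to fixing L_capture and S_capture; each annulus radius then reads off a length directly.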

  10. Comparison of computer-assisted surgery with conventional technique for the treatment of axial distal phalanx fractures in horses: an in vitro study.

    PubMed

    Andritzky, Juliane; Rossol, Melanie; Lischer, Christoph; Auer, Joerg A

    2005-01-01

    To compare the precision obtained with computer-assisted screw insertion for treatment of mid-sagittal articular fractures of the distal phalanx (P3) with results achieved with a conventional technique. In vitro experimental study. Thirty-two cadaveric equine limbs. Four groups of 8 limbs were studied. Either 1 or 2 screws were inserted perpendicular to an imaginary axial fracture of P3 using computer-assisted surgery (CAS) or the conventional technique. Screw insertion time, predetermined screw length, inserted screw length, fit of the screw, and errors in placement were recorded. The CAS technique took 15-20 minutes longer but resulted in greater precision of screw length and placement compared with the conventional technique. The improved precision in screw insertion with CAS makes insertion of 2 screws possible for repair of mid-sagittal P3 fractures. CAS, although expensive, improves precision in screw insertion into P3 and consequently should yield improved clinical outcomes.

  11. Computer analysis of three-dimensional morphological characteristics of the bile duct

    NASA Astrophysics Data System (ADS)

    Ma, Jinyuan; Chen, Houjin; Peng, Yahui; Shang, Hua

    2017-01-01

    In this paper, a computer image-processing algorithm for analyzing the morphological characteristics of bile ducts in Magnetic Resonance Cholangiopancreatography (MRCP) images was proposed. The algorithm consisted of mathematical morphology methods, including erosion, closing and skeletonization, and a spline curve fitting method to obtain the length and curvature of the center line of the bile duct. Across the 10 cases, the average length of the bile duct was 14.56 cm, and the maximum curvature was in the range of 0.111 to 2.339. These experimental results show that using the computer image-processing algorithm to assess the morphological characteristics of the bile duct is feasible, and further research is needed to evaluate its potential clinical value.
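
    The length-and-curvature step can be sketched with finite differences standing in for the paper's spline fit (NumPy only; the circular test curve is just a sanity check, not patient data):

```python
import numpy as np

def centerline_curvature(x, y):
    """Pointwise curvature of a planar centerline sampled as (x, y) arrays,
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), using finite-difference
    derivatives; the formula is invariant to the sampling parameterization."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def centerline_length(x, y):
    """Arc length of the sampled centerline (sum of segment lengths)."""
    return np.hypot(np.diff(x), np.diff(y)).sum()

# Sanity check on a half-circle of radius 10: curvature should be ~1/10
t = np.linspace(0, np.pi, 500)
x, y = 10 * np.cos(t), 10 * np.sin(t)
print(centerline_curvature(x, y)[250], centerline_length(x, y))  # ≈ 0.1, ≈ 10π
```

    In the paper the (x, y) samples would come from the skeletonized duct; a smoothing spline before differentiation suppresses the pixel-level noise that finite differences amplify.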

  12. The influence of uncemented femoral stem length and design on its primary stability: a finite element analysis.

    PubMed

    Reimeringer, M; Nuño, N; Desmarais-Trépanier, C; Lavigne, M; Vendittoli, P A

    2013-01-01

    One of the crucial factors for short- and long-term clinical success of cementless total hip arthroplasty implants is primary stability. Indeed, motion at the bone-implant interface above 40 μm leads to partial bone ingrowth, while motion exceeding 150 μm completely inhibits bone ingrowth. The aim of this study was to investigate the effect of two cementless femoral stem designs with different lengths on primary stability. A finite element model of a fourth-generation composite Sawbones(®) femur, implanted with five lengths of the straight prosthesis design and four lengths of the curved prosthesis design, was loaded with hip joint and abductor forces representing two physiological activities: fast walking and stair climbing. We found that reducing the straight stem length from 146 to 54 mm increased the average micromotion from 17 to 52 μm during fast walking, while the peak value increased from 42 to 104 μm. With the curved stem, reducing the length from 105 to 54 mm increased the average micromotion from 10 to 29 μm, while the peak value increased from 37 to 101 μm. Similar findings were obtained for stair climbing with both stems. Although the present study showed that femoral stem length as well as stem design directly influences primary stability, for the two femoral stems tested, length could be reduced substantially without compromising primary stability. With the aim of minimising surgical invasiveness, newer femoral stem designs and currently well-performing stems might be used with a reduced length without compromising primary stability and, hence, long-term survivorship.
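
    The ingrowth thresholds quoted above translate directly into a classification rule (a trivial but faithful restatement of the 40 μm / 150 μm criteria from the abstract):

```python
def ingrowth_regime(micromotion_um):
    """Classify bone-implant interface micromotion (micrometres) against
    the cited thresholds: up to 40 um permits bone ingrowth, 40-150 um
    yields only partial ingrowth, above 150 um ingrowth is inhibited."""
    if micromotion_um <= 40:
        return "ingrowth possible"
    if micromotion_um <= 150:
        return "partial ingrowth"
    return "ingrowth inhibited"

# Peak micromotion of the short (54 mm) straight stem during fast walking:
print(ingrowth_regime(104))  # → partial ingrowth
```

    By this criterion even the shortest stems studied stay below the 150 μm inhibition threshold at their peaks, which is the basis of the paper's conclusion.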

  13. Reduced vision in highly myopic eyes without ocular pathology: the ZOC-BHVI high myopia study.

    PubMed

    Jong, Monica; Sankaridurg, Padmaja; Li, Wayne; Resnikoff, Serge; Naidoo, Kovin; He, Mingguang

    2018-01-01

    The aim was to investigate the relationship of the magnitude of myopia with visual acuity in highly myopic eyes without ocular pathology. Twelve hundred and ninety-two highly myopic eyes (up to -6.00 DS both eyes, no astigmatic cut-off) with no ocular pathology from the ZOC-BHVI high myopia study in China, had cycloplegic refraction, followed by subjective refraction and visual acuities and axial length measurement. Two logistic regression models were undertaken to test the association of age, gender, refractive error, axial length and parental myopia with reduced vision. Mean group age was 19.0 ± 8.6 years; subjective spherical equivalent refractive error was -9.03 ± 2.73 D; objective spherical equivalent refractive error was -8.90 ± 2.60 D and axial length was 27.0 ± 1.3 mm. Using visual acuity, 82.4 per cent had normal vision, 16.0 per cent had mildly reduced vision, 1.2 per cent had moderately reduced vision, 0.3 per cent had severely reduced vision and no subjects were blind. The percentage with reduced vision increased with spherical equivalent to 74.5 per cent from -15.00 to -39.99 D, axial length to 67.7 per cent of eyes from 30.01 to 32.00 mm and age to 22.9 per cent of those 41 years and over. Spherical equivalent and axial length were significantly associated with reduced vision (p < 0.0001). Age and parental myopia were not significantly associated with reduced vision. Gender was significant for one model (p = 0.04). Mildly reduced vision is common in high myopia without ocular pathology and is strongly correlated with greater magnitudes of refractive error and axial length. Better understanding is required to minimise reduced vision in high myopes. © 2017 Optometry Australia.

  14. Required length of guardrails before hazards.

    PubMed

    Tomasch, E; Sinz, W; Hoschopf, H; Gobald, M; Steffan, H; Nadler, B; Nadler, F; Strnad, B; Schneider, F

    2011-11-01

    One way to protect against impacts during run-off-road accidents with infrastructure is the use of guardrails. However, real-world accidents indicate that vehicles can leave the road and end up behind the guardrail. These vehicles have no possibility of returning to the lane. Vehicles often end up behind the guardrail because the length of the guardrails installed before hazards is too short; this can lead to a collision with a shielded hazard. To identify the basic speed for determining the necessary length of guardrails, we analyzed the speed at which vehicles leave the roadway from the ZEDATU (Zentrale Datenbank Tödlicher Unfälle) real-world accident database. The required length of guardrail was considered to be the length that reduces vehicle speed, at a maximum theoretically possible deceleration of 0.3g behind the barrier, based on real-world road departure speed. To determine the desired length of a guardrail ahead of a hazard, we developed a relationship between guardrail length and the speed at which vehicles depart the roadway. If the initial elements are flared away from the carriageway, the required length will be reduced by up to an additional 30%. The ZEDATU database analysis showed that extending the current length of guardrails to the evaluated required length would reduce the number of fatalities among occupants of vehicles striking bridge abutments by approximately eight percent. Copyright © 2011 Elsevier Ltd. All rights reserved.
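
    The required length described above is essentially a stopping-distance computation. A minimal sketch, assuming the 0.3g deceleration and the up-to-30% flare reduction quoted in the abstract (the function name and the example speed are illustrative, not from the paper):

```python
# Hedged sketch: required guardrail length taken as the stopping distance
# behind the barrier at a constant 0.3 g deceleration, per the abstract.
G = 9.81  # gravitational acceleration, m/s^2

def required_guardrail_length(departure_speed_kmh, flared=False):
    """Stopping distance v^2 / (2 * 0.3 g); optionally 30% shorter when
    the initial elements are flared away from the carriageway."""
    v = departure_speed_kmh / 3.6      # km/h -> m/s
    length = v ** 2 / (2 * 0.3 * G)
    if flared:
        length *= 0.7                  # "up to an additional 30%" reduction
    return length

print(round(required_guardrail_length(100.0), 1))  # 131.1 (metres)
```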

  15. Searching for discrimination rules in protease proteolytic cleavage activity using genetic programming with a min-max scoring function.

    PubMed

    Yang, Zheng Rong; Thomson, Rebecca; Hodgman, T Charles; Dry, Jonathan; Doyle, Austin K; Narayanan, Ajit; Wu, XiKun

    2003-11-01

    This paper presents an algorithm which is able to extract discriminant rules from oligopeptides for protease proteolytic cleavage activity prediction. The algorithm is developed using genetic programming. Three important components in the algorithm are a min-max scoring function, the reverse Polish notation (RPN) and the use of minimum description length. The min-max scoring function is developed using amino acid similarity matrices for measuring the similarity between an oligopeptide and a rule, which is a complex algebraic equation of amino acids rather than a simple pattern sequence. The Fisher ratio is then calculated on the scoring values using the class label associated with the oligopeptides. The discriminant ability of each rule can therefore be evaluated. The use of RPN makes the evolutionary operations simpler and therefore reduces the computational cost. To prevent overfitting, the concept of minimum description length is used to penalize over-complicated rules. A fitness function is therefore composed of the Fisher ratio and the use of minimum description length for an efficient evolutionary process. In the application to four protease datasets (Trypsin, Factor Xa, Hepatitis C Virus and HIV protease cleavage site prediction), our algorithm is superior to C5, a conventional method for deriving decision trees.
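
    The fitness construction described above can be sketched as follows. The min-max scoring function itself is not reproduced; the scores, the `mdl_weight` penalty, and the function names are illustrative assumptions, with the Fisher ratio computed on rule scores split by class label:

```python
# Hedged sketch of the Fisher-ratio-plus-MDL fitness described above.
# pos_scores/neg_scores are min-max scores of one rule against cleaved and
# non-cleaved oligopeptides; a larger Fisher ratio means better separation.
import statistics

def fisher_ratio(pos_scores, neg_scores):
    m1, m2 = statistics.mean(pos_scores), statistics.mean(neg_scores)
    v1, v2 = statistics.pvariance(pos_scores), statistics.pvariance(neg_scores)
    return (m1 - m2) ** 2 / (v1 + v2)

def fitness(pos_scores, neg_scores, rule_length, mdl_weight=0.1):
    # Minimum-description-length-style penalty on rule complexity;
    # mdl_weight is an illustrative knob, not the paper's value.
    return fisher_ratio(pos_scores, neg_scores) - mdl_weight * rule_length
```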

  16. An improved model for whole genome phylogenetic analysis by Fourier transform.

    PubMed

    Yin, Changchuan; Yau, Stephen S-T

    2015-10-07

    DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignments (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes. It is also limited by its computational complexity for comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information contents of DNA sequences by Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into 2-dimensional (2D) numerical sequences and then apply DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor, and the 2D mapping reduces nucleotide composition bias in the distance measure, thus improving the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences of different lengths, we propose an improved even scaling algorithm to extend shorter DFT power spectra to the longest length of the underlying sequences. After the DFT power spectra are evenly scaled, the spectra have the same dimensionality in the Fourier frequency space, and the Euclidean distances of the full Fourier power spectra of the DNA sequences are used as the dissimilarity metrics. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences of any length range. We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences, including simulated and real datasets.
The method yields accurate and reliable phylogenetic trees and demonstrates that the improved DFT dissimilarity measure is an efficient and effective similarity measure of DNA sequences. Due to its high efficiency and accuracy, the proposed DFT similarity measure has been successfully applied to phylogenetic analysis of individual genes and large whole bacterial genomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
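
    A minimal sketch of this style of alignment-free DFT comparison, assuming a simplified one-dimensional complex base mapping and a floor-index even-scaling rule in place of the paper's exact 2D mapping and scaling algorithm:

```python
# Hedged sketch: DFT power spectra of numerically mapped DNA sequences,
# even-scaled to a common length, then compared by Euclidean distance.
import numpy as np

MAP = {"A": 1 + 0j, "T": -1 + 0j, "C": 0 + 1j, "G": 0 - 1j}  # illustrative

def power_spectrum(seq):
    x = np.array([MAP[b] for b in seq])
    return np.abs(np.fft.fft(x)) ** 2

def even_scale(spec, m):
    """Stretch a length-n spectrum to length m (n <= m) by index mapping."""
    n = len(spec)
    idx = np.floor(np.arange(m) * n / m).astype(int)
    return spec[idx]

def dft_distance(seq1, seq2):
    p1, p2 = power_spectrum(seq1), power_spectrum(seq2)
    m = max(len(p1), len(p2))
    return float(np.linalg.norm(even_scale(p1, m) - even_scale(p2, m)))

print(dft_distance("ACGTACGT", "ACGTACGT"))  # 0.0
```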

  17. Comparison of Calculations and Measurements of the Off-Axis Radiation Dose (SI) in Liquid Nitrogen as a Function of Radiation Length.

    DTIC Science & Technology

    1984-12-01

    The off-axis dose in silicon was calculated using the electron/photon transport code CYLTRAN and measured using thermoluminescent dosimeters (TLDs) at various path lengths out to 2 radiation lengths. Calculations were performed on a CDC-7600 computer at Los Alamos National Laboratory and measurements …

  18. Evidence of MAOA genotype involvement in spatial ability in males.

    PubMed

    Mueller, Sven C; Cornwell, Brian R; Grillon, Christian; Macintyre, Jessica; Gorodetsky, Elena; Goldman, David; Pine, Daniel S; Ernst, Monique

    2014-07-01

    Although the monoamine oxidase-A (MAOA) gene has been linked to spatial learning and memory in animal models, convincing evidence in humans is lacking. Performance on an ecologically-valid, virtual computer-based equivalent of the Morris Water Maze task was compared between 28 healthy males with the low MAOA transcriptional activity and 41 healthy age- and IQ-matched males with the high MAOA transcriptional activity. The results revealed consistently better performance (reduced heading error, shorter path length, and reduced failed trials) for the high MAOA activity individuals relative to the low activity individuals. By comparison, groups did not differ on pre-task variables or strategic measures such as first-move latency. The results provide novel evidence of MAOA gene involvement in human spatial navigation using a virtual analogue of the Morris Water Maze task. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. The length but not the sequence of peptide linker modules exerts the primary influence on the conformations of protein domains in cellulosome multi-enzyme complexes.

    PubMed

    Różycki, Bartosz; Cazade, Pierre-André; O'Mahony, Shane; Thompson, Damien; Cieplak, Marek

    2017-08-16

    Cellulosomes are large multi-protein catalysts produced by various anaerobic microorganisms to efficiently degrade plant cell-wall polysaccharides down into simple sugars. X-ray and physicochemical structural characterisations show that cellulosomes are composed of numerous protein domains that are connected by unstructured polypeptide segments, yet the properties and possible roles of these 'linker' peptides are largely unknown. We have performed coarse-grained and all-atom molecular dynamics computer simulations of a number of cellulosomal linkers of different lengths and compositions. Our data demonstrates that the effective stiffness of the linker peptides, as quantified by the equilibrium fluctuations in the end-to-end distances, depends primarily on the length of the linker and less so on the specific amino acid sequence. The presence of excluded volume - provided by the domains that are connected - dampens the motion of the linker residues and reduces the effective stiffness of the linkers. Simultaneously, the presence of the linkers alters the conformations of the protein domains that are connected. We demonstrate that short, stiff linkers induce significant rearrangements in the folded domains of the mini-cellulosome composed of endoglucanase Cel8A in complex with scaffoldin ScafT (Cel8A-ScafT) of Clostridium thermocellum as well as in a two-cohesin system derived from the scaffoldin ScaB of Acetivibrio cellulolyticus. We give experimentally testable predictions on structural changes in protein domains that depend on the length of linkers.
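
    The stiffness metric named above, the equilibrium fluctuation of the linker end-to-end distance, can be sketched as follows; the trajectory array and the rigid-linker example are illustrative, not simulation data from the paper:

```python
# Hedged sketch: effective linker stiffness quantified as the standard
# deviation of the end-to-end distance over trajectory frames.
import numpy as np

def end_to_end_fluctuation(trajectory):
    """trajectory: array of shape (n_frames, n_residues, 3)."""
    r = np.linalg.norm(trajectory[:, -1, :] - trajectory[:, 0, :], axis=1)
    return float(np.std(r))

# A perfectly rigid two-residue "linker" shows zero fluctuation:
rigid = np.tile([[[0.0, 0.0, 0.0], [3.8, 0.0, 0.0]]], (100, 1, 1))
print(end_to_end_fluctuation(rigid))  # 0.0
```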

  20. Numerical evaluation of gas core length in free surface vortices

    NASA Astrophysics Data System (ADS)

    Cristofano, L.; Nobili, M.; Caruso, G.

    2014-11-01

    The formation and evolution of free surface vortices represent an important topic in many hydraulic intakes, since strong whirlpools introduce swirl flow at the intake and can cause entrainment of floating matter and gas. In particular, gas entrainment phenomena are an important safety issue for sodium-cooled fast reactors, because the introduction of gas bubbles within the core causes dangerous reactivity fluctuations. In this paper, a numerical evaluation of the gas core length in free surface vortices is presented, according to two different approaches. In the first, a prediction method developed by the Japanese researcher Sakai and his team has been applied. This method is based on the Burgers vortex model, and it is able to estimate the gas core length of a free surface vortex starting from two parameters calculated with single-phase CFD simulations: the circulation and the downward velocity gradient. The other approach consists of performing a two-phase CFD simulation of a free surface vortex, in order to numerically reproduce the gas-liquid interface deformation. A mapped, convergent mesh was used to reduce numerical error, and a VOF (Volume of Fluid) method was selected to track the gas-liquid interface. Two different turbulence models have been tested and analyzed. Experimental measurements of free surface vortex gas core length have been performed using optical methods, and the numerical results have been compared with the experimental measurements. The computational domain and the boundary conditions of the CFD simulations were set consistently with the experimental test conditions.

  1. On high heels and short muscles: A multiscale model for sarcomere loss in the gastrocnemius muscle

    PubMed Central

    Zöllner, Alexander M.; Pok, Jacquelynn M.; McWalter, Emily J.; Gold, Garry E.; Kuhl, Ellen

    2014-01-01

    High heels are a major source of chronic lower limb pain. Yet, more than one third of all women compromise health for looks and wear high heels on a daily basis. Changing from flat footwear to high heels induces chronic muscle shortening associated with discomfort, fatigue, reduced shock absorption, and increased injury risk. However, the long-term effects of high-heeled footwear on the musculoskeletal kinematics of the lower extremities remain poorly understood. Here we create a multiscale computational model for chronic muscle adaptation to characterize the acute and chronic effects of global muscle shortening on local sarcomere lengths. We perform a case study of a healthy female subject and show that raising the heel by 13 cm shortens the gastrocnemius muscle by 5% while the Achilles tendon remains virtually unaffected. Our computational simulation indicates that muscle shortening displays significant regional variations with extreme values of 22% in the central gastrocnemius. Our model suggests that the muscle gradually adjusts to its new functional length by a chronic loss of sarcomeres in series. Sarcomere loss varies significantly across the muscle with an average loss of 9%, virtually no loss at the proximal and distal ends, and a maximum loss of 39% in the central region. These changes reposition the remaining sarcomeres back into their optimal operating regime. Computational modeling of chronic muscle shortening provides a valuable tool to shape our understanding of the underlying mechanisms of muscle adaptation. Our study could open new avenues in orthopedic surgery and enhance treatment for patients with muscle contracture caused by other conditions than high heel wear such as paralysis, muscular atrophy, and muscular dystrophy. PMID:25451524

  2. Reducing Length of Stay, Direct Cost, and Readmissions in Total Joint Arthroplasty Patients With an Outcomes Manager-Led Interprofessional Team.

    PubMed

    Arana, Melissa; Harper, Licia; Qin, Huanying; Mabrey, Jay

    The purpose of this quality improvement project was to determine whether an outcomes manager-led interprofessional team could reduce length of stay and direct cost without increasing 30-day readmission rates in the total joint arthroplasty patient population. The goal was to promote interprofessional relationships combined with collaborative practice to promote coordinated care with improved outcomes. Results from this project showed that length of stay (total hip arthroplasty [THA] reduced by 0.4 days and total knee arthroplasty [TKA] reduced by 0.6 days) and direct cost (THA reduced by $1,020 per case and TKA reduced by $539 per case) were significantly decreased whereas 30-day readmission rates of both populations were not significantly increased.

  3. Energy measurement using flow computers and chromatography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeson, J.

    1995-12-01

    Arkla Pipeline Group (APG), along with most transmission companies, went to electronic flow measurement (EFM) to: (1) increase resolution and accuracy; (2) correct flow variables in real time; (3) increase speed in data retrieval; (4) reduce capital expenditures; and (5) reduce operation and maintenance expenditures. Prior to EFM, mechanical seven-day charts were used, which yielded 800 pressure and differential pressure readings. EFM yields 1.2 million readings, a 1500-fold improvement in resolution and additional flow representation. The total system accuracy of the EFM system is 0.25% compared with 2% for the chart system, which gives APG improved accuracy. A typical APG electronic measurement system includes a microprocessor-based flow computer, a telemetry communications package, and a gas chromatograph. Live relative density (specific gravity), BTU, CO₂, and N₂ are updated from the chromatograph to the flow computer every six minutes, which provides accurate MMBTU computations. Because the gas contract length has changed from years to monthly, and from a majority of direct sales to transports, both Arkla and its customers wanted access to actual volumes on a much more timely basis than is allowed with charts. The new electronic system allows volumes and other system data to be retrieved continuously if EFM is on Supervisory Control and Data Acquisition (SCADA), or daily if on dial-up telephone. Previously, because of chart integration, information was not available for four to six weeks. EFM costs much less than the combined costs of the telemetry transmitters, pressure and differential pressure chart recorders, and temperature chart recorder which it replaces. APG will install this equipment on smaller volume stations at a customer's expense. APG requires backup measurement on metering facilities of this size; it could be another APG flow computer or chart recorder, or the other company's flow computer or chart recorder.
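
    The resolution and accuracy figures quoted above follow from simple arithmetic:

```python
# Check of the quoted figures: 1.2 million EFM readings versus 800 chart
# readings per period, and 0.25% versus 2% total system accuracy.
chart_readings = 800
efm_readings = 1_200_000
print(efm_readings // chart_readings)  # 1500 (the quoted 1500x improvement)
print(2.0 / 0.25)                      # 8.0 (EFM accuracy is 8x better)
```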

  4. Association between playing computer games and mental and social health among male adolescents in Iran in 2014.

    PubMed

    Mohammadi, Mehrnoosh; RezaeiDehaghani, Abdollah; Mehrabi, Tayebeh; RezaeiDehaghani, Ali

    2016-01-01

    As adolescents spend much time playing computer games, the mental and social effects of such games should be considered. The present study aimed to investigate the association between playing computer games and mental and social health among male adolescents in Iran in 2014. This is a cross-sectional study conducted on 210 adolescents selected by multi-stage random sampling. Data were collected with the Goldberg and Hillier general health (28-item) and Kiez social health questionnaires. The association was tested by Pearson and Spearman correlation coefficients, one-way analysis of variance (ANOVA), and independent t-test. Computer game-related factors such as the location, type, length, adopted device, and mode of playing games were investigated. Results showed that 58.9% of the subjects played games on a computer alone for 1 h at home. Results also revealed that the subjects had appropriate mental health and 83.2% had moderate social health. Results showed a weak but significant association between the length of games and social health (r = -0.15, P = 0.03), the type of games and mental health (r = -0.16, P = 0.01), and the device used in playing games and social health (F = 0.95, P = 0.03). The findings showed that adolescents' mental and social health is negatively associated with their playing of computer games. Therefore, to promote their health, educating them about the correct way of playing computer games is essential, and their parents and school authorities, including nurses working at schools, should consider relevant factors such as the type, length, and device used in playing such games.
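
    The association tests named above are ordinary correlation coefficients. A minimal Pearson sketch on synthetic stand-in data (the values below are illustrative only, not the study's data):

```python
# Hedged sketch: Pearson correlation between play length and a health
# score, computed from first principles on made-up data.
from statistics import mean, pstdev

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

hours_played = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]   # illustrative
social_score = [80, 78, 75, 74, 70, 66]          # illustrative
print(round(pearson_r(hours_played, social_score), 3))  # close to -1
```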

  5. Improving the Performance of Two-Stage Gas Guns By Adding a Diaphragm in the Pump Tube

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Miller, Robert J.

    1995-01-01

    Herein, we study the technique of improving gun performance by installing a diaphragm in the pump tube of the gun. A CFD study is carried out for the 0.28 in. gun in the Hypervelocity Free Flight Radiation (HFF RAD) range at the NASA Ames Research Center. The normal, full-length pump tube is studied as well as two pump tubes of reduced length (approximately 75% and approximately 33% of the normal length). Significant improvements in performance are calculated to be gained for the reduced-length pump tubes upon the addition of the diaphragm. These improvements are realized either as reductions in maximum pressures in the pump tube and at the projectile base of approximately 20%, while maintaining the projectile muzzle velocity, or as increases in muzzle velocity of approximately 0.5 km/sec while not increasing the maximum pressures in the gun. Also, it is found that both guns with reduced pump tube lengths (with diaphragms) could match the performance of the gun with the full-length pump tube without a diaphragm, whereas the guns with reduced pump tube lengths without diaphragms could not. A five-shot experimental investigation of the pump tube diaphragm technique is carried out for the gun with a pump tube length of 75% normal. The CFD predictions of increased muzzle velocity are borne out by the experimental data. Modest but useful muzzle velocity increases (2.5-6%) are obtained upon the installation of a diaphragm, compared to a benchmark shot without a diaphragm.

  6. A novel potential/viscous flow coupling technique for computing helicopter flow fields

    NASA Technical Reports Server (NTRS)

    Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul

    1993-01-01

    The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.

  7. Comparison of radiological and morphologic assessments of myocardial bridges.

    PubMed

    Ercakmak, Burcu; Bulut, Elif; Hayran, Mutlu; Kaymaz, Figen; Bilgin, Selma; Hazirolan, Tuncay; Bayramoglu, Alp; Erbil, Mine

    2015-09-01

    In this study we aimed to compare the findings of coronary dual-source computed tomography angiography of myocardial bridges with cadaveric dissections. Forty-one isolated, non-damaged fresh sheep hearts were used in this study. Myocardial bridges of the anterior interventricular branch of the left coronary artery were demonstrated and analyzed by coronary dual-source computed tomography angiography. Dissections along the anterior interventricular branch of the left coronary artery were performed using a Zeiss OPMI pico microscope, and the lengths of the bridges were measured. The depths of the myocardial bridges were measured from the stained sections using a light microscope (Leica DM 6000B). Myocardial bridges (MBs) were found in all 41 hearts (100%) during dissection. Dual-source computed tomography angiography successfully detected 87.8% (36 of the 41 hearts) of the myocardial bridges measured on the anterior interventricular branch of the left coronary artery. The lengths of the myocardial bridges were 5-40 mm and 8-50 mm with dissection and dual-source computed tomography angiography, respectively, and the depths were 0.7-4.5 mm by dual-source computed tomography angiography and 0.745-4.632 mm morphologically. Comparison of the mean values of the lengths showed statistically significantly higher values (22.0 ± 8.5 vs. 17.7 ± 7.7 mm, p = 0.003) for the dissections. Radiological assessment also effectively discriminated complete bridges from incomplete ones. Our study showed that coronary computed tomography angiography is reliable in evaluating the presence and depth of myocardial bridges.

  8. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  9. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  10. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  11. Drag reduction using wrinkled surfaces in high Reynolds number laminar boundary layer flows

    NASA Astrophysics Data System (ADS)

    Raayai-Ardakani, Shabnam; McKinley, Gareth H.

    2017-09-01

    Inspired by the design of the ribbed structure of shark skin, passive drag reduction methods using stream-wise riblet surfaces have previously been developed and tested over a wide range of flow conditions. Such textures aligned in the flow direction have been shown to be able to reduce skin friction drag by 4%-8%. Here, we explore the effects of periodic sinusoidal riblet surfaces aligned in the flow direction (also known as a "wrinkled" texture) on the evolution of a laminar boundary layer flow. Using numerical analysis with the open source Computational Fluid Dynamics solver OpenFOAM, boundary layer flow over sinusoidal wrinkled plates with a range of wavelength to plate length ratios ( λ / L ), aspect ratios ( 2 A / λ ), and inlet velocities are examined. It is shown that in the laminar boundary layer regime, the riblets are able to retard the viscous flow inside the grooves creating a cushion of stagnant fluid that the high-speed fluid above can partially slide over, thus reducing the shear stress inside the grooves and the total integrated viscous drag force on the plate. Additionally, we explore how the boundary layer thickness, local average shear stress distribution, and total drag force on the wrinkled plate vary with the aspect ratio of the riblets as well as the length of the plate. We show that riblets with an aspect ratio of close to unity lead to the highest reduction in the total drag, and that because of the interplay between the local stress distribution on the plate and stream-wise evolution of the boundary layer the plate has to exceed a critical length to give a net decrease in the total drag force.

  12. Direct Numerical Simulations of a Full Stationary Wind-Turbine Blade

    NASA Astrophysics Data System (ADS)

    Qamar, Adnan; Zhang, Wei; Gao, Wei; Samtaney, Ravi

    2014-11-01

    Direct numerical simulation of flow past a full stationary wind-turbine blade is carried out at Reynolds number Re = 10,000, at 0 and 5 degree angles of attack. The study is targeted at creating a DNS database for verification of solvers and turbulence models that are utilized in wind-turbine modeling applications. The full blade comprises a circular cylinder base attached to a spanwise-varying airfoil cross-section profile (without twist). An overlapping composite grid technique is utilized to perform these DNS computations, which permits block structure in the mapped computational space. Different flow shedding regimes are observed along the blade length. Von Karman shedding is observed in the cylinder shaft region of the turbine blade. Along the airfoil cross-section of the blade, near-body shear layer breakdown is observed. A long tip vortex originates from the blade tip region, which exits the computational plane without being perturbed. Laminar-to-turbulent flow transition is observed along the blade length. The amplitude of the turbulent fluctuations decreases along the blade length, and the flow remains laminar in the vicinity of the blade tip. The Strouhal number is found to decrease monotonically along the blade length. Average lift and drag coefficients are also reported for the cases investigated. Supported by funding under a KAUST OCRF-CRG grant.

  13. Computational estimation of the influence of the main body-to-iliac limb length ratio on the displacement forces acting on an aortic endograft. Theoretical application to Bolton Treovance® Abdominal Stent-Graft.

    PubMed

    Georgakarakos, E; Xenakis, A; Georgiadis, G S; Argyriou, C; Manopoulos, C; Tsangaris, S; Lazarides, M K

    2014-10-01

    The influence of the relative iliac limb length of an endograft (EG) on the displacement forces (DF) predisposing to adverse effects is under-appreciated in the literature. Therefore, we conducted a computational study to estimate the magnitude of the DF acting over an entire reconstructed EG and its counterparts for a range of main body-to-iliac limb length (L1/L2) ratios. A custom bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. Accordingly, fluid-structure interaction was used to estimate the DF. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5. The increase in L1/L2 slightly affected the DF on the EG (ranging from 3.8 to 4.1 N) and its bifurcation (4.0 to 4.6 N). However, the forces exerted at the iliac sites were strongly affected by the L1/L2 values (ranging from 0.9 to 2.2 N), showing a parabolic pattern with a minimum at a ratio of 0.6. It is suggested that the hemodynamic effect of the relative limb lengths should not be considered negligible. A high main body-to-iliac limb length ratio seems to favor a low bifurcation hemodynamically, but it attenuates the main body-iliac limb modular stability. Further clinical studies should investigate the relevant value of these findings. The Bolton Treovance® device is presented as a representative, improved stent-graft design that takes these hemodynamic parameters into account in order to achieve a promising, improved clinical performance.

  14. An improved method for design of expansion-chamber mufflers with application to an operational helicopter

    NASA Technical Reports Server (NTRS)

    Parrott, T. L.

    1973-01-01

    An improved method for the design of expansion-chamber mufflers is described and applied to the task of reducing exhaust noise generated by a helicopter. The method is an improvement of standard transmission-line theory in that it accounts for the effect of the mean exhaust-gas flow on the acoustic-transmission properties of a muffler system, including the termination boundary condition. The method has been computerized, and the computer program includes an optimization procedure that adjusts muffler component lengths to achieve a minimum specified desired transmission loss over a specified frequency range. A printout of the program is included together with a user-oriented description.
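
    As a point of reference for the flow-corrected method described above, the classical zero-flow, plane-wave transmission loss of a single expansion chamber (a textbook baseline, not the report's improved method) can be sketched as:

```python
# Classical no-flow transmission loss of one expansion chamber:
#   TL = 10*log10(1 + 0.25*(m - 1/m)**2 * sin(k*L)**2)
# m: area expansion ratio, L: chamber length, k = 2*pi*f/c.
import math

def transmission_loss_db(freq_hz, chamber_len_m, area_ratio, c=343.0):
    k = 2 * math.pi * freq_hz / c
    m = area_ratio
    return 10 * math.log10(
        1 + 0.25 * (m - 1 / m) ** 2 * math.sin(k * chamber_len_m) ** 2)

# TL peaks where k*L = pi/2, i.e. at the quarter-wave frequency:
f_peak = 343.0 / (4 * 0.25)  # 343 Hz for a 0.25 m chamber
print(round(transmission_loss_db(f_peak, 0.25, 4.0), 1))  # 6.5 dB
```

    The computerized method in the report additionally accounts for mean exhaust-gas flow and the termination boundary condition, which this baseline formula omits.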

  15. Parameter estimation of extended free-burning electric arc within 1 kA

    NASA Astrophysics Data System (ADS)

    Sun, Qiuqin; Liu, Hao; Wang, Feng; Chen, She; Zhai, Yujia

    2018-05-01

    A long electric arc, a common phenomenon in power systems, not only damages electrical equipment but also threatens the safety of the system. In this work, a series of tests on a long electric arc in free air have been conducted. The arc voltage and current data were obtained, and the arc trajectories were captured using a high-speed camera. The arc images were digitally processed by means of edge detection, and the arc length was extracted from them. Based on the experimental data, the characteristics of the long arc are discussed. It is shown that the arc voltage waveform is close to a square wave with high-frequency components, whereas the current is almost sinusoidal. As the arc length elongates, the arc voltage and resistance increase sharply. The arc takes a spiral shape under the effect of magnetic forces. The arc length shortens briefly when the short-circuit phenomenon occurs. Based on the classical Mayr model, the parameters of the long electric arc, including voltage gradient and time constant, for different lengths and current amplitudes are estimated using the linear least-squares method. To reduce the computational error, segmentation interpolation is also employed. The results show that the voltage gradient of the long arc is mainly determined by the current amplitude but is almost independent of the arc length. However, the time constant is jointly governed by these two variables. The voltage gradient of the arc with current amplitudes of 200-800 A is in the range of 3.9 V/cm-20 V/cm, and the voltage gradient decreases with increasing current.
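
    The least-squares step can be sketched by linearising the textbook Mayr equation, d(ln g)/dt = (u*i)/(tau*P0) - 1/tau, with time constant tau and power loss P0. This is a minimal sketch on noiseless synthetic data: the paper reports voltage gradient rather than P0 and also uses segmentation interpolation, both omitted here, and the numeric values are illustrative:

```python
# Hedged sketch: fit the linearised Mayr model y = a*p + b, where
# y = d(ln g)/dt, p = u*i, a = 1/(tau*P0), and b = -1/tau, by least squares.
import numpy as np

def fit_mayr(p, y):
    A = np.column_stack([p, np.ones_like(p)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    tau = -1.0 / b
    P0 = 1.0 / (a * tau)
    return tau, P0

# Noiseless synthetic data with tau = 1e-4 s and P0 = 5e4 W (illustrative):
tau_true, P0_true = 1e-4, 5e4
p = np.linspace(1e4, 1e5, 50)                   # instantaneous power u*i [W]
y = p / (tau_true * P0_true) - 1.0 / tau_true   # d(ln g)/dt [1/s]
tau_est, P0_est = fit_mayr(p, y)
print(round(tau_est / tau_true, 3), round(P0_est / P0_true, 3))  # 1.0 1.0
```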

  16. Exploration of the psychophysics of a motion displacement hyperacuity stimulus.

    PubMed

    Verdon-Roe, Gay Mary; Westcott, Mark C; Viswanathan, Ananth C; Fitzke, Frederick W; Garway-Heath, David F

    2006-11-01

    To explore the summation properties of a motion-displacement hyperacuity stimulus with respect to stimulus area and luminance, with the goal of applying the results to the development of a motion-displacement test (MDT) for the detection of early glaucoma. A computer-generated line stimulus was presented with displacements randomized between 0 and 40 minutes of arc (min arc). Displacement thresholds (50% seen) were compared for stimuli of equal area but different edge length (orthogonal to the direction of motion) at four retinal locations. Also, MDT thresholds were recorded at five values of Michelson contrast (25%-84%) for each of five line lengths (11-128 min arc) at a single nasal location (-27,3). Frequency-of-seeing (FOS) curves were generated, and displacement thresholds and interquartile ranges (IQR, 25%-75% seen) were determined by probit analysis. Equivalent displacement thresholds were found for stimuli of equal area but half the edge length. Elevations of thresholds and IQR were demonstrated as line length and contrast were reduced. Equivalent displacement thresholds were also found for stimuli of equivalent energy (stimulus area x [stimulus luminance - background luminance]), in accordance with Ricco's law. There was a linear relationship (slope -0.5) between log MDT threshold and log stimulus energy. Stimulus area, rather than edge length, determined displacement thresholds within the experimental conditions tested. MDT thresholds are linearly related to the square root of the total energy of the stimulus. A new law, the threshold energy-displacement (TED) law, is proposed to apply to MDT summation properties, giving the relationship T = K log E, where T is the MDT threshold, K is a constant, and E is the stimulus energy.

  17. Finite Element Analysis of Composite Joint Configurations with Gaps and Overlaps

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2014-01-01

    The goal of the current study is to identify scenarios for which thermal and moisture effects become significant in the loading of a composite structure. In the current work, a simple configuration was defined, and material properties were selected. A Fortran routine was created to automate the mesh generation process. The routine was used to create the models for the initial mesh refinement study. A combination of element length and width suitable for further studies was identified. Also, the effect of the overlap length and gap length on computed shear and through-thickness stresses along the bondline of the joints was studied for the mechanical load case. Further, the influence of neighboring gaps and overlaps on these joint stresses was studied and found to be negligible. The results suggest that for an initial study it is sufficient to focus on one configuration with fixed overlap and gap lengths to study the effects of mechanical, thermal and moisture loading and combinations thereof on computed joint stresses.

  18. Energetic and entropic components of the Tolman length for mW and TIP4P/2005 water nanodroplets

    NASA Astrophysics Data System (ADS)

    Joswiak, Mark N.; Do, Ryan; Doherty, Michael F.; Peters, Baron

    2016-11-01

    The surface free energy of a droplet is approximately γ(R) = γ(∞)(1 − 2δ/R), with R being the droplet radius and δ being the Tolman length. Here we use the mitosis method to compute δ = −0.56 ± 0.1 Å at 300 K for mW water, indicating that γ(R) increases as the droplet size decreases. The computed Tolman length agrees quite well with a previous study of TIP4P/2005 water. We also decompose the size-dependent surface free energy into energetic and entropic contributions for the mW and TIP4P/2005 force fields. Despite having similar Tolman lengths, the energy-entropy decompositions are very different for the two force fields. We discuss critical assumptions which lead to these findings and their relation to experiments on the nucleation of water droplets. We also discuss surface broken bonds and structural correlations as possible explanations for the energetic and entropic contributions.
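    The first-order Tolman correction quoted above is simple to evaluate. The sketch below plugs in the reported δ = −0.56 Å together with an illustrative planar value γ(∞) (the numerical γ(∞) is assumed here, not taken from the paper), showing that a negative Tolman length makes γ(R) grow as the droplet shrinks:

```python
def tolman_gamma(R_angstrom, gamma_inf=66.0, delta=-0.56):
    """Droplet surface free energy under the first-order Tolman correction
    gamma(R) = gamma_inf * (1 - 2*delta/R), with R in angstroms.

    gamma_inf = 66.0 mN/m is an illustrative planar value, NOT from the
    paper; delta = -0.56 A is the mW result quoted in the abstract.
    """
    return gamma_inf * (1.0 - 2.0 * delta / R_angstrom)

# A negative Tolman length means smaller droplets have larger gamma(R).
g10 = tolman_gamma(10.0)    # 1 nm radius droplet
g100 = tolman_gamma(100.0)  # 10 nm radius droplet
```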

  19. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% compared with a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement that minimizes the border length, and propose a new threading technique that significantly reduces the border length compared with standard designs.

  20. Change of aortic length after closing-opening wedge osteotomy for patients with ankylosing spondylitis with thoracolumbar kyphosis: a computed tomographic study.

    PubMed

    Ji, Ming-Liang; Qian, Bang-ping; Qiu, Yong; Wang, Bin; Zhu, Ze-zhang; Yu, Yang; Jiang, Jun

    2013-10-15

    A computed tomographic study. To investigate the change in aortic length in patients with ankylosing spondylitis (AS) with thoracolumbar kyphosis after closing-opening wedge osteotomy (COWO). Several previous studies reported that COWO can effectively correct severe thoracolumbar kyphosis caused by AS. However, one disadvantage of COWO is elongation of the aorta, which increases the risk of aortic injury. To date, no studies have analyzed the alteration in aortic length in patients with AS undergoing COWO for thoracolumbar kyphosis. A total of 21 consecutive patients with AS with a mean age of 38.9 years undergoing COWO for the correction of thoracolumbar kyphosis were retrospectively studied. Radiographical measurements included global kyphosis, thoracic kyphosis, lumbar lordosis, angle of fusion levels, local kyphosis, and anterior height of the osteotomized vertebra. The computed tomographic scans of the spine were used to measure the aortic diameter (at the site of the osteotomy) and length (the length between the superior endplate of the upper instrumented vertebra and the inferior endplate of L4). The aortic length increased by an average of 2.2 cm postoperatively. Significant changes in global kyphosis, local kyphosis, angle of fusion levels, lumbar lordosis, anterior height of the osteotomized vertebra, and aortic diameter at the site of the osteotomy were observed (P < 0.01). Significant correlation was noted between aortic length and changes in global kyphosis (r = 0.525, P = 0.015), local kyphosis (r = 0.654, P = 0.001), angle of fusion levels (r = 0.634, P = 0.002), and lumbar lordosis (r = 0.538, P = 0.012). Aortic lengthening after COWO for correction of kyphosis was quantitatively confirmed by this study. Spine surgeons should be aware of the potential risk for the development of aortic injury in patients with AS undergoing COWO for the correction of thoracolumbar kyphosis. Level of Evidence: 4.

  1. Evaluation of Eyeball and Orbit in Relation to Gender and Age.

    PubMed

    Özer, Cenk Murat; Öz, Ibrahim Ilker; Şerifoğlu, Ismail; Büyükuysal, Mustafa Çağatay; Barut, Çağatay

    2016-11-01

    The orbital aperture is the entrance to the orbit in which most important visual structures such as the eyeball and the optic nerve are found. It is vital not only for the visual system but also for the evaluation and recognition of the face. Eyeball volume is essential for diagnosing microphthalmos or buphthalmos in several eye disorders. Knowing the length of the optic nerve is necessary in selecting the right instruments for enucleation. Therefore, the aim of this study was to evaluate eyeball volume, orbital aperture, and optic nerve dimensions for a morphological description in a Turkish population sample according to gender and body side.Paranasal sinus computed tomography (CT) scans of 198 individuals (83 females, 115 males) aged between 5 and 74 years were evaluated retrospectively. The dimensions of orbital aperture, axial length and volume of eyeball, and diameter and length of the intraorbital part of the optic nerve were measured. Computed tomography examinations were performed on an Activion 16 CT Scanner (Toshiba Medical Systems, 2008 Japan). The CT measurements were calculated by using OsiriX software on a personal computer. All parameters were evaluated according to gender and right/left sides. A statistically significant difference between genders was found with respect to axial length of eyeball, optic nerve diameter, dimensions of orbital aperture on both sides, and right optic nerve length. Furthermore, certain statistically significant side differences were also found. There were statistically significant correlations between age and the axial length of the eyeball, optic nerve diameter, and the transverse length of the orbital aperture on both sides for the whole study group.In this study we determined certain morphometric parameters of the orbit. 
These outcomes may help in developing a database of normal orbital values for the Turkish population, so that orbital disease and orbital deformities can be assessed quantitatively, both for preoperative planning and for evaluating postoperative outcomes.

  2. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    NASA Astrophysics Data System (ADS)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
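    The workhorse of the reduced SPD solve above is the conjugate gradient iteration. As a minimal reminder of that building block (without the divergence-free reduction or the additive Schwarz preconditioner, which are the paper's contributions), a plain unpreconditioned CG looks like:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A (unpreconditioned CG)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system: M^T M + I is symmetric positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```

    In the paper's setting, each `A @ p` would be a distributed sparse product and the residual would be preconditioned by the Schwarz method before the direction update.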

  3. Reduction in patient burdens with graphical computerized adaptive testing on the ADL scale: tool development and simulation.

    PubMed

    Chien, Tsair-Wei; Wu, Hing-Man; Wang, Weng-Chung; Castillo, Roberto Vasquez; Chou, Willy

    2009-05-05

    The aim of this study was to verify that computerized adaptive testing (CAT) can save time and reduce the burden on patients, nurses, and occupational therapists. Based on an item bank of the Barthel Index (BI) and the Frenchay Activities Index (FAI) for assessing comprehensive activities of daily living (ADL) function in stroke patients, we developed a Visual Basic for Applications (VBA)-Excel CAT module, and (1) investigated through simulation whether the average test length via CAT is shorter than that of the traditional all-item-answered non-adaptive testing (NAT) approach, (2) illustrated the CAT multimedia on a tablet PC, showing data collection and response errors of ADL clinical functional measures in stroke patients, and (3) demonstrated quality control of the endorsed scale with fit statistics to detect response errors, which technicians can immediately reconfirm once the patient completes the CAT assessment. The results show that fewer items need to be endorsed with CAT (M = 13.42) than with NAT (M = 23), a 41.64% gain in test-length efficiency, whereas the average ability estimates differ insignificantly between CAT and NAT. This study found that mobile nursing services placed at the bedsides of patients could, through the programmed VBA-Excel CAT module, reduce the burden on patients and save time compared with traditional NAT paper-and-pencil appraisals.
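    The item-selection loop behind a CAT of this kind can be sketched with a Rasch model: after each response, re-estimate ability, then administer the unused item with maximum Fisher information, stopping once the standard error is small enough. This is a generic sketch (EAP estimation over a grid, with assumed item difficulties), not the study's VBA-Excel module:

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch model probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def run_cat(difficulties, respond, se_stop=0.5):
    """Administer items adaptively; return (ability estimate, items used).

    respond(item_index) -> 0/1 supplies the examinee's answer. Ability is
    re-estimated after each response by EAP over a grid with a N(0, 1)
    prior; testing stops once the posterior SE drops below se_stop.
    """
    grid = np.linspace(-4.0, 4.0, 161)
    post = np.exp(-grid ** 2 / 2.0)          # unnormalised N(0, 1) prior
    unused = set(range(len(difficulties)))
    theta_hat, n_used = 0.0, 0
    while unused:
        # Administer the unused item with maximum Fisher information p(1 - p).
        def info(k):
            p = rasch_p(theta_hat, difficulties[k])
            return p * (1.0 - p)
        j = max(unused, key=info)
        unused.remove(j)
        n_used += 1
        u = respond(j)
        pj = rasch_p(grid, difficulties[j])
        post = post * (pj if u else 1.0 - pj)
        post = post / post.sum()
        theta_hat = float(post @ grid)
        se = float(np.sqrt(post @ (grid - theta_hat) ** 2))
        if se < se_stop:
            break
    return theta_hat, n_used

# Simulated examinee (true ability 0.8) answering from a 30-item bank.
rng = np.random.default_rng(1)
bank = np.linspace(-3.0, 3.0, 30)
theta_true = 0.8
est, n_items = run_cat(bank, lambda j: int(rng.random() < rasch_p(theta_true, bank[j])))
```

    The stopping rule is what shortens the test relative to NAT: the loop ends as soon as the ability estimate is precise enough, rather than after every item in the bank.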

  4. Method and apparatus of assessing down-hole drilling conditions

    DOEpatents

    Hall, David R [Provo, UT; Pixton, David S [Lehi, UT; Johnson, Monte L [Orem, UT; Bartholomew, David B [Springville, UT; Fox, Joe [Spanish Fork, UT

    2007-04-24

    A method and apparatus for use in assessing down-hole drilling conditions are disclosed. The apparatus includes a drill string, a plurality of sensors, a computing device, and a down-hole network. The sensors are distributed along the length of the drill string and are capable of sensing localized down-hole conditions while drilling. The computing device is coupled to at least one sensor of the plurality of sensors. The data is transmitted from the sensors to the computing device over the down-hole network. The computing device analyzes data output by the sensors and representative of the sensed localized conditions to assess the down-hole drilling conditions. The method includes sensing localized drilling conditions at a plurality of points distributed along the length of a drill string during drilling operations; transmitting data representative of the sensed localized conditions to a predetermined location; and analyzing the transmitted data to assess the down-hole drilling conditions.

  5. Design of nucleic acid strands with long low-barrier folding pathways.

    PubMed

    Condon, Anne; Kirkpatrick, Bonnie; Maňuch, Ján

    2017-01-01

    A major goal of natural computing is to design biomolecules, such as nucleic acid sequences, that can be used to perform computations. We design sequences of nucleic acids that are "guaranteed" to have long folding pathways relative to their length. These particular sequences, with high probability, follow low-barrier folding pathways that visit a large number of distinct structures. Long folding pathways are interesting because they demonstrate that natural computing can potentially support long and complex computations. Formally, we provide the first scalable designs of molecules whose low-barrier folding pathways, with respect to a simple, stacked pair energy model, grow superlinearly with the molecule length, but for which all significantly shorter alternative folding pathways have an energy barrier that is [Formula: see text] times that of the low-barrier pathway for any [Formula: see text] and a sufficiently long sequence.

  6. A detailed experimental study of a DNA computer with two endonucleases.

    PubMed

    Sakowski, Sebastian; Krasiński, Tadeusz; Sarnik, Joanna; Blasiak, Janusz; Waldmajer, Jacek; Poplawski, Tomasz

    2017-07-14

    Great advances in biotechnology have allowed the construction of a computer from DNA. One of the proposed solutions is the biomolecular finite automaton, a simple two-state DNA computer without memory, which was presented by Ehud Shapiro's group at the Weizmann Institute of Science. The main problem with this computer, in which biomolecules carry out logical operations, is scaling up its complexity, that is, increasing the number of states of the biomolecular automaton. In this study, we constructed (under laboratory conditions) a six-state DNA computer that uses two endonucleases (e.g. AcuI and BbvI) and a ligase, and we present a detailed experimental verification of its feasibility. We describe the effects of the number of states, the length of the input data, and nondeterminism on the computing process. We also tested different automata (with three, four, and six states) running on various accepted input words of different lengths, such as ab, aab, aaab, and ababa, and on the unaccepted word ba. Moreover, this article presents the reaction optimization and the methods of eliminating certain biochemical problems occurring in the implementation of a biomolecular DNA automaton based on two endonucleases.

  7. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of the USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithms' characteristics, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
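    For context, the (R)USLE topographic factor that such pipelines compute per cell combines a slope-length term L and a slope-steepness term S. A common single-cell form, using McCool et al.'s relations (assumed here; the paper's flow-accumulation-based formulation over a DEM is more involved), is:

```python
import math

def ls_factor(slope_length_m, slope_deg):
    """Single-cell (R)USLE LS factor using McCool et al.'s relations.

    L = (lambda / 22.13) ** m, with m = beta / (1 + beta) and
    beta = (sin th / 0.0896) / (3 * sin th ** 0.8 + 0.56);
    S = 10.8 sin th + 0.03 for slopes below 9 %, else 16.8 sin th - 0.5.
    """
    th = math.radians(slope_deg)
    s = math.sin(th)
    beta = (s / 0.0896) / (3.0 * s ** 0.8 + 0.56)
    m = beta / (1.0 + beta)
    L = (slope_length_m / 22.13) ** m
    S = 10.8 * s + 0.03 if math.tan(th) < 0.09 else 16.8 * s - 0.5
    return L * S
```

    At the reference slope length of 22.13 m the L term is 1 by construction, so the LS factor reduces to the steepness term alone; in the parallel model, the slope-length input would come from the flow-accumulation step.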

  8. Determination of dosimetric quantities in pediatric abdominal computed tomography scans*

    PubMed Central

    Jornada, Tiago da Silva; da Silva, Teógenes Augusto

    2014-01-01

    Objective Aiming at contributing to the knowledge on doses in computed tomography (CT), this study has the objective of determining dosimetric quantities associated with pediatric abdominal CT scans, comparing the data with diagnostic reference levels (DRL). Materials and methods The study was developed with a Toshiba Asteion single-slice CT scanner and a GE BrightSpeed multi-slice CT unit in two hospitals. Measurements were performed with a pencil-type ionization chamber and a 16 cm-diameter polymethylmethacrylate trunk phantom. Results No significant difference was observed in the values for weighted air kerma index (CW), but the differences were relevant in values for volumetric air kerma index (CVOL), air kerma-length product (PKL,CT) and effective dose. Conclusion Only the CW values were lower than the DRL, suggesting that dose optimization might not be necessary. However, PKL,CT and effective dose values stressed that there still is room for reducing pediatric radiation doses. The present study emphasizes the importance of determining all dosimetric quantities associated with CT scans. PMID:25741103

  9. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speeds of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The dependence of computational time on the number of basis functions is also discussed and compared with that of traditional quantum chemistry codes on traditional computer architectures.

  10. Placement of clock gates in time-of-flight optoelectronic circuits

    NASA Astrophysics Data System (ADS)

    Feehrer, John R.; Jordan, Harry F.

    1995-12-01

    Time-of-flight synchronized optoelectronic circuits capitalize on the highly controllable delays of optical waveguides. Circuits have no latches; synchronization is achieved by adjustment of the lengths of waveguides that connect circuit elements. Clock gating and pulse stretching are used to restore timing and power. A functional circuit requires that every feedback loop contain at least one clock gate to prevent cumulative timing drift and power loss. A designer specifies an ideal circuit, which contains no or very few clock gates. To make the circuit functional, we must identify locations in which to place clock gates. Because clock gates are expensive, add area, and increase delay, a minimal set of locations is desired. We cast this problem in graph-theoretical form as the minimum feedback edge set problem and solve it by using an adaptation of an algorithm proposed in 1966 [IEEE Trans. Circuit Theory CT-13, 399 (1966)]. We discuss a computer-aided-design implementation of the algorithm that reduces computational complexity and demonstrate it on a set of circuits.
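    A simple way to obtain *a* feedback edge set (every cycle broken, though not necessarily the minimum-size set the 1966 algorithm targets) is to collect the back edges of a depth-first search: every directed cycle must contain at least one. The sketch below is this baseline heuristic, not the adapted algorithm from the paper:

```python
def feedback_edges(n, edges):
    """Return a set of edges whose removal makes the digraph acyclic.

    Collects DFS back edges: every directed cycle contains at least one,
    so removing them breaks all cycles (a valid, though not necessarily
    minimum, feedback edge set).
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n
    back = set()

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:        # edge into the active DFS path
                back.add((u, v))
            elif color[v] == WHITE:
                dfs(v)
        color[u] = BLACK

    for s in range(n):
        if color[s] == WHITE:
            dfs(s)
    return back

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the digraph has no directed cycle."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    stack = [u for u in range(n) if indeg[u] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

# Two feedback loops sharing a node: 0->1->2->0 and 1->3->1.
loops = [(0, 1), (1, 2), (2, 0), (1, 3), (3, 1)]
cut = feedback_edges(4, loops)
remaining = [e for e in loops if e not in cut]
```

    In the circuit setting, each returned edge is a candidate waveguide on which to place a clock gate; minimizing the number of such edges is the harder optimization the paper addresses.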

  11. Practical multipeptide synthesis: dedicated software for the definition of multiple, overlapping peptides covering polypeptide sequences.

    PubMed

    Heegaard, P M; Holm, A; Hagerup, M

    1993-01-01

    A personal computer program for the conversion of linear amino acid sequences into multiple, small, overlapping peptide sequences has been developed. Peptide lengths and "jumps" (the distance between two consecutive overlapping peptides) are defined by the user. To facilitate the use of the program for parallel solid-phase chemical peptide synthesis and the synchronous production of multiple peptides, the amino acids required at each acylation step are laid out by the program in a convenient standard multi-well setup. Also given are the total number of equivalents, as well as the derived amount in milligrams (depending on user-defined equivalent weights and molar surplus), of each amino acid. The program facilitates the implementation of multipeptide synthesis, e.g., for the elucidation of polypeptide structure-function relationships, and greatly reduces the risk of introducing mistakes at the planning step. It is written in Pascal and runs on any DOS-based personal computer. No special graphic display is needed.
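    The core windowing step such a program performs is easy to sketch: slide a window of the chosen peptide length along the sequence in steps of the chosen jump, and tally amino acid usage across all peptides. A minimal sketch (the function names are illustrative, not those of the published Pascal program):

```python
from collections import Counter

def overlapping_peptides(sequence, length, jump):
    """Return the overlapping peptides of a given length, starting every
    `jump` residues along the sequence."""
    if length > len(sequence):
        return []
    peptides = [sequence[i:i + length]
                for i in range(0, len(sequence) - length + 1, jump)]
    # Ensure the C-terminus is covered even if the last jump overshoots.
    if peptides[-1] != sequence[-length:]:
        peptides.append(sequence[-length:])
    return peptides

def residue_totals(peptides):
    """Count how often each amino acid occurs over all peptides, the basis
    for laying out per-cycle amino acid amounts."""
    return Counter("".join(peptides))

peps = overlapping_peptides("MKTAYIAKQR", 6, 2)   # 10-mer, hexapeptides, jump 2
totals = residue_totals(peps)
```

    Converting the residue totals to milligrams would multiply each count by the user-defined equivalents, molar surplus, and equivalent weight, as the abstract describes.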

  12. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirement. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.

  13. Numerical calculation of boundary layers and wake characteristics of high-speed trains with different lengths

    PubMed Central

    Zhou, Dan; Niu, Jiqiang

    2017-01-01

    Trains with different numbers of cars running in the open air were simulated using the delayed detached-eddy simulation (DDES). The numbers of cars included in the simulation were 3, 4, 5 and 8. The aim of this study was to investigate how train length influences the boundary layer, the wake flow, the surface pressure, the aerodynamic drag and the friction drag. To verify the accuracy of the mesh and methods, the drag coefficients from the numerical simulation of trains with 3 cars were compared with those from the wind tunnel test, and agreement was obtained. The results show that the boundary layer is thicker and the wake vortices are less symmetric as the train length increases. As a result, train length greatly affects pressure: between the 3-car and 8-car trains, the upper surface pressure of the tail car was reduced by 2.9%, the side surface pressure by 8.3% and the underside surface pressure by 19.7%. In addition, train length also has a significant effect on the friction drag coefficient and the drag coefficient. The friction drag coefficient of each car in a configuration decreases along the length of the train. Comparing trains of 3 cars with those of 8 cars, the friction drag coefficient of the tail car was reduced by 8.6% and its drag coefficient by 3.7%. PMID:29261758

  14. Association between playing computer games and mental and social health among male adolescents in Iran in 2014

    PubMed Central

    Mohammadi, Mehrnoosh; RezaeiDehaghani, Abdollah; Mehrabi, Tayebeh; RezaeiDehaghani, Ali

    2016-01-01

    Background: As adolescents spend much time playing computer games, the mental and social effects of these games should be considered. The present study aimed to investigate the association between playing computer games and mental and social health among male adolescents in Iran in 2014. Materials and Methods: This is a cross-sectional study conducted on 210 adolescents selected by multi-stage random sampling. Data were collected with the Goldberg and Hillier general health (28 items) and Kiez social health questionnaires. The association was tested by Pearson and Spearman correlation coefficients, one-way analysis of variance (ANOVA), and the independent t-test. Computer game related factors such as the location, type, length, the adopted device, and mode of playing games were investigated. Results: Results showed that 58.9% of the subjects played games on a computer alone for 1 h at home. Results also revealed that the subjects had appropriate mental health and 83.2% had moderate social health. Results showed a weak but significant association between the length of games and social health (r = −0.15, P = 0.03), the type of games and mental health (r = −0.16, P = 0.01), and the device used in playing games and social health (F = 0.95, P = 0.03). Conclusions: The findings showed that adolescents’ mental and social health is negatively associated with their playing computer games. Therefore, to promote their health, educating them about the correct way of playing computer games is essential, and their parents and school authorities, including nurses working at schools, should attend to relevant factors such as the type, length, and device used in playing such games. PMID:27095988

  15. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2 to the m power). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
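    The conventional direct method the abstract refers to simply evaluates the received polynomial at successive powers of a primitive element α of GF(2^m). For example, in GF(2^4) with primitive polynomial x^4 + x + 1 (an illustrative choice), the syndromes S_i = r(α^i) of a length-15 received word can be computed by Horner's rule:

```python
# GF(2^4) arithmetic via exp/log tables, primitive polynomial x^4 + x + 1.
PRIM, FIELD = 0b10011, 16
EXP = [0] * 15
x = 1
for k in range(15):
    EXP[k] = x
    x <<= 1
    if x & FIELD:
        x ^= PRIM          # reduce modulo the primitive polynomial
LOG = {EXP[k]: k for k in range(15)}

def gf_mul(a, b):
    """Multiply two elements of GF(16)."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

def syndromes(received, num):
    """Direct syndrome computation S_i = r(alpha^i), i = 1..num, by Horner."""
    out = []
    for i in range(1, num + 1):
        alpha_i = EXP[i % 15]
        acc = 0
        for coeff in reversed(received):   # r(x) = sum_k r_k x^k
            acc = gf_mul(acc, alpha_i) ^ coeff
        out.append(acc)
    return out

# A single error e at position j in an all-zero codeword gives S_i = e * alpha^(i*j).
n, j = 15, 5
e = EXP[3]
r = [0] * n
r[j] = e
S = syndromes(r, 4)
```

    This direct evaluation costs O(n) field multiplications per syndrome; the transform-based scheme of the paper reduces the total multiplication count.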

  16. Improving agreement between static method and dynamic formula for driven cast-in-place piles.

    DOT National Transportation Integrated Search

    2013-06-01

    This study focuses on comparing the capacities and lengths of piling necessary as determined with a static method and with a dynamic formula. Pile capacities and their required lengths are determined two ways: 1) using a design and computed method, s...

  17. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    PubMed

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angles. This method achieves better performance in reducing computational complexity and in selecting satellites. A condition on the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated to reduce the span of candidates. The noise error is always the key factor for the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method obtains better results in relating the geometric model to the noise error; although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail, and the computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment conducted on a selected campus proved the performance to be effective. Our results are based on simulated and real-time GNSS data and apply to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.

  18. The effect of intrinsic muscular nonlinearities on the energetics of locomotion in a computational model of an anguilliform swimmer.

    PubMed

    Hamlet, Christina; Fauci, Lisa J; Tytell, Eric D

    2015-11-21

    Animals move through their environments using muscles to produce force. When an animal's nervous system activates a muscle, the muscle produces different amounts of force depending on its length, its shortening velocity, and its time history of force production. These muscle forces interact with forces from passive tissue properties and forces from the external environment. Using an integrative computational model that couples an elastic, actuated model of an anguilliform, lamprey-like swimmer with a surrounding Navier-Stokes fluid, we study the effects of this coupling between muscle force and body motion. Swimmers with different forms of this coupling can achieve similar motions but use different amounts of energy. Of the properties we considered, the velocity dependence is the most important for reducing energy costs and stabilizing oscillations. These effects are strongly influenced by how rapidly the muscle deactivates; if force decays too slowly, muscles on opposite sides of the body end up fighting each other, increasing the energy cost. Work-dependent deactivation, an effect that causes a muscle to deactivate more rapidly if it has recently produced mechanical work, acts together with the velocity dependence to reduce the energy cost of swimming.

  19. A methodology for direct quantification of over-ranging length in helical computed tomography with real-time dosimetry.

    PubMed

    Tien, Christopher J; Winslow, James F; Hintenlang, David E

    2011-01-31

    In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed in dose-length-product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at pitch of 0.5 to 6.72 cm at a pitch of 1.5, which corresponds to DLP of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of beam collimation width. Over-ranging length measured as a function of reconstruction slice thicknesses produced two linear regions similar to previous publications. Over-ranging is quantified with both absolute length and DLP, which contributes about 60 mGy-cm or about 10% of DLP for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, in comparison with 5% in other methodologies. 
Clinical implementation can be simplified by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1%. This methodology will be applied to different vendors, models, and postprocessing methods, which have been shown to produce over-ranging lengths differing by 125%.
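The dose bookkeeping behind these figures is simple arithmetic: DLP is CTDIvol times irradiated length, and over-ranging adds extra length beyond the planned volume. A minimal sketch, using invented round numbers consistent with the ~60 mGy-cm and ~10% figures quoted above:

```python
def dlp(ctdi_vol_mgy, length_cm):
    """Dose-length product in mGy*cm."""
    return ctdi_vol_mgy * length_cm

ctdi = 15.0        # mGy, hypothetical CTDIvol for an abdominal protocol
planned = 36.0     # cm, clinical volume of interest
overrange = 4.0    # cm, extra irradiated length (both ends combined)

total = dlp(ctdi, planned + overrange)            # 600 mGy*cm
overrange_share = dlp(ctdi, overrange) / total    # 0.10
```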

  20. Influence of Gender and Age on Upper-Airway Length During Development

    PubMed Central

    Ronen, Ohad; Malhotra, Atul; Pillar, Giora

    2008-01-01

    OBJECTIVE Obstructive sleep apnea has a strong male predominance in adults but not in children. The collapsible portion of the upper airway is longer in adult men than in women (a property that may increase vulnerability to collapse during sleep). We sought to test the hypothesis that in prepubertal children, pharyngeal airway length is equal between genders, but after puberty boys have a longer upper airway than girls, thus potentially contributing to this change in apnea propensity. METHODS Sixty-nine healthy boys and girls who had undergone computed tomography scans of their neck for other reasons were selected from the computed tomography archives of Rambam and Carmel hospitals. The airway length was measured in the midsagittal plane and defined as the length between the lower part of the posterior hard palate and the upper limit of the hyoid bone. Airway length and normalized airway length/body height were compared between the genders in prepubertal (4- to 10-year-old) and postpubertal (14- to 19-year-old) children. RESULTS In prepubertal children, airway length was similar between boys and girls (43.2 ± 5.9 vs 46.8 ± 7.7 mm, respectively). When normalized to body height, airway length/body height was significantly shorter in prepubertal boys than in girls (0.35 ± 0.03 vs 0.38 ± 0.04 mm/cm). In contrast, postpubertal boys had longer upper airways (66.5 ± 9.2 vs 52.2 ± 7.0 mm) and normalized airway length/body height (0.38 ± 0.05 vs 0.33 ± 0.05 mm/cm) than girls. CONCLUSIONS Although boys have equal or shorter airway length compared with girls among prepubertal children, after puberty, airway length and airway length normalized for body height are significantly greater in boys than in girls. These data suggest that important anatomic changes at puberty occur in a gender-specific manner, which may be important in explaining the male predisposition to pharyngeal collapse in adults. PMID:17908723

  1. Exposure of xenopus laevis tadpoles to cadmium reveals concentration-dependent bimodal effects on growth and monotonic effects on development and thyroid gland activity

    USGS Publications Warehouse

    Sharma, Bibek; Patino, R.

    2008-01-01

    Xenopus laevis were exposed to 0-855 µg cadmium (Cd)/l (measured concentrations) in FETAX medium from fertilization to 47 days postfertilization. Measurements included embryonic survival and, at 47 days, tadpole survival, snout-vent length, tail length, total length, hindlimb length, weight, Nieuwkoop-Faber (NF) stage of development, initiation of metamorphic climax (≥ NF 58), and thyroid follicle cell height. Embryonic and larval survival were unaffected by Cd. Relative to control tadpoles, reduced tail and total length were observed at 0.1-8 and at 855 µg Cd/l; and reduced snout-vent length, hindlimb length, and weight were observed at 0.1-1 and at 855 µg Cd/l. Mean stage of development and rate of initiation of climax were unaffected by Cd at 0-84 µg/l; however, none of the tadpoles exposed to 855 µg Cd/l progressed beyond mid-premetamorphosis (NF 51). Thyroid glands with fully formed follicles were observed in all tadpoles ≥ NF 49 examined. Follicle cell height was unaffected by Cd at 0-84 µg/l but it was reduced at 855 µg/l; in the latter, cell height was reduced even when compared with NF 49-51 tadpoles pooled from the 0 to 84 µg Cd/l groups. In conclusion, (1) Cd affected tadpole growth in a bimodal pattern with the first and second inhibitory modes at concentrations below and above 84 µg Cd/l, respectively; (2) exposure to high Cd concentrations (855 µg/l) reduced thyroid activity and arrested tadpole development at mid-premetamorphosis; and (3) unlike its effect on growth, Cd inhibited tadpole development and thyroid function in a seemingly monotonic pattern.

  2. Computationally efficient models of neuromuscular recruitment and mechanics.

    PubMed

    Song, D; Raphael, G; Lan, N; Loeb, G E

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.
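The continuous recruitment idea can be sketched as a smooth saturation of each lumped fiber type with increasing neural drive, instead of summing many discretely recruited units. This is a toy illustration, not the VM 4.0 equations; the thresholds and slow/fast force split are invented.

```python
def recruited_fraction(u, u_thresh):
    """Fraction of a lumped fiber type recruited at normalized drive u in [0, 1]."""
    if u <= u_thresh:
        return 0.0
    return min((u - u_thresh) / (1.0 - u_thresh), 1.0)

def total_activation(u, f_slow=0.5, th_slow=0.0, th_fast=0.4):
    # size principle: slow units recruit first, fast units join above th_fast
    return (f_slow * recruited_fraction(u, th_slow)
            + (1.0 - f_slow) * recruited_fraction(u, th_fast))
```

Because each lumped type contributes a continuous fraction, force varies smoothly through recruitment and derecruitment, with two state-light functions replacing an explicit motor-unit pool.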

  3. Bending of an Infinite beam on a base with two parameters in the absence of a part of the base

    NASA Astrophysics Data System (ADS)

    Aleksandrovskiy, Maxim; Zaharova, Lidiya

    2018-03-01

    Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods are becoming topical. The rigor of analytical methods allows a more detailed and accurate characterization of structural behavior, which affects the reliability of objects and can reduce their cost. In this article, a two-parameter model is used as the computational model of the base; it can effectively take into account the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter base with a voided section. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are solved analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded with a concentrated force applied at the origin, with a fixed value of the length of the voided section. The results obtained are analyzed for various values of the coefficient that takes into account cohesion of the ground.
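As a reference point for the two-parameter problem above, the classical one-parameter (Winkler) base admits Hetényi's closed-form deflection for an infinite beam under a point load, w(x) = (Pβ/2k) e^(-β|x|)(cos β|x| + sin β|x|) with β = (k/4EI)^(1/4). The sketch below states that formula and numerically checks that it satisfies EI w'''' + k w = 0 away from the load; it is a baseline illustration, not the paper's two-parameter solution, which adds a shear term.

```python
import math

def winkler_deflection(x, P, EI, k):
    """Hetenyi's closed-form deflection of an infinite beam on a
    one-parameter (Winkler) base under a point load P at x = 0."""
    b = (k / (4.0 * EI)) ** 0.25
    xa = abs(x)
    return (P * b / (2.0 * k)) * math.exp(-b * xa) * (math.cos(b * xa) + math.sin(b * xa))

# finite-difference check of the governing equation EI w'''' + k w = 0 for x > 0
P, EI, k, h, x = 1.0, 1.0, 1.0, 1e-2, 1.0
w = lambda t: winkler_deflection(t, P, EI, k)
w4 = (w(x - 2*h) - 4*w(x - h) + 6*w(x) - 4*w(x + h) + w(x + 2*h)) / h**4
residual = EI * w4 + k * w(x)
```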

  4. Computationally efficient models of neuromuscular recruitment and mechanics

    NASA Astrophysics Data System (ADS)

    Song, D.; Raphael, G.; Lan, N.; Loeb, G. E.

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.

  5. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2017-02-01

    In real life, multi-objective engineering design problems are tough and time-consuming optimization problems because of their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm generates OGRs more efficiently than the original multi-objective Bat algorithm and the other existing algorithms, with higher convergence and success rates. For rulers up to 20 marks, the efficiency improvement of the proposed PHMOBA in terms of ruler length and total optical channel bandwidth (TBW) is 100 %, versus 85 % for the original MOBA. Finally, implications for further research are discussed.
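A Golomb ruler is a set of integer marks whose pairwise differences are all distinct; an OGR is the shortest such ruler for a given number of marks. For tiny orders this can be found by brute force, as sketched below; the metaheuristics in the paper matter precisely because this exhaustive search explodes combinatorially with the number of marks.

```python
from itertools import combinations

def is_golomb(marks):
    """True if all pairwise differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def shortest_ruler(n_marks, max_len=30):
    """Exhaustive search for the shortest Golomb ruler with n_marks marks."""
    for length in range(n_marks - 1, max_len + 1):
        for inner in combinations(range(1, length), n_marks - 2):
            marks = (0,) + inner + (length,)
            if is_golomb(marks):
                return marks
    return None
```

For 4 marks the search returns the known OGR {0, 1, 4, 6} of length 6; in the WDM application, the marks serve directly as channel positions whose pairwise spacings never coincide, suppressing FWM crosstalk.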

  6. Isolated effect of geometry on mitral valve function for in silico model development.

    PubMed

    Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj

    2015-01-01

    Computational models for the heart's mitral valve (MV) exhibit several uncertainties that may be reduced by further developing these models using ground-truth data-sets. This study generated a ground-truth data-set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle (PM) displacement and asymmetric PM displacement on leaflet coaptation, mitral regurgitation (MR) and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile haemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction and anterior leaflet strain in the radial and circumferential directions were successfully quantified at increasing levels of geometric distortion. From these data, increase in the levels of isolated PM displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase) and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric PM displacement. Asymmetric PM displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data-set may be used to parametrically evaluate and develop modelling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools.

  7. Orbitals, Occupation Numbers, and Band Structure of Short One-Dimensional Cadmium Telluride Polymers.

    PubMed

    Valentine, Andrew J S; Talapin, Dmitri V; Mazziotti, David A

    2017-04-27

    Recent work found that soldering CdTe quantum dots together with a molecular CdTe polymer yielded field-effect transistors with much greater electron mobility than quantum dots alone. We present a computational study of the CdTe polymer using the active-space variational two-electron reduced density matrix (2-RDM) method. While analogous complete active-space self-consistent field (CASSCF) methods scale exponentially with the number of active orbitals, the active-space variational 2-RDM method exhibits polynomial scaling. A CASSCF calculation using the (48o,64e) active space studied in this paper requires 10²⁴ determinants and is therefore intractable, while the variational 2-RDM method in the same active space requires only 2.1 × 10⁷ variables. Natural orbitals, natural-orbital occupations, charge gaps, and Mulliken charges are reported as a function of polymer length. The polymer, we find, is strongly correlated, despite possessing a simple sp³-hybridized bonding scheme. Calculations reveal the formation of a nearly saturated valence band as the polymer grows and a charge gap that decreases sharply with polymer length.
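The 10²⁴-determinant figure follows from counting: 64 electrons in 48 spatial orbitals means 32 α and 32 β electrons, giving C(48,32)² determinants (assuming that even spin partition, which the abstract does not state explicitly). A quick check:

```python
import math

# determinant count for a (48o, 64e) active space: alpha choices x beta choices
n_det = math.comb(48, 32) ** 2

# compare with the ~2.1e7 variables of the variational 2-RDM method
ratio = n_det / 2.1e7
```

The count lands in the low 10²⁴ range, roughly seventeen orders of magnitude beyond the 2-RDM variable count, which is the practical content of the exponential-vs-polynomial scaling claim.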

  8. A mathematical model for the movement of food bolus of varying viscosities through the esophagus

    NASA Astrophysics Data System (ADS)

    Tripathi, Dharmendra

    2011-09-01

    This mathematical model is designed to study the influence of viscosity on swallowing of a food bolus through the esophagus. The food bolus is considered a viscous fluid with variable viscosity. The geometry of the esophagus is assumed to be a finite-length channel, and flow is induced by a peristaltic wave along the length of the channel walls. The expressions for axial velocity, transverse velocity, pressure gradient, volume flow rate and stream function are obtained under the assumptions of long wavelength and low Reynolds number. The impacts of the viscosity parameter on pressure distribution, local wall shear stress, mechanical efficiency and trapping are discussed numerically with the help of computational results. On the basis of the presented study, it is revealed that swallowing low-viscosity fluids through the esophagus requires less effort than swallowing fluids of higher viscosity. This result agrees with the experimental results of Raut et al. [1], Dodds [2] and Ren et al. [3]. It is further concluded that pumping efficiency increases while the size of the trapped bolus decreases when the viscosity of the fluid is high.

  9. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
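The "median problem" mentioned above assigns an internal node the sequence minimizing total distance to its three neighbors. Under plain Hamming distance on equal-length sequences the exact median is simply the per-position majority; the sketch below shows that toy analogue only. The paper's setting uses edit distance (with indels), where the three-sequence median is much harder and is what direct optimization approximates.

```python
from collections import Counter

def hamming_median(s1, s2, s3):
    """Exact median of three equal-length sequences under Hamming distance:
    the majority character at each position (any one of them on a 3-way tie)."""
    assert len(s1) == len(s2) == len(s3)
    out = []
    for a, b, c in zip(s1, s2, s3):
        ch, _ = Counter((a, b, c)).most_common(1)[0]
        out.append(ch)
    return "".join(out)
```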

  10. Electron temperature critical gradient and transport stiffness in DIII-D

    DOE PAGES

    Smith, Sterling P.; Petty, Clinton C.; White, Anne E.; ...

    2015-07-06

    The electron energy flux has been probed as a function of electron temperature gradient on the DIII-D tokamak, in a continuing effort to validate turbulent transport models. In the scan of gradient, a critical electron temperature gradient has been found in the electron heat fluxes and stiffness at various radii in L-mode plasmas. The TGLF reduced turbulent transport model [G.M. Staebler et al., Phys. Plasmas 14, 055909 (2007)] and the full gyrokinetic GYRO model [J. Candy and R.E. Waltz, J. Comput. Phys. 186, 545 (2003)] recover the general trend of increasing electron energy flux with increasing electron temperature gradient scale length, but they do not predict the absolute level of transport at all radii and gradients. Comparing the experimental observations of incremental (heat pulse) diffusivity and stiffness to the models' predictions reveals that TGLF reproduces the trends of increasing diffusivity and stiffness with increasing electron temperature gradient scale length, with critical gradient behavior. Furthermore, the critical gradient of TGLF is found to have a dependence on q95, contrary to the independence of the experimental critical gradient from q95.

  11. Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.

    PubMed

    Chevallier, Maguelonne; Krauth, Werner

    2007-11-01

    We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
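The Landsberg recursion builds the N-boson canonical partition function from single-particle partition functions z_k evaluated at inverse temperature kβ, Z_N = (1/N) Σ_{k=1}^{N} z_k Z_{N-k}, and the cycle probabilities fall out of the same terms: π_k = z_k Z_{N-k} / (N Z_N). A sketch with an arbitrary toy spectrum (the equally spaced levels below are illustrative, not a physical box):

```python
import math

def cycle_probabilities(N, beta, levels):
    """Landsberg recursion: Z[n] = (1/n) * sum_k z_k Z[n-k],
    cycle probabilities pi_k = z_k Z[N-k] / (N Z[N])."""
    # z_k: single-particle partition function at inverse temperature k*beta
    z = [0.0] + [sum(math.exp(-k * beta * e) for e in levels) for k in range(1, N + 1)]
    Z = [1.0]  # Z_0 = 1
    for n in range(1, N + 1):
        Z.append(sum(z[k] * Z[n - k] for k in range(1, n + 1)) / n)
    return [z[k] * Z[N - k] / (N * Z[N]) for k in range(1, N + 1)]

pi = cycle_probabilities(10, 1.0, [0.1 * i for i in range(20)])
```

By construction the π_k sum to 1, which is the normalization the recursion guarantees.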

  12. Free-Space Quantum Signatures Using Heterodyne Measurements.

    PubMed

    Croal, Callum; Peuntinger, Christian; Heim, Bettina; Khan, Imran; Marquardt, Christoph; Leuchs, Gerd; Wallden, Petros; Andersson, Erika; Korolkova, Natalia

    2016-09-02

    Digital signatures guarantee the authorship of electronic communications. Currently used "classical" signature schemes rely on unproven computational assumptions for security, while quantum signatures rely only on the laws of quantum mechanics to sign a classical message. Previous quantum signature schemes have used unambiguous quantum measurements. Such measurements, however, sometimes give no result, reducing the efficiency of the protocol. Here, we instead use heterodyne detection, which always gives a result, although there is always some uncertainty. We experimentally demonstrate feasibility in a real environment by distributing signature states through a noisy 1.6 km free-space channel. Our results show that continuous-variable heterodyne detection improves the signature rate for this type of scheme and therefore represents an interesting direction in the search for practical quantum signature schemes. For transmission values ranging from 100% to 10%, but otherwise assuming an ideal implementation with no other imperfections, the signature length is shorter by a factor of 2 to 10. As compared with previous relevant experimental realizations, the signature length in this implementation is several orders of magnitude shorter.

  13. USM3D Analysis of Low Boom Configuration

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Campbell, Richard L.; Nayani, Sudheer N.

    2011-01-01

    In the past few years considerable improvement was made in NASA's in-house boom prediction capability. As part of this improved capability, the USM3D Navier-Stokes flow solver, when combined with a suitable unstructured grid, went from accurately predicting boom signatures at 1 body length to 10 body lengths. Since that time, the research emphasis has shifted from analysis to the design of supersonic configurations with boom signature mitigation. In order to design an aircraft, the techniques for accurately predicting boom and drag need to be determined. This paper compares CFD results with the wind tunnel experimental results conducted on a Gulfstream reduced boom and drag configuration. Two different wind-tunnel models were designed and tested for drag and boom data. The goal of this study was to assess USM3D capability for predicting both boom and drag characteristics. Overall, USM3D coupled with a grid that was sheared and stretched was able to reasonably predict boom signature. The computational drag polar matched the experimental results for a lift coefficient above 0.1 despite some mismatch in the predicted lift-curve slope.

  14. Determining physiological cross-sectional area of extensor carpi radialis longus and brevis as a whole and by regions using 3D computer muscle models created from digitized fiber bundle data.

    PubMed

    Ravichandiran, Kajeandra; Ravichandiran, Mayoorendra; Oliver, Michele L; Singh, Karan S; McKee, Nancy H; Agur, Anne M R

    2009-09-01

    Architectural parameters and physiological cross-sectional area (PCSA) are important determinants of muscle function. Extensor carpi radialis longus (ECRL) and brevis (ECRB) are used in muscle transfers; however, their regional architectural differences have not been investigated. The aim of this study is to develop computational algorithms to quantify and compare architectural parameters (fiber bundle length, pennation angle, and volume) and PCSA of ECRL and ECRB. Fiber bundles distributed throughout the volume of ECRL (75 ± 20) and ECRB (110 ± 30) were digitized in eight formalin-embalmed cadaveric specimens. The digitized data were reconstructed in Autodesk Maya with computational algorithms implemented in Python. The mean PCSA and fiber bundle length were significantly different between ECRL and ECRB (p ≤ 0.05). Superficial ECRL had significantly longer fiber bundle length than the deep region, whereas the PCSA of superficial ECRB was significantly larger than that of the deep region. The regional quantification of architectural parameters and PCSA provides a framework for the exploration of partial tendon transfers of ECRL and ECRB.
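PCSA is conventionally computed from muscle volume, mean fiber bundle length, and pennation angle via PCSA = V · cos θ / ℓ_f. A one-line sketch of that standard formula (the input values below are invented for illustration, not from the study):

```python
import math

def pcsa(volume_cm3, fiber_length_cm, pennation_deg):
    """Physiological cross-sectional area: PCSA = V * cos(theta) / l_f, in cm^2."""
    return volume_cm3 * math.cos(math.radians(pennation_deg)) / fiber_length_cm
```

Longer fibers at the same volume mean a smaller PCSA (and so less maximal force but more excursion), which is why the superficial/deep regional differences reported above matter for transfer planning.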

  15. Real-time line matching from stereo images using a nonparametric transform of spatial relations and texture information

    NASA Astrophysics Data System (ADS)

    Park, Jonghee; Yoon, Kuk-Jin

    2015-02-01

    We propose a real-time line matching method for stereo systems. To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.
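The binary-stream matching cost described above resembles a census-style nonparametric transform: encode each pixel's relation to its neighborhood as bits, then score candidates by Hamming distance over the overlap. The sketch below is a generic census transform, not the authors' exact descriptor of line-neighborhood spatial relations.

```python
import numpy as np

def census_bits(patch):
    """One bit per pixel: does the pixel exceed the patch centre?"""
    c = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch > c).astype(np.uint8).ravel()

def hamming_cost(bits_a, bits_b):
    """Matching cost: number of differing bits (lower is better)."""
    return int(np.count_nonzero(bits_a != bits_b))
```

Because the descriptor is a bit string, the cost reduces to XOR-and-popcount style operations, which is what makes real-time matching feasible.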

  16. A modeling approach to predict acoustic nonlinear field generated by a transmitter with an aluminum lens.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Chen, Tao; Li, Faqi; Zhang, Dong

    2011-09-01

    In this work, the authors propose a modeling approach to compute the nonlinear acoustic field generated by a flat piston transmitter with an attached aluminum lens. In this approach, the geometrical parameters (radius and focal length) of a virtual source are initially determined by Snell's refraction law and then adjusted based on the Rayleigh integral result in the linear case. This virtual source is then used with the nonlinear spheroidal beam equation (SBE) model to predict the nonlinear acoustic field in the focal region. To examine the validity of this approach, the calculated nonlinear result is compared with those from the Westervelt and Khokhlov-Zabolotskaya-Kuznetsov (KZK) equations for a focal intensity of 7 kW/cm². Results indicate that this approach accurately describes the nonlinear acoustic field in the focal region while requiring significantly less computation time than the Westervelt equation. It might also be applicable to the widely used concave focused transmitter with a large aperture angle.

  17. Efficient Optimization of Low-Thrust Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul

    2007-01-01

    A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. These algorithms are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
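The Pareto-optimality definition in the parenthesis can be made concrete with a non-dominated filter over (flight time, fuel) pairs: keep every candidate that no other candidate beats or ties on both objectives. This is a generic filter sketch, not the paper's GA/SA machinery, and the candidate values are invented.

```python
def pareto_front(candidates):
    """candidates: list of distinct (time, fuel) pairs, both to be minimized.
    A candidate survives unless some other candidate is at least as good
    on both objectives."""
    return [s for s in candidates
            if not any(o != s and o[0] <= s[0] and o[1] <= s[1] for o in candidates)]

front = pareto_front([(10, 5), (8, 7), (9, 9), (12, 4)])
```

Here (9, 9) is dominated by (8, 7); the three survivors trade flight time against fuel, which is exactly the set a multi-objective optimizer tries to approximate.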

  18. Experimental and Computational Sonic Boom Assessment of Lockheed-Martin N+2 Low Boom Models

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Durston, Donald A.; Elmiligui, Alaa A.; Walker, Eric L.; Carter, Melissa B.

    2015-01-01

    Flight at speeds greater than the speed of sound is not permitted over land, primarily because of the noise and structural damage caused by sonic boom pressure waves of supersonic aircraft. Mitigation of sonic boom is a key focus area of the High Speed Project under NASA's Fundamental Aeronautics Program. The project is focusing on technologies to enable future civilian aircraft to fly efficiently with reduced sonic boom, engine and aircraft noise, and emissions. A major objective of the project is to improve both computational and experimental capabilities for design of low-boom, high-efficiency aircraft. NASA and industry partners are developing improved wind tunnel testing techniques and new pressure instrumentation to measure the weak sonic boom pressure signatures of modern vehicle concepts. In parallel, computational methods are being developed to provide rapid design and analysis of supersonic aircraft with improved meshing techniques that provide efficient, robust, and accurate on- and off-body pressures at several body lengths from vehicles with very low sonic boom overpressures. The maturity of these critical parallel efforts is necessary before low-boom flight can be demonstrated and commercial supersonic flight can be realized.

  19. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  20. Large-scale detection of repetitions

    PubMed Central

    Smyth, W. F.

    2014-01-01

    Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
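    For contrast with the linear-time runs computation, a naive detector of repetitions of the form ww (squares) is easy to state but far more expensive (a sketch for illustration only; this is not the paper's algorithm):

```python
def squares(x):
    """Brute-force search for squares ww in x, reported as (start, period)
    pairs. O(n^3) worst case -- exactly the cost that runs-based,
    suffix-array-backed algorithms are designed to avoid."""
    n = len(x)
    found = set()
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):  # candidate period length
            if x[i:i + p] == x[i + p:i + 2 * p]:
                found.add((i, p))
    return found

# "CGACGA" contains the square (CGA)(CGA) at position 0 with period 3
hits = squares("CGACGA")  # {(0, 3)}
```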

  1. Upper Grades Ideas.

    ERIC Educational Resources Information Center

    Thornburg, David; Beane, Pam

    1983-01-01

    Presents programming ideas using LOGO, an activity for converting a flowchart into a computer program, and a Pascal program for generating music using paddles. Includes the article "Helping Computers Adapt to Kids" by Philip Nothnagle, with a program for estimating the length of lines. (JN)

  2. Real-time label-free quantitative fluorescence microscopy-based detection of ATP using a tunable fluorescent nano-aptasensor platform

    NASA Astrophysics Data System (ADS)

    Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung

    2015-11-01

    Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr05839b

  3. Contact Graph Routing

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm.
The scalability of CGR to very large networks remains a research topic. The information carried by CGR contact plan messages is useful not only for dynamic route computation, but also for the implementation of rate control, congestion forecasting, transmission episode initiation and termination, timeout interval computation, and retransmission timer suspension and resumption.
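    The core idea of inferring routes from a shared contact plan can be sketched as an earliest-arrival search over scheduled contacts. This is a simplified illustration: the contact-tuple format is an assumption of the sketch, transmission is treated as instantaneous, and real CGR additionally models one-way light time, data rates, and queue backlogs:

```python
import heapq

def earliest_arrival(contacts, source, dest, t0):
    """Dijkstra-style search over a contact plan. Each contact is a tuple
    (from_node, to_node, start_time, end_time); a bundle arriving at a node
    at time t can use a contact if the window has not yet closed, waiting
    if necessary for it to open."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for frm, to, start, end in contacts:
            if frm != node or end < t:
                continue  # wrong node, or contact closes before we arrive
            arrive = max(t, start)  # wait for the window to open
            if arrive < best.get(to, float("inf")):
                best[to] = arrive
                heapq.heappush(heap, (arrive, to))
    return None  # destination unreachable under this plan

# Hypothetical plan: A-B open over [10, 20], B-C open over [15, 30]
plan = [("A", "B", 10, 20), ("B", "C", 15, 30)]
t = earliest_arrival(plan, "A", "C", 0)  # reach B at 10, C at 15
```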

  4. A Scheme for Text Analysis Using Fortran.

    ERIC Educational Resources Information Center

    Koether, Mary E.; Coke, Esther U.

    Using string-manipulation algorithms, FORTRAN computer programs were designed for analysis of written material. The programs measure length of a text and its complexity in terms of the average length of words and sentences, map the occurrences of keywords or phrases, calculate word frequency distribution and certain indicators of style. Trials of…
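    The measures described can be sketched in a few lines (a Python illustration of the same metrics, not the original FORTRAN programs; the tokenization rules are assumptions of the sketch):

```python
import re

def text_metrics(text):
    """Average word length, average sentence length (in words), and
    word-frequency distribution for a passage of text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    freq = {}
    for w in words:
        freq[w.lower()] = freq.get(w.lower(), 0) + 1
    avg_word = sum(len(w) for w in words) / len(words)
    avg_sentence = len(words) / len(sentences)
    return avg_word, avg_sentence, freq

aw, asent, freq = text_metrics("The cat sat. The dog ran away.")
# 7 words over 2 sentences -> average sentence length 3.5 words
```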

  5. Retrospective cohort study of an enhanced recovery programme in oesophageal and gastric cancer surgery

    PubMed Central

    Gatenby, PAC; Shaw, C; Hine, C; Scholtes, S; Koutra, M; Andrew, H; Hacking, M; Allum, WH

    2015-01-01

    Introduction Enhanced recovery programmes have been established in some areas of elective surgery. This study applied enhanced recovery principles to elective oesophageal and gastric cancer surgery. Methods An enhanced recovery programme for patients undergoing open oesophagogastrectomy, total and subtotal gastrectomy for oesophageal and gastric malignancy was designed. A retrospective cohort study compared length of stay on the critical care unit (CCU), total length of inpatient stay, rates of complications and in-hospital mortality prior to (35 patients) and following (27 patients) implementation. Results In the cohort study, the median total length of stay was reduced by 3 days following oesophagogastrectomy and total gastrectomy. The median length of stay on the CCU remained the same for all patients. The rates of complications and mortality were the same. Conclusions The standardised protocol reduced the median overall length of stay but did not reduce CCU stay. Enhanced recovery principles can be applied to patients undergoing major oesophagogastrectomy and total gastrectomy as long as they have minimal or reversible co-morbidity. PMID:26414360

  6. The giant protein titin regulates the length of the striated muscle thick filament.

    PubMed

    Tonino, Paola; Kiss, Balazs; Strom, Josh; Methawasin, Mei; Smith, John E; Kolb, Justin; Labeit, Siegfried; Granzier, Henk

    2017-10-19

    The contractile machinery of heart and skeletal muscles has as an essential component the thick filament, comprised of the molecular motor myosin. The thick filament is of a precisely controlled length, defining thereby the force level that muscles generate and how this force varies with muscle length. It has been speculated that the mechanism by which thick filament length is controlled involves the giant protein titin, but no conclusive support for this hypothesis exists. Here we show that in a mouse model in which we deleted two of titin's C-zone super-repeats, thick filament length is reduced in cardiac and skeletal muscles. In addition, functional studies reveal reduced force generation and a dilated cardiomyopathy (DCM) phenotype. Thus, regulation of thick filament length depends on titin and is critical for maintaining muscle health.

  7. The Impact of Computerization on Library Support Staff: A Study of Support Staff in Academic Libraries in Wisconsin.

    ERIC Educational Resources Information Center

    Palmini, Cathleen C.

    1994-01-01

    Describes a survey of Wisconsin academic library support staff that explored the effects of computerization of libraries on work and job satisfaction. Highlights include length of employment; time spent at computer terminals; training; computer background; computers as timesavers; influence of automation on effectiveness; and job frustrations.…

  8. Software for computing plant biomass—BIOPAK users guide.

    Treesearch

    Joseph E. Means; Heather A. Hansen; Greg J. Koerper; Paul B Alaback; Mark W. Klopsch

    1994-01-01

    BIOPAK is a menu-driven package of computer programs for IBM-compatible personal computers that calculates the biomass, area, height, length, or volume of plant components (leaves, branches, stem, crown, and roots). The routines were written in FoxPro, Fortran, and C. BIOPAK was created to facilitate linking of a diverse array of vegetation datasets with the...

  9. Useful integral function and its application in thermal radiation calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, S.L.; Rhee, K.T.

    1983-07-01

    In applying the Planck formula for computing the energy radiated from an isothermal source, the emissivity of the source must be found. This emissivity is expressed in terms of its spectral emissivity. This spectral emissivity of an isothermal volume with a given optical length containing radiating gases and/or soot, is computed through a relation (Sparrow and Cess, 1978) that contains the optical length and the spectral volume absorption coefficient. An exact solution is then offered to the equation that results from introducing the equation for the spectral emissivity into the equation for the emissivity. The function obtained is shown to be useful in computing the spectral emissivity of an isothermal volume containing either soot or gaseous species, or both. Examples are presented.
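    Written out, the relation referenced from Sparrow and Cess has the familiar exponential-attenuation form, and the total emissivity follows by weighting it with the Planck blackbody distribution. The notation here is an assumption of this sketch (κ_λ the spectral volume absorption coefficient, L the optical length, E_{bλ} the blackbody spectral emissive power):

```latex
\epsilon_\lambda = 1 - e^{-\kappa_\lambda L},
\qquad
\epsilon = \frac{\int_0^\infty \epsilon_\lambda \, E_{b\lambda}(T)\, d\lambda}{\sigma T^4}
```

    The integral in the denominator-normalized form is what makes a closed-form solution valuable: it avoids numerical quadrature over the full spectrum.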

  10. Comparison of rate of surgical wound infection, length of hospital stay and patient convenience in complicated appendicitis between primary closure and delayed primary closure.

    PubMed

    Khan, Khizar Ishtiaque; Mahmood, Shahid; Akmal, Muhammad; Waqas, Ahmed

    2012-06-01

    To compare the difference in the rate of surgical wound infection, patient's convenience and length of hospital stay between Primary Closure and Delayed Primary Closure in cases of complicated appendicitis in adults. This randomised controlled trial was conducted at the Combined Military Hospital, Kharian and Malir from June 5, 2006, to September 10, 2009. Patients aged 15 years or older of both genders who underwent appendectomy through a grid iron or Lanz incision and had complicated appendicitis were included. The 100 patients who were included in the study out of the initial size of 393, were randomised into two equal groups of 50 each (Group A: Primary Closure; Group B: Delayed Primary Closure) using a computer-generated table. All the surgeries were done by the same surgeon and the operative steps and antibiotic coverage were standardised. The rate of surgical wound infection, patient's convenience (on visual analogue scale in mm) and the length of hospital stay were recorded. Data was analysed using SPSS version 11, and the p value was calculated. Demographic data, comorbids and medication of both the groups were comparable. There was no significant difference in rate of surgical wound infection (p > 0.05). The differences in patient's convenience and length of hospital stay were significant (p < 0.05), showing superiority of Primary Closure over Delayed Primary Closure with no added morbidity/mortality. Primary Closure in complicated appendicitis not only reduces the cost of treatment, but is also more convenient and satisfying for the patients, with no added risk of surgical wound infection.

  11. Heat Transfer Computations of Internal Duct Flows With Combined Hydraulic and Thermal Developing Length

    NASA Technical Reports Server (NTRS)

    Wang, C. R.; Towne, C. E.; Hippensteele, S. A.; Poinsatte, P. E.

    1997-01-01

    This study investigated the Navier-Stokes computations of the surface heat transfer coefficients of a transition duct flow. A transition duct from an axisymmetric cross section to a non-axisymmetric cross section, is usually used to connect the turbine exit to the nozzle. As the gas turbine inlet temperature increases, the transition duct is subjected to the high temperature at the gas turbine exit. The transition duct flow has combined development of hydraulic and thermal entry length. The design of the transition duct required accurate surface heat transfer coefficients. The Navier-Stokes computational method could be used to predict the surface heat transfer coefficients of a transition duct flow. The Proteus three-dimensional Navier-Stokes numerical computational code was used in this study. The code was first studied for the computations of the turbulent developing flow properties within a circular duct and a square duct. The code was then used to compute the turbulent flow properties of a transition duct flow. The computational results of the surface pressure, the skin friction factor, and the surface heat transfer coefficient were described and compared with their values obtained from theoretical analyses or experiments. The comparison showed that the Navier-Stokes computation could predict approximately the surface heat transfer coefficients of a transition duct flow.

  12. Impact of a hospitalist system on length of stay and cost for children with common conditions.

    PubMed

    Srivastava, Rajendu; Landrigan, Christopher P; Ross-Degnan, Dennis; Soumerai, Stephen B; Homer, Charles J; Goldmann, Donald A; Muret-Wagstaff, Sharon

    2007-08-01

    This study examined mechanisms of efficiency in a managed care hospitalist system on length of stay and total costs for common pediatric conditions. We conducted a retrospective cohort study (October 1993 to July 1998) of patients in a not-for-profit staff model (HMO 1) and a non-staff-model (HMO 2) managed care organization at a freestanding children's hospital. HMO 1 introduced a hospitalist system for patients in October 1996. Patients were included if they had 1 of 3 common diagnoses: asthma, dehydration, or viral illness. Linear regression models examining length-of-stay-specific costs for prehospitalist and posthospitalist systems were built. Distribution of length of stay for each diagnosis before and after the system change in both study groups was calculated. Interrupted time series analysis tested whether changes in the trends of length of stay and total costs occurred after implementation of the hospitalist system by HMO 1 (with HMO 2 as the comparison group) for all 3 diagnoses combined. A total of 1970 patients with 1 of the 3 study conditions were cared for in HMO 1, and 1001 in HMO 2. After the hospitalist system was introduced in HMO 1, length of stay was reduced by 0.23 days (13%) for asthma and 0.19 days (11%) for dehydration; there was no difference for patients with viral illness. The largest relative reduction in length of stay occurred in patients with a shorter length of stay whose hospitalizations were reduced from 2 days to 1 day. This shift resulted in an average cost-per-case reduction of $105.51 (9.3%) for patients with asthma and $86.22 (7.8%) for patients with dehydration. During the same period, length of stay and total cost rose in HMO 2. Introduction of a hospitalist system in one health maintenance organization resulted in earlier discharges and reduced costs for children with asthma and dehydration compared with the other organization, with the largest savings coming from shortening some 2-day hospitalizations to 1 day.
These findings suggest that hospitalists can increase efficiency and reduce costs for children with common pediatric conditions.

  13. Programmable Pulse Generator

    NASA Technical Reports Server (NTRS)

    Rhim, W. K.; Dart, J. A.

    1982-01-01

    New pulse generator programmed to produce pulses from several ports at different pulse lengths and intervals and virtually any combination and sequence. Unit contains a 256-word-by-16-bit memory loaded with instructions either manually or by computer. Once loaded, unit operates independently of computer.

  14. Annuity-estimating program

    NASA Technical Reports Server (NTRS)

    Jillie, D. W.

    1979-01-01

    Program computes benefits and other relevant factors for Federal Civil Service employees. Computed information includes retirement annuity, survivor annuity for each retirement annuity, highest average annual consecutive 3-year salary, length of service including credit for unused sick leave, amount of deposit and redeposit plus interest.

  15. Estimating age at a specified length from the von Bertalanffy growth function

    USGS Publications Warehouse

    Ogle, Derek H.; Isermann, Daniel A.

    2017-01-01

    Estimating the time required (i.e., age) for fish in a population to reach a specific length (e.g., legal harvest length) is useful for understanding population dynamics and simulating the potential effects of length-based harvest regulations. The age at which a population reaches a specific mean length is typically estimated by fitting a von Bertalanffy growth function to length-at-age data and then rearranging the best-fit equation to solve for age at the specified length. This process precludes the use of standard frequentist methods to compute confidence intervals and compare estimates of age at the specified length among populations. We provide a parameterization of the von Bertalanffy growth function that has age at a specified length as a parameter. With this parameterization, age at a specified length is directly estimated, and standard methods can be used to construct confidence intervals and make among-group comparisons for this parameter. We demonstrate use of the new parameterization with two data sets.
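    The rearrangement described, solving the standard von Bertalanffy growth function L(t) = L∞(1 − e^(−K(t − t₀))) for the age at a specified length, can be sketched directly (the parameter values below are hypothetical, chosen only for illustration):

```python
import math

def age_at_length(L, Linf, K, t0):
    """Invert the von Bertalanffy growth function
    L(t) = Linf * (1 - exp(-K * (t - t0)))
    to obtain the age at which mean length reaches L (requires L < Linf)."""
    return t0 - math.log(1.0 - L / Linf) / K

# Hypothetical parameters: Linf = 600 mm, K = 0.3 per year, t0 = -0.5 years
t = age_at_length(300.0, 600.0, 0.3, -0.5)
```

    The parameterization proposed in the record goes one step further: it makes this age a fitted parameter of the model itself, so confidence intervals and among-group comparisons come directly from the fit rather than from post hoc rearrangement.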

  16. DTI Tractography and White Matter Fiber Tract Characteristics in Euthymic Bipolar I Patients and Healthy Control Subjects

    PubMed Central

    Irimia, Andrei; Leow, Alex D.; Bartzokis, George; Moody, Teena D.; Jennings, Robin G.; Alger, Jeffry R.; Van Horn, John Darrell; Altshuler, Lori L.

    2012-01-01

    With the introduction of diffusion tensor imaging (DTI), structural differences in white matter (WM) architecture between psychiatric populations and healthy controls can be systematically observed and measured. In particular, DTI-tractography can be used to assess WM characteristics over the entire extent of WM tracts and aggregated fiber bundles. Using 64-direction DTI scanning in 27 participants with bipolar disorder (BD) and 26 age-and-gender-matched healthy control subjects, we compared relative length, density, and fractional anisotropy (FA) of WM tracts involved in emotion regulation or theorized to be important neural components in BD neuropathology. We interactively isolated 22 known white matter tracts using region-of-interest placement (TrackVis software program) and then computed relative tract length, density, and integrity. BD subjects demonstrated significantly shorter WM tracts in the genu, body and splenium of the corpus callosum compared to healthy controls. Additionally, bipolar subjects exhibited reduced fiber density in the genu and body of the corpus callosum, and in the inferior longitudinal fasciculus bilaterally. In the left uncinate fasciculus, however, BD subjects exhibited significantly greater fiber density than healthy controls. There were no significant differences between groups in WM tract FA for those tracts that began and ended in the brain. The significance of differences in tract length and fiber density in BD is discussed. PMID:23070746

  17. Differences in stride between healthy ostriches (Struthio camelus) and those affected by tibiotarsal rotation.

    PubMed

    Cooper, R G

    2007-03-01

    Twenty healthy ostriches (ten cocks and ten hens), and twenty birds with tibiotarsal rotation (nine cocks and 11 hens) (14 months old) were isolated, hooded and weighed. A run (50 m x 2.5 m) was divided into sections marked 5 m, 10 m, 15 m and 20 m. Time taken for each bird to pass these points was recorded and speed computed. The degree of tibiotarsal rotation in the right foot was 156 ± 2.69 degrees (mean ± SEM). Comparisons between left and right foot length in healthy birds showed no significant differences. Foot length was significantly lower in tibiotarsal rotation (P = 0.03). The right foot in tibiotarsal rotation was significantly shorter than the left foot. The number of strides per each 5 m division was significantly (P < 0.05) greater in tibiotarsal rotation by comparison with healthy birds. At 20 m, healthy cocks had more strides than hens. The stride length in hens was significantly (P < 0.05) greater than cocks at 5, 10 and 15 m, respectively, but lower throughout in tibiotarsal rotation (P = 0.001). The speed of hens was significantly (P < 0.05) greater than cocks. Tibiotarsal rotation resulted in significantly (P < 0.05) reduced speeds. Hens may be able to escape danger faster than cocks. The occurrence of tibiotarsal rotation necessitates consideration of genetics, management, sex, nutrition and growth rates.

  18. Protein expression, characterization and activity comparisons of wild type and mutant DUSP5 proteins

    DOE PAGES

    Nayak, Jaladhi; Gastonguay, Adam J.; Talipov, Marat R.; ...

    2014-12-18

    Background: The mitogen-activated protein kinases (MAPKs) pathway is critical for cellular signaling, and proteins such as phosphatases that regulate this pathway are important for normal tissue development. Based on our previous work on dual specificity phosphatase-5 (DUSP5), and its role in embryonic vascular development and disease, we hypothesized that mutations in DUSP5 will affect its function. Results: In this study, we tested this hypothesis by generating full-length glutathione-S-transferase-tagged DUSP5 and serine 147 proline mutant (S147P) proteins from bacteria. Light scattering analysis, circular dichroism, enzymatic assays and molecular modeling approaches have been performed to extensively characterize the protein form and function. We demonstrate that both proteins are active and, interestingly, the S147P protein is hypoactive as compared to the DUSP5 WT protein in two distinct biochemical substrate assays. Furthermore, due to the novel positioning of the S147P mutation, we utilize computational modeling to reconstruct full-length DUSP5 and S147P to predict a possible mechanism for the reduced activity of S147P. Conclusion: Taken together, this is the first evidence of the generation and characterization of an active, full-length, mutant DUSP5 protein which will facilitate future structure-function and drug development-based studies.

  19. Synchrotron Microtomographic Quantification of Geometrical Soil Pore Characteristics Affected by Compaction

    NASA Astrophysics Data System (ADS)

    Udawatta, Ranjith; Gantzer, Clark; Anderson, Stephen; Assouline, Shmuel

    2015-04-01

    Soil compaction degrades soil structure and affects water, heat, and gas exchange as well as root penetration and crop production. The objective of this study was to use X-ray computed microtomography (CMT) techniques to compare differences in geometrical soil pore parameters as influenced by compaction of two different aggregate size classes. Sieved (diam. < 2 mm and < 0.5 mm) and repacked (1.51 and 1.72 Mg m-3) Hamra soil cores of 5- by 5-mm (average porosities were 0.44 and 0.35) were imaged at 9.6-micrometer resolution at the Argonne Advanced Photon Source (synchrotron facility) using X-ray computed microtomography. Images of 58.9 mm3 volume were analyzed using 3-Dimensional Medial Axis (3DMA) software. Geometrical characteristics of the spatial distributions of pore structures (pore radii, volume, connectivity, path length, and tortuosity) were numerically investigated. Results show that the coordination number (CN) distribution and path length (PL) measured from the medial axis were reasonably fit by exponential relationships P(CN) = 10^(-CN/Co) and P(PL) = 10^(-PL/PLo), respectively, where Co and PLo are the corresponding characteristic constants. Compaction reduced porosity, average pore size, number of pores, and characteristic constants. The average pore radii (64 and 61 μm; p < 0.04), largest pore volume (1.6 and 0.6 mm3; p = 0.06), number of pores (55 and 50; p = 0.09), characteristic coordination number (6.3 and 6.0; p = 0.09), and characteristic path length number (116 and 105; p = 0.001) were significantly greater in the low density than the high density treatment. Aggregate size also influenced measured geometrical pore parameters. This analytical technique provides a tool for assessing changes in soil pores that affect hydraulic properties and thereby provides information to assist in assessment of soil management systems.
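    Fitting a characteristic constant of the form P(x) = 10^(-x/Co) reduces to a through-the-origin least-squares regression on a log scale (a sketch with synthetic data; this is not necessarily the study's own fitting procedure):

```python
import math

def characteristic_constant(values, probs):
    """Least-squares estimate of Co in P(x) = 10**(-x / Co), fit as the
    through-the-origin regression log10 P = -(1/Co) * x."""
    num = sum(x * math.log10(p) for x, p in zip(values, probs))
    den = sum(x * x for x in values)
    return -den / num

# Synthetic distribution generated with Co = 6.0 (hypothetical values)
cn = [2, 4, 6, 8]
p = [10 ** (-x / 6.0) for x in cn]
c0 = characteristic_constant(cn, p)  # recovers 6.0 on noiseless data
```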

  20. Synchrotron microtomographic quantification of geometrical soil pore characteristics affected by compaction

    NASA Astrophysics Data System (ADS)

    Udawatta, R. P.; Gantzer, C. J.; Anderson, S. H.; Assouline, S.

    2015-07-01

    Soil compaction degrades soil structure and affects water, heat, and gas exchange as well as root penetration and crop production. The objective of this study was to use X-ray computed microtomography (CMT) techniques to compare differences in geometrical soil pore parameters as influenced by compaction of two different aggregate size classes. Sieved (diam. < 2 mm and < 0.5 mm) and repacked (1.51 and 1.72 Mg m-3) Hamra soil cores of 5- by 5 mm (average porosities were 0.44 and 0.35) were imaged at 9.6-micrometer resolution at the Argonne Advanced Photon Source (synchrotron facility) using X-ray computed microtomography. Images of 58.9 mm3 volume were analyzed using 3-Dimensional Medial Axis (3DMA) software. Geometrical characteristics of the spatial distributions of pore structures (pore radii, volume, connectivity, path length, and tortuosity) were numerically investigated. Results show that the coordination number (CN) distribution and path length (PL) measured from the medial axis were reasonably fit by exponential relationships P(CN) = 10^(-CN/Co) and P(PL) = 10^(-PL/PLo), respectively, where Co and PLo are the corresponding characteristic constants. Compaction reduced porosity, average pore size, number of pores, and characteristic constants. The average pore radii (63.7 and 61 μm; p < 0.04), largest pore volume (1.58 and 0.58 mm3; p = 0.06), number of pores (55 and 50; p = 0.09), characteristic coordination number (6.32 and 5.94; p = 0.09), and characteristic path length number (116 and 105; p = 0.001) were significantly greater in the low density than the high density treatment. Aggregate size also influenced measured geometrical pore parameters. This analytical technique provides a tool for assessing changes in soil pores that affect hydraulic properties and thereby provides information to assist in assessment of soil management systems.

  1. Acoustic Signature from Flames as a Combustion Diagnostic Tool

    DTIC Science & Technology

    1983-11-01

    empirical visual flame length had to be input to the computer for the inversion method to give good results. That is, if the experiment and inversion...method were asked to yield the flame length, poor results were obtained. Since this was part of the information sought for practical application of the...to small experimental uncertainty. The method gave reasonably good results for the open flame but substantial input (the flame length) had to be

  2. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC-NASTRAN, Version 67.5.

  3. Does aging with a cortical lesion increase fall-risk: Examining effect of age versus stroke on intensity modulation of reactive balance responses from slip-like perturbations.

    PubMed

    Patel, Prakruti J; Bhatt, Tanvi

    2016-10-01

    We examined whether aging, with and without a cerebral lesion such as stroke, affects modulation of the reactive balance response for recovery from increasing intensities of sudden slip-like stance perturbations. Ten young adults, age-matched older adults, and older chronic stroke survivors were exposed to three different levels of slip-like perturbations in stance: level I (7.75 m/s²), level II (12.00 m/s²), and level III (16.75 m/s²). The center of mass (COM) state stability was computed as the shortest distance of the instantaneous COM position and velocity relative to the base of support (BOS) from a theoretical threshold for backward loss of balance (BLOB). The COM position (XCOM/BOS) and velocity (ẊCOM/BOS) relative to the BOS at compensatory step touchdown, the compensatory step length, and the trunk angle at touchdown were also recorded. At liftoff, stability decreased with increasing perturbation intensity across all groups (main effect of intensity, p<0.05). At touchdown, while the young group showed a linear improvement in stability with increasing perturbation intensity, such a trend was absent in the other groups (intensity×group interaction, p<0.05). Between-group differences in stability at touchdown were thus observed at levels II and III. Further, greater stability at touchdown correlated positively with anterior XCOM/BOS but not with ẊCOM/BOS. Young adults maintained an anterior XCOM/BOS by increasing compensatory step length and preventing greater trunk extension at higher perturbation intensities. The age-matched group attempted to increase step length from level I to II to maintain stability but could not increase step length further at level III, resulting in lower stability at this level compared with the young group. The stroke group, on the other hand, was unable to modulate compensatory step length or control trunk extension at higher perturbation intensities, resulting in reduced stability at levels II and III compared with the other groups.
The findings reflect impaired modulation of the recovery response with increasing intensity of sudden perturbations among stroke survivors compared with their healthy counterparts. Thus, aging superimposed with a cortical lesion could further impair reactive balance control, potentially contributing toward a higher fall risk in older stroke survivors. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
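
    A back-of-envelope version of such a computation can be sketched by assuming simple binomial sampling variance for a single survival-rate estimate (an assumption for illustration only; the USGS program accounts for band-recovery model structure, multiple years, and age classes, which this sketch ignores):

```python
import math

def bands_needed(survival, target_cv):
    # For a binomial estimate s_hat of survival s from n banded birds,
    # CV(s_hat) = sqrt(s*(1-s)/n) / s.  Solving for n gives
    # n = (1 - s) / (s * CV**2).
    n = (1.0 - survival) / (survival * target_cv ** 2)
    return math.ceil(n)

# e.g. 40% survival estimated to a 20% CV
n = bands_needed(0.4, 0.20)
```

    The real computation trades off annual CV, CV of the mean, and study length jointly, but the same inverse-square dependence on the target CV drives the sample sizes.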

  5. A vessel length-based method to compute coronary fractional flow reserve from optical coherence tomography images.

    PubMed

    Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo

    2017-06-26

    Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of coronary arteries reconstructed from images from various imaging modalities. Because optical coherence tomography (OCT) images provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. The aim of this study is to perform OCT-FFR simulations by coupling a 3D CFD model built from geometrically correct OCT images with an LPM based on vessel lengths extracted from CAG data, with clinical validation of the present method. To simulate coronary hemodynamics, we developed a fast and accurate method that combined a computational fluid dynamics (CFD) model of an OCT-based region of interest (ROI) with a lumped parameter model (LPM) of the coronary microvasculature and veins. Here, the LPM was based on vessel lengths extracted from coronary X-ray angiography (CAG) images. Based on this vessel length-based approach, we describe a theoretical formulation for the total resistance of the LPM from a three-dimensional (3D) CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-based FFR calculation (OCT-FFR) clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of the LPM is introduced to accurately simulate a 3D CFD model of the ROI. The simulated FFR values compared well with clinically measured ones, showing the accuracy of the method. Moreover, the present method is computationally fast, enabling clinicians to obtain solutions within the hospital workflow.
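
    As a toy illustration of how a lumped parameter model closes the problem, a single stenosis resistance in series with a microvascular resistance already yields an FFR value (the `ffr_series` helper and the resistance values are hypothetical; the paper couples a full 3D CFD model of the lumen to its LPM rather than using one lumped stenosis resistance):

```python
def ffr_series(r_stenosis, r_micro, p_aortic=100.0, p_venous=0.0):
    # Two resistances in series: epicardial stenosis, then microvasculature.
    # FFR is distal pressure over aortic pressure at hyperemic flow.
    q = (p_aortic - p_venous) / (r_stenosis + r_micro)  # flow through branch
    p_distal = p_aortic - q * r_stenosis                # pressure past stenosis
    return p_distal / p_aortic

# stenosis taking 20% of the total resistance -> FFR = 0.80
ffr = ffr_series(r_stenosis=20.0, r_micro=80.0)
```

    In the circuit analogy, FFR reduces to R_micro / (R_stenosis + R_micro), which is why an accurate total microvascular resistance, here derived from vessel lengths, matters as much as the lumen geometry.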

  6. Geometry-dependent atomic multipole models for the water molecule.

    PubMed

    Loboda, O; Millot, C

    2017-10-28

    Models of atomic electric multipoles for the water molecule have been optimized in order to reproduce the electric potential around the molecule computed by ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set. Different models of increasing complexity, from atomic charges up to models containing atomic charges, dipoles, and quadrupoles, have been obtained. The geometry dependence of these atomic multipole models has been investigated by changing bond lengths and HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For several models, the atomic multipole components have been fitted as a function of the geometry by a Taylor series of fourth order in monomer coordinate displacements.
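
    A one-dimensional reduction of such a fit is easy to sketch: a multipole component expanded as a fourth-order Taylor series in a single displacement coordinate (the coefficients below are made up for illustration; the paper fits multivariate fourth-order series in all monomer coordinate displacements):

```python
def multipole_taylor(coeffs, d):
    # Evaluate q(d) = sum_k c_k * d**k, a 4th-order Taylor expansion of
    # one atomic multipole component in one displacement coordinate d.
    return sum(c * d ** k for k, c in enumerate(coeffs))

# hypothetical coefficients (c0..c4) and a 0.1-unit bond-length displacement
q = multipole_taylor([-0.66, 0.10, -0.02, 0.0, 0.001], 0.1)
```

    The fitted coefficients play the role of derivatives of the multipole with respect to geometry, so the model interpolates smoothly between the 75 symmetry-unique reference structures.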

  7. Analytical Work in Support of the Design and Operation of Two Dimensional Self Streamlining Test Sections

    NASA Technical Reports Server (NTRS)

    Judd, M.; Wolf, S. W. D.; Goodyer, M. J.

    1976-01-01

    A method has been developed for accurately computing the imaginary flow fields outside a flexible walled test section, applicable to lifting and non-lifting models. The tolerances in the setting of the flexible walls introduce only small levels of aerodynamic interference at the model. While it is not possible to apply corrections for the interference effects, they may be reduced by improving the setting accuracy of the portions of wall immediately above and below the model. Interference effects of the truncation of the length of the streamlined portion of a test section are brought to an acceptably small level by the use of a suitably long test section with the model placed centrally.

  8. Geometry-dependent atomic multipole models for the water molecule

    NASA Astrophysics Data System (ADS)

    Loboda, O.; Millot, C.

    2017-10-01

    Models of atomic electric multipoles for the water molecule have been optimized in order to reproduce the electric potential around the molecule computed by ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set. Different models of increasing complexity, from atomic charges up to models containing atomic charges, dipoles, and quadrupoles, have been obtained. The geometry dependence of these atomic multipole models has been investigated by changing bond lengths and HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For several models, the atomic multipole components have been fitted as a function of the geometry by a Taylor series of fourth order in monomer coordinate displacements.

  9. Examining the Efficacy of a Computer Facilitated HIV Prevention Tool in Drug Court

    PubMed Central

    Festinger, David S.; Dugosh, Karen L.; Kurth, Ann E.; Metzger, David S.

    2017-01-01

    Background Although they have demonstrated efficacy in reducing substance use and criminal recidivism, competing priorities and limited resources may preclude drug court programs from formally addressing HIV risk. This study examined the efficacy of a brief, three-session, computer-facilitated HIV prevention intervention in reducing HIV risk among adult felony drug court participants. Methods Two hundred participants were randomly assigned to an HIV intervention (n = 101) or attention control (n = 99) group. All clients attended judicial status hearings approximately every six weeks. At the first three status hearings following study entry, clients in the intervention group completed the computerized, interactive HIV risk reduction sessions while those in the control group viewed a series of educational life-skill videos of matched length. Outcomes included the rate of independently obtained HIV testing, engagement in high-risk HIV-related behaviors, and the rate of condom procurement from the research site at each session. Results Results indicated that participants who received the HIV intervention were significantly more likely to report having obtained HIV testing at some point during the study period than those in the control condition, although the effect was marginally significant when examined in a longitudinal model. In addition, they had higher rates of condom procurement. No group differences were found in rates of high-risk sexual behavior, and the low rate of injection drug use reported precluded examination of high-risk drug-related behavior. Conclusions The study provides support for the feasibility and utility of delivering HIV prevention services to drug court clients using an efficient computer-facilitated program. PMID:26971228

  10. Carbon fiber counting. [aircraft structures

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A method was developed for characterizing the number and lengths of carbon fibers accidentally released by the burning of composite portions of civil aircraft structure in a jet fuel fire after an accident. Representative samplings of carbon fibers collected on transparent sticky film were counted from photographic enlargements with a computer aided technique which also provided fiber lengths.

  11. The Design of a Templated C++ Small Vector Class for Numerical Computing

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.

    2000-01-01

    We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not currently supported by all compilers. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.
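
    The size property called out above can be illustrated outside C++ as well; this sketch packs three doubles into a contiguous buffer and checks that it occupies exactly N * sizeof(double) bytes (an analogy for the memory-layout guarantee only, not the templated class itself):

```python
import struct

# A fixed-length vector of N components of type T should occupy exactly
# N * sizeof(T) bytes, with no per-vector overhead.
N = 3
buf = struct.pack("3d", 1.0, 2.0, 3.0)       # three IEEE-754 doubles
size_ok = len(buf) == N * struct.calcsize("d")  # 24 == 3 * 8
```

    It is this absence of per-object overhead that makes large arrays of small vectors affordable; a Python list of floats, by contrast, pays pointer and object overhead per element.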

  12. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint-length convolutional codes are considered in conjunction with binary phase-shift-keyed modulation and Viterbi maximum-likelihood decoding, and for longer constraint-length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint-length codes, the bit error probability performance was investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint-length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combating the memory of the channel are explored, using both an analytic approach and digital computer simulation.
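
    A minimal rate-1/2 encoder with the standard constraint-length-3 (7,5) octal generators illustrates the class of short-constraint-length codes studied (the helper below is illustrative; the paper's specific codes, modulation, and channel model are more elaborate):

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    # Rate-1/2 convolutional encoder: the shift register holds the last
    # K input bits; each generator taps a subset of them and emits the
    # parity (XOR) of the tapped bits.
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift in new bit
        for g in gens:
            out.append(bin(state & g).count("1") & 1)  # parity of taps
    return out

# input 1011 -> coded bit pairs 11 10 00 01
coded = conv_encode([1, 0, 1, 1])
```

    Constraint length K sets the memory of the code: Viterbi decoding cost grows as 2^(K-1) states, which is why long-constraint-length codes fall back on sequential decoding such as the Fano and ZJ algorithms.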

  13. Increased Titin Compliance Reduced Length-Dependent Contraction and Slowed Cross-Bridge Kinetics in Skinned Myocardial Strips from Rbm20ΔRRM Mice.

    PubMed

    Pulcastro, Hannah C; Awinda, Peter O; Methawasin, Mei; Granzier, Henk; Dong, Wenji; Tanner, Bertrand C W

    2016-01-01

    Titin is a giant protein spanning from the Z-disk to the M-band of the cardiac sarcomere. In the I-band titin acts as a molecular spring, contributing to passive mechanical characteristics of the myocardium throughout a heartbeat. RNA Binding Motif Protein 20 (RBM20) is required for normal titin splicing, and its absence or altered function leads to greater expression of a very large, more compliant N2BA titin isoform in Rbm20 homozygous mice (Rbm20ΔRRM) compared to wild-type mice (WT) that almost exclusively express the stiffer N2B titin isoform. Prior studies using Rbm20ΔRRM animals have shown that increased titin compliance compromises muscle ultrastructure and attenuates the Frank-Starling relationship. Although previous computational simulations of muscle contraction suggested that increasing compliance of the sarcomere slows the rate of tension development and prolongs cross-bridge attachment, none of the reported effects of Rbm20ΔRRM on myocardial function have been attributed to changes in cross-bridge cycling kinetics. To test the relationship between increased sarcomere compliance and cross-bridge kinetics, we used stochastic length-perturbation analysis in Ca²⁺-activated, skinned papillary muscle strips from Rbm20ΔRRM and WT mice. We found that increasing titin compliance depressed maximal tension, decreased Ca²⁺ sensitivity of the tension-pCa relationship, and slowed the myosin detachment rate in myocardium from Rbm20ΔRRM vs. WT mice. As sarcomere length increased from 1.9 to 2.2 μm, length-dependent activation of contraction was eliminated in the Rbm20ΔRRM myocardium, even though the myosin MgADP release rate decreased ~20% to prolong strong cross-bridge binding at longer sarcomere length. These data suggest that increasing N2BA expression may alter cardiac performance in a length-dependent manner, showing greater deficits in tension production and slower cross-bridge kinetics at longer sarcomere length.
This study also supports the idea that passive mechanical characteristics of the myocardium influence ensemble cross-bridge behavior and maintenance of tension generation throughout the sarcomere.

  14. 5 CFR 831.703 - Computation of annuities for part-time service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... during those periods of creditable service. Pre-April 7, 1986, average pay means the largest annual rate..., 1986, service is computed in accordance with 5 U.S.C. 8339 using the pre-April 7, 1986, average pay and... computed in accordance with 5 U.S.C. 8339 using the post-April 6, 1986, average pay and length of service...

  15. Estimation of the vortex length scale and intensity from two-dimensional samples

    NASA Technical Reports Server (NTRS)

    Reuss, D. L.; Cheng, W. P.

    1992-01-01

    A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
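
    The spatial-filtering step can be sketched as a simple box (moving-average) filter over a 2-D field (an illustrative stand-in; the paper's filter and its spatial-frequency cutoff are more specific than a uniform box kernel):

```python
def box_filter(field, r=1):
    # Moving-average filter: each output point is the mean of the
    # (2r+1) x (2r+1) window around it, clipped at the field edges.
    ny, nx = len(field), len(field[0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            vals = [field[jj][ii]
                    for jj in range(max(0, j - r), min(ny, j + r + 1))
                    for ii in range(max(0, i - r), min(nx, i + r + 1))]
            out[j][i] = sum(vals) / len(vals)
    return out

# a single spike spreads over the filter footprint
smoothed = box_filter([[0.0, 0.0, 0.0],
                       [0.0, 9.0, 0.0],
                       [0.0, 0.0, 0.0]])
```

    Shrinking the window (raising the spatial-frequency cutoff) preserves smaller structures, which mirrors how the filter cutoff controls which vortical structures survive in the filtered velocity field.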

  16. Laser velocimeter and total pressure measurements in circular-to-rectangular transition ducts

    NASA Technical Reports Server (NTRS)

    Patrick, William P.; Mccormick, Duane C.

    1988-01-01

    A comprehensive set of total pressure and three-component laser velocimetry (LV) data was obtained within two circular-to-rectangular transition ducts at low subsonic speeds. This set of reference data was acquired for use in identifying secondary flow mechanisms and for assessing the accuracy of computational procedures for calculating such flows. Data were obtained at the inlet and exit planes of an aspect ratio three duct having a length-to-diameter ratio of one (AR310) and an aspect ratio six duct having a length-to-diameter ratio of three (AR630). Each duct was unseparated throughout its transition section. It is concluded that secondary flows can play an important part in the fluid dynamics of transition ducts and need to be addressed in computational analyses. The strength of the secondary flows depends on both the aspect ratio and the relative axial duct length.

  17. Combining four Monte Carlo estimators for radiation momentum deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbatsch, Todd J; Hykes, Joshua M

    2010-11-18

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the gain in FOM is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10-20% greater than any of the solo estimators' FOMs. In addition, the numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions.
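
    One standard way to merge unbiased estimators is inverse-variance weighting, sketched below. Note the hedge: this assumes the estimators are independent, whereas the work above derives weights that account for the correlations among its four estimators, so the sketch is a simplified stand-in:

```python
def combine_estimators(means, variances):
    # Inverse-variance weighted mean of several unbiased estimators.
    # For independent estimators this minimizes the combined variance,
    # which becomes 1 / sum(1/var_i).
    ws = [1.0 / v for v in variances]
    wsum = sum(ws)
    mean = sum(w * m for w, m in zip(ws, means)) / wsum
    var = 1.0 / wsum
    return mean, var

# two equal-variance estimators: combined variance is halved
m, v = combine_estimators([1.02, 0.98], [0.04, 0.04])
```

    The FOM of a Monte Carlo estimator scales as 1/(variance x time), so any variance reduction from combining, at negligible extra cost, translates directly into the FOM gains reported.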

  18. Study of mandible reconstruction using a fibula flap with application of additive manufacturing technology.

    PubMed

    Tsai, Ming-June; Wu, Ching-Tsai

    2014-05-06

    This study aimed to establish surgical guiding techniques for completing mandible lesion resection and reconstruction of the mandible defect area with fibula sections in one surgery by applying additive manufacturing technology, which can reduce the surgical duration and enhance the surgical accuracy and success rate. A computer assisted mandible reconstruction planning (CAMRP) program was used to calculate the optimal cutting length and number of fibula pieces and design the fixtures for mandible cutting, registration, and arrangement of the fibula segments. The mandible cutting and registering fixtures were then generated using an additive manufacturing system. The CAMRP calculated the optimal fibula cutting length and number of segments based on the location and length of the defective portion of the mandible. The mandible cutting jig was generated according to the boundary surface of the lesion resection on the mandible STL model. The fibular cutting fixture was based on the length of each segment, and the registered fixture was used to quickly arrange the fibula pieces into the shape of the defect area. In this study, the mandibular lesion was reconstructed using registered fibular sections in one step, and the method is very easy to perform. The application of additive manufacturing technology provided customized models and the cutting fixtures and registered fixtures, which can improve the efficiency of clinical application. This study showed that the cutting fixture helped to rapidly complete lesion resection and fibula cutting, and the registered fixture enabled arrangement of the fibula pieces and allowed completion of the mandible reconstruction in a timely manner. Our method can overcome the disadvantages of traditional surgery, which requires a long and different course of treatment and is liable to cause error. 
With the help of optimal cutting planning by the CAMRP and the 3D printed mandible resection jig and fibula cutting fixture, this all-in-one process of mandible reconstruction furnishes many benefits in this field by enhancing the accuracy of surgery, shortening the operation duration, reducing the surgical risk, and resulting in a better mandible appearance of the patients after surgery.

  19. Study of mandible reconstruction using a fibula flap with application of additive manufacturing technology

    PubMed Central

    2014-01-01

    Background This study aimed to establish surgical guiding techniques for completing mandible lesion resection and reconstruction of the mandible defect area with fibula sections in one surgery by applying additive manufacturing technology, which can reduce the surgical duration and enhance the surgical accuracy and success rate. Methods A computer assisted mandible reconstruction planning (CAMRP) program was used to calculate the optimal cutting length and number of fibula pieces and design the fixtures for mandible cutting, registration, and arrangement of the fibula segments. The mandible cutting and registering fixtures were then generated using an additive manufacturing system. The CAMRP calculated the optimal fibula cutting length and number of segments based on the location and length of the defective portion of the mandible. The mandible cutting jig was generated according to the boundary surface of the lesion resection on the mandible STL model. The fibular cutting fixture was based on the length of each segment, and the registered fixture was used to quickly arrange the fibula pieces into the shape of the defect area. In this study, the mandibular lesion was reconstructed using registered fibular sections in one step, and the method is very easy to perform. Results and conclusion The application of additive manufacturing technology provided customized models and the cutting fixtures and registered fixtures, which can improve the efficiency of clinical application. This study showed that the cutting fixture helped to rapidly complete lesion resection and fibula cutting, and the registered fixture enabled arrangement of the fibula pieces and allowed completion of the mandible reconstruction in a timely manner. Our method can overcome the disadvantages of traditional surgery, which requires a long and different course of treatment and is liable to cause error. 
With the help of optimal cutting planning by the CAMRP and the 3D printed mandible resection jig and fibula cutting fixture, this all-in-one process of mandible reconstruction furnishes many benefits in this field by enhancing the accuracy of surgery, shortening the operation duration, reducing the surgical risk, and resulting in a better mandible appearance of the patients after surgery. PMID:24885749

  20. Solar potential scaling and the urban road network topology

    NASA Astrophysics Data System (ADS)

    Najem, Sara

    2017-01-01

    We explore the scaling of cities' solar potentials with their number of buildings and reveal a latent dependence between the solar potential and the length of the corresponding city's road network. This scaling is shown to be valid at the grid and block levels and is attributed to a common street length distribution. Additionally, we compute the buildings' solar potential correlation function and length in order to determine the set of critical exponents typifying the urban solar potential universality class.

  1. A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

    PubMed Central

    Steinhauser, Martin O.; Hiermaier, Stefan

    2009-01-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiple scales is discussed. Then, computational methods used on different scales are briefly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues, including several MD optimization techniques. Thereafter, computational examples of the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock-wave simulations of a solid which are based on two different modeling approaches, and we discuss their respective assets and drawbacks with a view to their application across scales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star polymers, biomacromolecules such as polyelectrolytes, and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering, where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467

  2. Relations between Some Characteristic Lengths in a Triangle

    ERIC Educational Resources Information Center

    Koepf, Wolfram; Brede, Markus

    2005-01-01

    The paper's aim is to note a remarkable (and apparently unknown) relation for right triangles, its generalisation to arbitrary triangles and the possibility to derive these and some related relations by elimination using Groebner basis computations with a modern computer algebra system. (Contains 9 figures.)

  3. Computational Investigations on the Effects of Gurney Flap on Airfoil Aerodynamics.

    PubMed

    Jain, Shubham; Sitaram, Nekkanti; Krishnaswamy, Sriram

    2015-01-01

    The present study comprises steady-state, two-dimensional computational investigations performed on a NACA 0012 airfoil to analyze the effect of a Gurney flap (GF) on airfoil aerodynamics using the k-ε RNG turbulence model of FLUENT. The airfoil with GF is analyzed for six different heights from 0.5% to 4% of the chord length, seven positions from 0% to 20% of the chord length from the trailing edge, and seven mounting angles from 30° to 120° with the chord. Computed values of lift and drag coefficients with angle of attack are compared with experimental values, and good agreement is found at low angles of attack. In addition, static pressure distributions on the airfoil surface, and pathlines and turbulence intensities near the trailing edge, are presented. From the computational investigation, it is recommended that Gurney flaps with a height of 1.5% chord be installed perpendicular to the chord and as close to the trailing edge as possible to obtain maximum lift enhancement with minimum drag penalty.

  4. Computational Investigations on the Effects of Gurney Flap on Airfoil Aerodynamics

    PubMed Central

    Jain, Shubham; Sitaram, Nekkanti; Krishnaswamy, Sriram

    2015-01-01

    The present study comprises steady-state, two-dimensional computational investigations performed on a NACA 0012 airfoil to analyze the effect of a Gurney flap (GF) on airfoil aerodynamics using the k-ε RNG turbulence model of FLUENT. The airfoil with GF is analyzed for six different heights from 0.5% to 4% of the chord length, seven positions from 0% to 20% of the chord length from the trailing edge, and seven mounting angles from 30° to 120° with the chord. Computed values of lift and drag coefficients with angle of attack are compared with experimental values, and good agreement is found at low angles of attack. In addition, static pressure distributions on the airfoil surface, and pathlines and turbulence intensities near the trailing edge, are presented. From the computational investigation, it is recommended that Gurney flaps with a height of 1.5% chord be installed perpendicular to the chord and as close to the trailing edge as possible to obtain maximum lift enhancement with minimum drag penalty. PMID:27347517

  5. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² with particle count N. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
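
    The Gram-Schmidt reorthonormalization at the heart of such calculations fits in a few lines for a 2-D linear map (a Benettin-style sketch; the implementations above do the same with ScaLAPACK and MAGMA QR factorizations on far larger tangent-space matrices):

```python
import math

def lyapunov_exponents_2d(matvec, steps=100):
    # Evolve two tangent vectors under the map, reorthonormalize them by
    # Gram-Schmidt each step, and average the logs of the growth factors.
    v1, v2 = (1.0, 0.0), (0.0, 1.0)
    s1 = s2 = 0.0
    for _ in range(steps):
        w1, w2 = matvec(v1), matvec(v2)
        n1 = math.hypot(*w1)                      # growth along v1
        e1 = (w1[0] / n1, w1[1] / n1)
        dot = w2[0] * e1[0] + w2[1] * e1[1]
        u2 = (w2[0] - dot * e1[0], w2[1] - dot * e1[1])  # remove e1 part
        n2 = math.hypot(*u2)                      # growth orthogonal to v1
        v1, v2 = e1, (u2[0] / n2, u2[1] / n2)
        s1 += math.log(n1)
        s2 += math.log(n2)
    return s1 / steps, s2 / steps

# shear map with eigenvalues 2 and 0.5 -> exponents ln 2 and -ln 2
lam = lyapunov_exponents_2d(lambda v: (2 * v[0] + v[1], 0.5 * v[1]))
```

    The repeated orthonormalization is exactly the step that becomes a large QR decomposition when the tangent space has thousands of dimensions, which is what motivates the distributed and GPU implementations.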

  6. A shock wave capability for the improved Two-Dimensional Kinetics (TDK) computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dang, L. D.

    1984-01-01

    The Two Dimensional Kinetics (TDK) computer program is a primary tool in applying the JANNAF liquid rocket engine performance prediction procedures. The purpose of this contract has been to improve the TDK computer program so that it can be applied to advanced rocket engine designs. In particular, future orbit transfer vehicles (OTV) will require rocket engines that operate at high expansion ratios, i.e., in excess of 200:1. Because only a limited length is available in the space shuttle bay, it is possible that OTV nozzles will be designed with both relatively short length and high expansion ratio. In this case, a shock wave may be present in the flow. The TDK computer program was modified to include the simulation of shock waves in the supersonic nozzle flow field. The shocks induced by the wall contour can produce strong perturbations of the flow, affecting downstream conditions that need to be considered in thrust chamber performance calculations.

  7. Dynamic Multiple Work Stealing Strategy for Flexible Load Balancing

    NASA Astrophysics Data System (ADS)

    Adnan; Sato, Mitsuhisa

    Lazy-task creation is an efficient method of overcoming the overhead of the grain-size problem in parallel computing. Work stealing is an effective load balancing strategy for parallel computing. In this paper, we present dynamic work stealing strategies in a lazy-task creation technique for efficient fine-grain task scheduling. The basic idea is to control load balancing granularity depending on the number of task parents in a stack. The dynamic-length strategy of work stealing uses run-time information, which is information on the load of the victim, to determine the number of tasks that a thief is allowed to steal. We compare it with the bottommost first work stealing strategy used in StackThread/MP, and the fixed-length strategy of work stealing, where a thief requests to steal a fixed number of tasks, as well as other multithreaded frameworks such as Cilk and OpenMP task implementations. The experiments show that the dynamic-length strategy of work stealing performs well in irregular workloads such as in UTS benchmarks, as well as in regular workloads such as Fibonacci, Strassen's matrix multiplication, FFT, and Sparse-LU factorization. The dynamic-length strategy works better than the fixed-length strategy because it is more flexible than the latter; this strategy can avoid load imbalance due to overstealing.
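The dynamic-length idea, letting the thief's steal count depend on the victim's run-time load rather than on a fixed constant, can be illustrated with a toy deque-based sketch. This is a simplified illustration, not the StackThreads-style runtime benchmarked above; the steal-half policy stands in for "use information on the load of the victim":

```python
from collections import deque

def steal_fixed(victim, thief, k=1):
    """Fixed-length strategy: the thief always requests k tasks."""
    n = min(k, len(victim))
    for _ in range(n):
        thief.append(victim.popleft())  # oldest tasks sit at the bottom of the stack
    return n

def steal_half(victim, thief):
    """Dynamic-length strategy (sketch): use run-time information -- here
    simply the victim's current load -- to decide how many tasks to take,
    avoiding both understealing and overstealing."""
    n = len(victim) // 2
    for _ in range(n):
        thief.append(victim.popleft())
    return n

victim, thief = deque(range(8)), deque()
taken = steal_half(victim, thief)
```

With a heavily loaded victim the dynamic policy moves many tasks in one steal; with a nearly empty victim it takes almost nothing, which is the flexibility the abstract credits for avoiding load imbalance.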

  8. Turbulent flows over superhydrophobic surfaces with shear-dependent slip length

    NASA Astrophysics Data System (ADS)

    Khosh Aghdam, Sohrab; Seddighi, Mehdi; Ricco, Pierre

    2015-11-01

    Motivated by recent experimental evidence, shear-dependent slip length superhydrophobic surfaces are studied. Lyapunov stability analysis is applied in a 3D turbulent channel flow and extended to the shear-dependent slip-length case. The feedback law extracted is recognized for the first time to coincide with the constant-slip-length model widely used in simulations of hydrophobic surfaces. The condition for the slip parameters is found to be consistent with the experimental data and with values from DNS. The theoretical approach by Fukagata (PoF 18.5: 051703) is employed to model the drag-reduction effect engendered by the shear-dependent slip-length surfaces. The estimated drag-reduction values are in very good agreement with our DNS data. For slip parameters and flow conditions which are potentially realizable in the lab, the maximum computed drag reduction reaches 50%. The power spent by the turbulent flow on the walls is computed, thereby recognizing the hydrophobic surfaces as a passive-absorbing drag-reduction method, as opposed to geometrically-modifying techniques that do not consume energy, e.g. riblets, hence named passive-neutral. The flow is investigated by visualizations, statistical analysis of vorticity and strain rates, and quadrants of the Reynolds stresses. Part of this work was funded by Airbus Group. Simulations were performed on the ARCHER Supercomputer (UKTC Grant).

  9. Microwave transmission efficiency and simulations of electron plasma in ELTRAP device

    NASA Astrophysics Data System (ADS)

    Ikram, M.; Mushtaq, A.; Ali, S.

    2017-11-01

    A Thomson backscattering experiment has been performed in a Penning-Malmberg device ELTRAP. To estimate the minimum sensitivity of diagnostics, we have computed the signal-to-noise ratio and found that the present bunch has a number density of 4.3 × 10⁸ cm⁻³, which is three orders of magnitude less than the desired density of 10¹¹ cm⁻³. To increase the signal level from the RF studies to the GHz range, the transmission efficiency from the rectangular waveguide orthogonally coupled to a prototype circular waveguide was experimentally analyzed on a test-bench. It is observed that the lengths of waveguides play an important role in the transmission efficiency and return loss. When the length of the optimum rectangular waveguide (>2λg = 31 cm) is reduced to 7 cm due to geometrical constraints of the ELTRAP device, the transmission efficiency is also reduced and shifts away from the maximum 3 GHz operating frequency. The useful frequency band is then reduced with the increasing length of the prototype circular waveguide (102 cm). Using electromagnetic Particle-In-Cell simulations involving electron cyclotron resonance heating (ECRH), we have utilized a magnetic field of 0.1 T resonating with a 2.8 GHz RF drive during each time step (1 ps) having the power level of 0.04 V applied to the middle and to the end of the trap. A more efficient increase in the radial and azimuthal temperature profiles is observed as compared to the axial temperature profile. The reason is the use of ECRH to heat electrons in the cyclotron motion, which is completely kinetic, and the magnetron motion, which is almost entirely potential based. The axial motion interchanges between the kinetic and potential with a slight enhancement in axial motion to keep the total canonical angular momentum conserved. The temperature profile of the confined electron plasma increases with the variation of densities from 5 × 10⁷ m⁻³ to 10¹² m⁻³. 
The major heating effect occurs when the RF power is injected from the position close to one end with respect to the middle position of the trap.

  10. Vortex Generators in a Two-Dimensional, External-Compression Supersonic Inlet

    NASA Technical Reports Server (NTRS)

    Baydar, Ezgihan; Lu, Frank K.; Slater, John W.

    2016-01-01

    Vortex generators within a two-dimensional, external-compression supersonic inlet for Mach 1.6 were investigated to determine their ability to increase total pressure recovery, reduce total pressure distortion, and improve the boundary layer. The vortex generators studied included vanes and ramps. The geometric factors of the vortex generators studied included height, length, spacing, and positions upstream and downstream of the inlet terminal shock. The flow through the inlet was simulated through the computational solution of the steady-state Reynolds-averaged Navier-Stokes equations on multi-block, structured grids. The vortex generators were simulated by either gridding the geometry of the vortex generators or modeling the vortices generated by the vortex generators. The inlet performance was characterized by the inlet total pressure recovery, total pressure distortion, and incompressible shape factor of the boundary-layer at the engine face. The results suggested that downstream vanes reduced the distortion and improved the boundary layer. The height of the vortex generators had the greatest effect of the geometric factors.

  11. Highly multireferenced arynes studied with large active spaces using two-electron reduced density matrices.

    PubMed

    Greenman, Loren; Mazziotti, David A

    2009-05-14

    Using the active-space two-electron reduced density matrix (2-RDM) method, which scales polynomially with the size of the active space [G. Gidofalvi and D. A. Mazziotti, J. Chem. Phys. 129, 134108 (2008)], we were able to use active spaces as large as 24 electrons in 24 orbitals in computing the ground-state energies and properties of highly multireferenced arynes. Because the conventional complete-active-space self-consistent-field (CASSCF) method scales exponentially with the size of the active space, its application to arynes was mainly limited to active spaces of 12 electrons in 12 orbitals. For these smaller active spaces the active-space 2-RDM method accurately reproduces the results of CASSCF. However, we show that the larger active spaces are necessary for describing changes in energies and properties with aryne chain length such as the emergence of polyradical character. Furthermore, the addition of further electron correlation by multireference perturbation theory is demonstrated to be inadequate for removing the limitations of the smaller active spaces.

  12. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. The paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. Implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, whereas with the thresholding method it requires 18 μs. The ASIC implementation reconstructs the signal in 13 μs. However, our custom many-core, operating at 1.18 GHz, takes 18.28 μs to complete. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC implementations perform 1.3× and 1.8× faster, respectively. Also, the proposed many-core implementation performs 3000× faster than the CPU and 2000× faster than the GPU.
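For reference, the greedy structure of OMP that these architectures accelerate can be written compactly in NumPy. This is a generic sketch of basic OMP at the abstract's problem size (N = 256 with 64 measurements), using sparsity 3 and an arbitrary random Gaussian sensing matrix for a quick recovery check; none of this is the paper's hardware implementation or thresholding method:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit sketch: greedily pick the column most
    correlated with the residual, then re-solve a least-squares problem
    over the selected support and update the residual."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))   # best matching atom
        support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef            # project y off the support
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns
x_true = np.zeros(256)
x_true[[5, 80, 200]] = [1.0, -2.0, 1.5]   # a 3-sparse test signal
y = Phi @ x_true
x_hat = omp(Phi, y, 3)
```

The per-iteration hot spots are the correlation `Phi.T @ residual` and the least-squares solve, which is where the paper's kernel partitioning and thresholding target their savings.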

  13. Does improved access to diagnostic imaging results reduce hospital length of stay? A retrospective study

    PubMed Central

    2010-01-01

    Background One year after the introduction of Information and Communication Technology (ICT) to support diagnostic imaging at our hospital, clinicians had faster and better access to radiology reports and images; direct access to Computed Tomography (CT) reports in the Electronic Medical Record (EMR) was particularly popular. The objective of this study was to determine whether improvements in radiology reporting and clinical access to diagnostic imaging information one year after the ICT introduction were associated with a reduction in the length of patients' hospital stays (LOS). Methods Data describing hospital stays and diagnostic imaging were collected retrospectively from the EMR during periods of equal duration before and one year after the introduction of ICT. The post-ICT period was chosen because of the documented improvement in clinical access to radiology results during that period. The data set was randomly split into an exploratory part used to establish the hypotheses, and a confirmatory part. The data was used to compare the pre-ICT and post-ICT status, but also to compare differences between groups. Results There was no general reduction in LOS one year after ICT introduction. However, there was a 25% reduction for one group - patients with CT scans. This group was heterogeneous, covering 445 different primary discharge diagnoses. Analyses of subgroups were performed to reduce the impact of this divergence. Conclusion Our results did not indicate that improved access to radiology results reduced the patients' LOS. There was, however, a significant reduction in LOS for patients undergoing CT scans. Given the clinicians' interest in CT reports and the results of the subgroup analyses, it is likely that improved access to CT reports contributed to this reduction. PMID:20819224

  14. Junior doctor strike model of care: Reduced access block and predominant Fellow of the Australasian College for Emergency Medicine staffing improve emergency department performance.

    PubMed

    Thornton, Vanessa; Hazell, Wayne

    2008-10-01

    To describe the response and analyse ED performance during a 5-day junior doctor strike. Data were collected via the patient information management computer system. Key performance indicators included percentage seen within maximum waiting times per triage category (TC), ED length of stay, emergency medicine patients who did not wait to be seen, hospital bed occupancy and access block percentage. Comparisons were made for the same 5 days before the strike (BS), during the strike (S) and after the strike. Total doctor's shifts BS were 78.66 with 25% of these shifts being Fellow of the Australasian College for Emergency Medicine (FACEM) shifts. FACEM shifts were more common during the S period at 75% (P < 0.001). Total attendances (BS 631 vs S 596, P = 0.22) and TC percentages (P-values for TC 1, 2, 3, 4, 5, respectively, 1.0, 0.55, 0.88, 0.97, 0.46) in the BS, S and after-the-strike periods were not significantly different. Despite fewer total doctor shifts, the FACEM predominant model of care during the strike resulted in better percentages seen within the maximum waiting times for TC3 (66%), TC4 (78%) and TC5 (86%) (all P < 0.001). There was a reduction in patients who did not wait to be seen (28 BS vs 5 S, P < 0.001), ED length of stay (admissions: BS 451 min vs S 258 min, P < 0.001; discharges: BS 233 min vs S 144 min, P < 0.02) and referrals to inpatient services (P = 0.02). This occurred with reduced bed point occupancy of 66% and a consequent reduction in access block. FACEM staffing and reduced access block were significant factors in improved ED performance.

  15. Algolcam: Low Cost Sky Scanning with Modern Technology

    NASA Astrophysics Data System (ADS)

    Connors, Martin; Bolton, Dempsey; Doktor, Ian

    2016-01-01

    Low cost DSLR cameras running under computer control offer good sensitivity, high resolution, small size, and the convenience of digital image handling. Recent developments in small single board computers have pushed the performance to cost and size ratio to unprecedented values, with the further advantage of very low power consumption. Yet a third technological development is motor control electronics which is easily integrated with the computer to make an automated mount, which in our case is custom built, but with similar mounts available commercially. Testing of such a system under a clear plastic dome at our auroral observatory was so successful that we have developed a weatherproof housing allowing use during the long, cold, and clear winter nights at northerly latitudes in Canada. The main advantage of this housing should be improved image quality as compared to operation through clear plastic. We have improved the driving software to include the ability to self-calibrate pointing through the web API of astrometry.net, and data can be reduced automatically through command line use of the Muniwin program. The mount offers slew in declination and RA, and tracking at sidereal or other rates in RA. Our previous tests with a Nikon D5100 with standard lenses in the focal length range 50-200 mm, operating at f/4 to f/5, allowed detection of 12th magnitude stars with 30 second exposures under very dark skies. At 85 mm focal length, a field of 15° by 10° is imaged with 4928 by 3264 color pixels, and we have adopted an 85 mm fixed focal length f/1.4 lens (as used by Project Panoptes), which we expect will give a limiting magnitude approaching 15. With a large field of view, deep limiting magnitude, low cost, and ease of construction and use, we feel that the Algolcam offers great possibilities in monitoring and finding changes in the sky. 
We have already applied it to variable star light curves, and with a suitable pipeline for detection of moving or varying objects, it offers great potential for analysis and discovery. The use of low cost cutting edge technology makes Algolcam particularly interesting for enhancing the advanced undergraduate learning experience in astronomy.

  16. Self-consistent clustering analysis: an efficient multiscale scheme for inelastic heterogeneous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z.; Bessa, M. A.; Liu, W.K.

    A predictive computational theory is shown for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. 
For the CFRP problem, a speed-up estimated at about 43,000 is achieved by using the SCA method, as opposed to FE², enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.

  17. A Motor-Driven Mechanism for Cell-Length Sensing

    PubMed Central

    Rishal, Ida; Kam, Naaman; Perry, Rotem Ben-Tov; Shinder, Vera; Fisher, Elizabeth M.C.; Schiavo, Giampietro; Fainzilber, Mike

    2012-01-01

    Summary Size homeostasis is fundamental in cell biology, but it is not clear how large cells such as neurons can assess their own size or length. We examined a role for molecular motors in intracellular length sensing. Computational simulations suggest that spatial information can be encoded by the frequency of an oscillating retrograde signal arising from a composite negative feedback loop between bidirectional motor-dependent signals. The model predicts that decreasing either or both anterograde or retrograde signals should increase cell length, and this prediction was confirmed upon application of siRNAs for specific kinesin and/or dynein heavy chains in adult sensory neurons. Heterozygous dynein heavy chain 1 mutant sensory neurons also exhibited increased lengths both in vitro and during embryonic development. Moreover, similar length increases were observed in mouse embryonic fibroblasts upon partial downregulation of dynein heavy chain 1. Thus, molecular motors critically influence cell-length sensing and growth control. PMID:22773964
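The abstract's central mechanism, spatial information encoded in the frequency of a retrograde signal, has a simple back-of-the-envelope form: if motor-dependent signals travel at finite speeds in each direction, the feedback loop's oscillation period scales with the round-trip transit time, so a longer cell produces a lower frequency. Below is a toy sketch of that scaling only; the speeds and the linear relationship are illustrative assumptions, not the authors' fitted model:

```python
def round_trip_period(length_um, v_anterograde=1.0, v_retrograde=1.0):
    """Period (arbitrary time units) of a feedback loop closed over the cell
    length: anterograde transit time plus retrograde transit time. The
    frequency 1/period falls as the cell grows, providing a length readout."""
    return length_um / v_anterograde + length_um / v_retrograde

short_cell = round_trip_period(50.0)    # e.g. a 50 um fibroblast-scale cell
long_cell = round_trip_period(500.0)    # a 10x longer neurite
```

This also makes the model's knockdown prediction intuitive: weakening either motor direction lowers the effective speed, mimicking a longer transit time, so the cell "reads" itself as shorter than it is and keeps growing.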

  18. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
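The precision-versus-accuracy trade described above can be emulated in software by rounding every arithmetic stage of the model to a lower-precision type. Below is a minimal sketch using the Lorenz '96 model with NumPy's float16 as a stand-in for the emulation tool mentioned in the abstract; the step count, time step, and initial condition are arbitrary illustrative choices, and no data assimilation is performed:

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt, precision=np.float64):
    """One RK4 step, rounding every stage to the chosen precision to emulate
    reduced-precision hardware."""
    r = lambda v: v.astype(precision).astype(np.float64)
    k1 = r(lorenz96(x))
    k2 = r(lorenz96(x + 0.5 * dt * k1))
    k3 = r(lorenz96(x + 0.5 * dt * k2))
    k4 = r(lorenz96(x + dt * k3))
    return r(x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6)

# Integrate the same initial state at double and emulated half precision.
x0 = 8.0 + 0.01 * np.sin(2 * np.pi * np.arange(40) / 40)
x_double = x0.copy()
x_half = x0.copy()
for _ in range(100):
    x_double = step_rk4(x_double, 0.01)
    x_half = step_rk4(x_half, 0.01, precision=np.float16)
err = np.max(np.abs(x_double - x_half))
```

The rounding error `err` is nonzero but the trajectory stays well behaved, which is the kind of evidence used to argue that the precision freed up can be spent on a larger ensemble instead.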

  19. Genetic and transcriptomic dissection of the fiber length trait using a cotton (Gossypium hirsutum L.) MAGIC population.

    USDA-ARS?s Scientific Manuscript database

    Cotton fiber length is a key determinant of fiber quality for the textile industry. Improving cotton fiber length without reducing yield is one of the major goals for cotton breeding. However, genetic improvement of cotton fiber length by breeding has been a challenge due to narrow genetic diversit...

  20. Implant preloading in extension reduces spring length change in dynamic intraligamentary stabilization: a biomechanical study on passive kinematics of the knee.

    PubMed

    Häberli, Janosch; Voumard, Benjamin; Kösters, Clemens; Delfosse, Daniel; Henle, Philipp; Eggli, Stefan; Zysset, Philippe

    2018-06-01

    Dynamic intraligamentary stabilization (DIS) is a primary repair technique for acute anterior cruciate ligament (ACL) tears. For internal bracing of the sutured ACL, a metal spring with 8 mm maximum length change is preloaded with 60-80 N and fixed to a high-strength polyethylene braid. The bulky tibial hardware results in bone loss and may cause local discomfort with the necessity of hardware removal. The technique has been previously investigated biomechanically; however, the amount of spring shortening during movement of the knee joint is unknown. Spring shortening is a crucial measure, because it defines the necessary dimensions of the spring and, therefore, the overall size of the implant. Seven Thiel-fixated human cadaveric knee joints were subjected to passive range of motion (flexion/extension, internal/external rotation in 90° flexion, and varus/valgus stress in 0° and 20° flexion) and stability tests (Lachman/KT-1000 testing in 0°, 15°, 30°, 60°, and 90° flexion) in the ACL-intact, ACL-transected, and DIS-repaired state. Kinematic data of femur, tibia, and implant spring were recorded with an optical measurement system (Optotrak) and the positions of the bone tunnels were assessed by computed tomography. Length change of bone tunnel distance as a surrogate for spring shortening was then computed from kinematic data. Tunnel positioning in a circular zone with r = 5 mm was simulated to account for surgical precision and its influence on length change was assessed. Over all range of motion and stability tests, spring shortening was highest (5.0 ± 0.2 mm) during varus stress in 0° knee flexion. During flexion/extension, spring shortening was always highest in full extension (3.8 ± 0.3 mm) for all specimens and all simulations of bone tunnels. Tunnel distance shortening was highest (0.15 mm/°) for posterior femoral and posterior tibial tunnel positioning and lowest (0.03 mm/°) for anterior femoral and anterior tibial tunnel positioning. 
During passive flexion/extension, the highest spring shortening was consistently measured in full extension with a continuous decrease towards flexion. If preloading of the spring is performed in extension, the spring can be downsized to incorporate a maximum length change of 5 mm resulting in a smaller implant with less bone sacrifice and, therefore, improved conditions in case of revision surgery.

  1. PCB153 reduces telomerase activity and telomere length in immortalized human skin keratinocytes (HaCaT) but not in human foreskin keratinocytes (NFK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senthilkumar, P.K.; Robertson, L.W.; Department of Occupational and Environmental Health, The University of Iowa, Iowa City, IA

    Polychlorinated biphenyls (PCBs), ubiquitous environmental pollutants, are characterized by long-term persistence in the environment, bioaccumulation, and biomagnification in the food chain. Exposure to PCBs may cause various diseases, affecting many cellular processes. Deregulation of the telomerase and the telomere complex leads to several biological disorders. We investigated the hypothesis that PCB153 modulates telomerase activity, telomeres and reactive oxygen species resulting in the deregulation of cell growth. Exponentially growing immortal human skin keratinocytes (HaCaT) and normal human foreskin keratinocytes (NFK) were incubated with PCB153 for 48 and 24 days, respectively, and telomerase activity, telomere length, superoxide level, cell growth, and cell cycle distribution were determined. In HaCaT cells exposure to PCB153 significantly reduced telomerase activity, telomere length, cell growth and increased intracellular superoxide levels from day 6 to day 48, suggesting that superoxide may be one of the factors regulating telomerase activity, telomere length and cell growth compared to untreated control cells. Results with NFK cells showed no shortening of telomere length but reduced cell growth and increased superoxide levels in PCB153-treated cells compared to untreated controls. As expected, basal levels of telomerase activity were almost undetectable, which made a quantitative comparison of treated and control groups impossible. The significant down regulation of telomerase activity and reduction of telomere length by PCB153 in HaCaT cells suggest that any cell type with significant telomerase activity, like stem cells, may be at risk of premature telomere shortening with potential adverse health effects for the affected organism. -- Highlights: ► Human immortal (HaCaT) and primary (NFK) keratinocytes were exposed to PCB153. ► PCB153 significantly reduced telomerase activity and telomere length in HaCaT. ► No effect on telomere length and telomerase activity was found in NFK. ► Increased intracellular superoxide levels and reduced cell growth was seen in both. ► PCB153 may damage telomerase-expressing cells like stem cells.

  2. Reduction in patient burdens with graphical computerized adaptive testing on the ADL scale: tool development and simulation

    PubMed Central

    Chien, Tsair-Wei; Wu, Hing-Man; Wang, Weng-Chung; Castillo, Roberto Vasquez; Chou, Willy

    2009-01-01

    Background The aim of this study was to verify the effectiveness and efficacy of saving time and reducing burden for patients, nurses, and even occupational therapists through computer adaptive testing (CAT). Methods Based on an item bank of the Barthel Index (BI) and the Frenchay Activities Index (FAI) for assessing comprehensive activities of daily living (ADL) function in stroke patients, we developed a visual basic application (VBA)-Excel CAT module, and (1) investigated whether the average test length via CAT is shorter than that of the traditional all-item-answered non-adaptive testing (NAT) approach through simulation, (2) illustrated the CAT multimedia on a tablet PC showing data collection and response errors of ADL clinical functional measures in stroke patients, and (3) demonstrated quality control of the endorsed scale with fit statistics to detect response errors, which are then immediately reconfirmed by technicians once the patient ends the CAT assessment. Results The results show that the number of endorsed items could be smaller on CAT (M = 13.42) than on NAT (M = 23), a 41.64% gain in test-length efficiency. However, average ability estimates reveal insignificant differences between CAT and NAT. Conclusion This study found that mobile nursing services placed at the bedsides of patients could, through the programmed VBA-Excel CAT module, reduce the burden to patients and save time, more so than traditional NAT paper-and-pencil testing appraisals. PMID:19416521
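The adaptive-selection principle behind such a module, administer next whichever unanswered item is most informative at the current ability estimate so that fewer items are needed than in all-item NAT, can be sketched with a Rasch item-response model. This is a generic CAT sketch, not the study's VBA-Excel implementation, and the item bank difficulties here are hypothetical:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b
    at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item, p(1-p); it is maximal when b is
    closest to theta -- the basis of adaptive item selection."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def next_item(theta, answered, bank):
    """Pick the most informative unanswered item for the current
    ability estimate."""
    candidates = [i for i in range(len(bank)) if i not in answered]
    return max(candidates, key=lambda i: item_information(theta, bank[i]))

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]   # hypothetical item difficulties
first = next_item(0.3, set(), bank)  # item with difficulty nearest theta = 0.3
```

Because each administered item is chosen where it is most informative, the standard error of the ability estimate shrinks faster per item than under a fixed all-item sequence, which is why CAT stops earlier at comparable precision.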

  3. A Scheme for the Evaluation of Electron Delocalization and Conjugation Efficiency in Linearly π-Conjugated Systems.

    PubMed

    Bruschi, Maurizio; Limacher, Peter A; Hutter, Jürg; Lüthi, Hans Peter

    2009-03-10

    In this study, we present a scheme for the evaluation of electron delocalization and conjugation efficiency in linearly π-conjugated systems. The scheme, based on the natural bond orbital theory, allows monitoring the evolution of electron delocalization along an extended conjugation path as well as its response to chemical modification. The scheme presented is evaluated and illustrated by means of a computational investigation of π-conjugation in all-trans polyacetylene [PA; H(-CH═CH)n-H], polydiacetylene [PDA, H(-C≡C-CH═CH)n-H], and polytriacetylene [PTA, H(-C≡C-CH═CH-C≡C)n-H] with up to 180 carbon atoms, all related by the number of ethynyl units incorporated in the chain. We are able to show that for short oligomers the incorporation of ethynyl spacers into the PA chain increases the π-delocalization energy, but, on the other hand, reduces the efficiency with which π-electron delocalization is promoted along the backbone. This explains the generally shorter effective conjugation lengths observed for the properties of the polyeneynes (PDA and PTA) relative to the polyenes (PA). It will also be shown that the reduced conjugation efficiency, within the NBO-based model presented in this work, can be related to the orbital interaction pattern along the π-conjugated chain. We will show that the orbital interaction energy pattern is characteristic for the type and the length of the backbone and may therefore serve as a descriptor for linearly π-conjugated chains.

  4. Reduced-order prediction of rogue waves in two-dimensional deep-water waves

    NASA Astrophysics Data System (ADS)

    Sapsis, Themistoklis; Farazmand, Mohammad

    2017-11-01

    We consider the problem of large wave prediction in two-dimensional water waves. Such waves form due to the synergistic effect of dispersive mixing of smaller wave groups and the action of localized nonlinear wave interactions that leads to focusing. Instead of a direct simulation approach, we rely on the decomposition of the wave field into a discrete set of localized wave groups with optimal length scales and amplitudes. Due to the short-term character of the prediction, these wave groups do not interact and therefore their dynamics can be characterized individually. Using direct numerical simulations of the governing envelope equations we precompute the expected maximum elevation for each of those wave groups. The combination of the wave field decomposition algorithm, which provides information about the statistics of the system, and the precomputed map for the expected wave group elevation, which encodes dynamical information, allows (i) for understanding of how the probability of occurrence of rogue waves changes as the spectrum parameters vary, (ii) the computation of a critical length scale characterizing wave groups with high probability of evolving to rogue waves, and (iii) the formulation of a robust and parsimonious reduced-order prediction scheme for large waves. T.S. has been supported through the ONR Grants N00014-14-1-0520 and N00014-15-1-2381 and the AFOSR Grant FA9550-16-1-0231. M.F. has been supported through the second Grant.
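The prediction scheme described above rests on two ingredients: a precomputed map from wave-group length scale to expected maximum elevation, and a decomposition of the observed field into such groups. A minimal sketch of the lookup step follows; the grid and amplification values are purely illustrative, not taken from the paper's envelope-equation simulations:

```python
# Hypothetical sketch of the precomputed-map stage of a reduced-order
# rogue-wave predictor. Offline, envelope-equation simulations would fill
# this table; here the numbers are invented for illustration only.
L_grid = [1.0, 2.0, 3.0, 4.0, 5.0]          # wave-group length scales
amplification = [1.1, 1.4, 2.3, 1.8, 1.3]    # expected max-elevation factor

def expected_amplification(L):
    """Nearest-neighbour lookup of the precomputed expected maximum
    elevation for a wave group of length scale L."""
    i = min(range(len(L_grid)), key=lambda k: abs(L_grid[k] - L))
    return amplification[i]

def critical_scales(threshold=2.0):
    """Length scales whose groups have high probability of focusing into
    rogue waves -- the 'critical length scale' idea in the abstract."""
    return [L for L, a in zip(L_grid, amplification) if a >= threshold]
```

The interesting structural point is that the map is non-monotonic in L: only an intermediate band of group length scales focuses strongly, which is why a critical length scale exists at all.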

  5. A thixotropic effect in contracting rabbit psoas muscle: prior movement reduces the initial tension response to stretch

    PubMed Central

    Campbell, Kenneth S; Moss, Richard L

    2000-01-01

    Paired ramp stretches and releases (‘triangular length changes’, typically 0.04 ± 0.09L0 s−1; mean ±s.e.m.) were imposed on permeabilised rabbit psoas fibre segments under sarcomere length control. In actively contracting fibres, the tension response to stretch was biphasic; tension rose more rapidly during the first 0.005L0 of the imposed stretch than thereafter. Tension also dropped in a biphasic manner during shortening, and at the end of the length change was reduced below the steady state. If a second triangular length change was imposed shortly after the first, tension rose less sharply during the initial phase of lengthening, i.e. the stiffness of the muscle during the initial phase of the response was reduced in the second stretch. This is a thixotropic effect. If a third triangular length change was imposed on the muscle, the response was the same as that to the second. The time required to recover the original tension response was measured by varying the interval between triangular length changes. Recovery to steady state occurred at a rate of ∼1 s−1. The stiffness of the muscle during the initial phase of the response scaled with the developed tension in pCa (=−log10[Ca2+]) solutions ranging from 6.3 (minimal activation) to 4.5 (saturating effect). The relative thixotropic reduction in stiffness measured using paired length changes was independent of the pCa of the activating solution. The thixotropic behaviour of contracting skeletal muscle can be explained by a cross-bridge model of muscle contraction in which the number of attached cross-bridges is temporarily reduced following an imposed movement. PMID:10835052
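The recovery kinetics reported above (return to the steady-state response at a rate of about 1 s⁻¹) can be written as a simple first-order relaxation. The single-exponential form below is an assumption consistent with a cross-bridge model in which detached bridges reattach at a constant rate; it is a sketch, not the authors' fitting procedure:

```python
import math

def recovered_stiffness(dt, k_steady, k_reduced, rate=1.0):
    """First-order recovery of initial-phase muscle stiffness a time dt
    after a movement: stiffness relaxes from its reduced post-movement
    value back to steady state at ~1 s^-1 (rate from the abstract; the
    exponential form is an illustrative assumption)."""
    return k_steady - (k_steady - k_reduced) * math.exp(-rate * dt)

k0 = recovered_stiffness(0.0, k_steady=1.0, k_reduced=0.6)    # just after movement
k_late = recovered_stiffness(10.0, k_steady=1.0, k_reduced=0.6)  # fully recovered
```

With this form, varying the interval between paired triangular length changes traces out the recovery curve, which is exactly how the recovery rate was measured in the study.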

  6. Top down and bottom up engineering of bone.

    PubMed

    Knothe Tate, Melissa L

    2011-01-11

    The goal of this retrospective article is to place the body of my lab's multiscale mechanobiology work in context of top-down and bottom-up engineering of bone. We have used biosystems engineering, computational modeling and novel experimental approaches to understand bone physiology, in health and disease, and across time (in utero, postnatal growth, maturity, aging and death, as well as evolution) and length scales (a single bone like a femur, m; a sample of bone tissue, mm-cm; a cell and its local environment, μm; down to the length scale of the cell's own skeleton, the cytoskeleton, nm). First we introduce the concept of flow in bone and the three calibers of porosity through which fluid flows. Then we describe, in the context of organ-tissue, tissue-cell and cell-molecule length scales, both multiscale computational models and experimental methods to predict flow in bone and to understand the flow of fluid as a means to deliver chemical and mechanical cues in bone. Addressing a number of studies in the context of multiple length and time scales, the importance of appropriate boundary conditions, site specific material parameters, permeability measures and even micro-nanoanatomically correct geometries are discussed in context of model predictions and their value for understanding multiscale mechanobiology of bone. Insights from these multiscale computational modeling and experimental methods are providing us with a means to predict, engineer and manufacture bone tissue in the laboratory and in the human body. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. SYNTOR: A synthetic daily weather generator version 3.4 user manual

    USDA-ARS?s Scientific Manuscript database

    Existing records of weather observations are often too short to conduct long duration hydrologic and environmental computer simulations. A computer program can be used to generate synthetic weather data to increase the length of existing weather records. SYNTOR, which stands for SYNthetic weather g...
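The truncated abstract does not specify SYNTOR's internals, but the classic core of daily weather generators of this kind is a two-state first-order Markov chain for precipitation occurrence. The sketch below illustrates that idea under stated assumptions (transition probabilities are invented; SYNTOR's actual parameterization may differ):

```python
import random

def generate_wet_dry(p_wd, p_ww, n_days, seed=0):
    """Two-state first-order Markov chain for daily precipitation
    occurrence, the textbook core of synthetic weather generators.
    p_wd: P(wet today | dry yesterday); p_ww: P(wet today | wet yesterday).
    Returns a 0/1 (dry/wet) sequence of arbitrary length, which is how a
    generator extends a short observed record."""
    rng = random.Random(seed)
    state = 0  # start on a dry day
    seq = []
    for _ in range(n_days):
        p = p_ww if state == 1 else p_wd
        state = 1 if rng.random() < p else 0
        seq.append(state)
    return seq

# Ten synthetic years; stationary wet fraction is p_wd / (1 + p_wd - p_ww) = 1/3.
series = generate_wet_dry(p_wd=0.2, p_ww=0.6, n_days=3650)
wet_frac = sum(series) / len(series)
```

A real generator would then draw precipitation amounts and correlated temperature/radiation values conditioned on the wet/dry state.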

  8. Spectral algorithms for multiple scale localized eigenfunctions in infinitely long, slightly bent quantum waveguides

    NASA Astrophysics Data System (ADS)

    Boyd, John P.; Amore, Paolo; Fernández, Francisco M.

    2018-03-01

    A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation which here is a two-dimensional Helmholtz equation, ψxx + ψyy + Eψ = 0, where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ where the perturbation series may be inaccurate and also to check the perturbation expansion when ɛ is small. The perturbation-induced change-in-eigenvalue, δ ≡ E(ɛ) - E(0), is O(ɛ²). We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1/ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function.
In the down-channel coordinate, X ∈ (-∞, ∞), Fourier domain truncation using the change of coordinate X = sinh(Lt) is considerably more efficient than rational Chebyshev functions TBn(X; L). All the spectral methods, however, yielded the required accuracy on a desktop computer.
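The sinh change of coordinate can be illustrated in one dimension. The sketch below (an illustration, not the authors' code) uses X = sinh(Lt), dX = L cosh(Lt) dt to map a truncated t interval onto the whole real line, here to evaluate an integral over X ∈ (-∞, ∞); the same map lets a Fourier grid in t resolve an infinite down-channel coordinate:

```python
import math

def mapped_integral(f, L=1.0, t_max=4.0, n=2001):
    """Evaluate the integral of f over X in (-inf, inf) via the change of
    variable X = sinh(L t), dX = L cosh(L t) dt, truncating t to
    [-t_max, t_max] and applying the trapezoidal rule. Because f(X(t))
    decays to negligible size at the truncated endpoints, the trapezoidal
    rule is spectrally accurate here."""
    h = 2.0 * t_max / (n - 1)
    total = 0.0
    for k in range(n):
        t = -t_max + k * h
        X = math.sinh(L * t)
        w = L * math.cosh(L * t) * h   # Jacobian times quadrature weight
        if k == 0 or k == n - 1:
            w *= 0.5                   # trapezoidal end weights
        total += f(X) * w
    return total

# Gaussian test: integral of exp(-X^2) over the real line is sqrt(pi).
val = mapped_integral(lambda X: math.exp(-X * X))
```

Since sinh grows exponentially, a modest t interval (|t| ≤ 4 with L = 1 reaches |X| ≈ 27) covers a huge physical range while clustering points near the deformation at X = 0, which is the efficiency advantage cited above.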

  9. Endoscopic and percutaneous drainage of symptomatic walled-off pancreatic necrosis reduces hospital stay and radiographic resources.

    PubMed

    Gluck, Michael; Ross, Andrew; Irani, Shayan; Lin, Otto; Hauptmann, Ellen; Siegal, Justin; Fotoohi, Mehran; Crane, Robert; Robinson, David; Kozarek, Richard A

    2010-12-01

    Walled-off pancreatic necrosis (WOPN), a complication of severe acute pancreatitis (SAP), can become infected, obstruct adjacent structures, and result in clinical deterioration of patients. Patients with WOPN have prolonged hospitalizations, needing multiple radiologic and medical interventions. We compared an established treatment of WOPN, standard percutaneous drainage (SPD), with combined modality therapy (CMT), in which endoscopic transenteric stents were added to a regimen of percutaneous drains. Symptomatic patients with WOPN between January 2006 and August 2009 were treated with SPD (n = 43, 28 male) or CMT (n = 23, 17 male) and compared by disease severity, length of hospitalization, duration of drainage, complications, and number of radiologic and endoscopic procedures. Patient age (59 vs 54 years), sex (77% vs 58% male), computed tomography severity index (8.0 vs 7.2), number of endoscopic retrograde cholangiopancreatographies (2.0 vs 2.6), and percentage with disconnected pancreatic ducts (50% vs 46%) were equivalent in the CMT and SPD arms, respectively. Patients undergoing CMT had significantly decreased length of hospitalization (26 vs 55 days, P < .0026), duration of external drainage (83.9 vs 189 days, P < .002), number of computed tomography scans (8.95 vs 14.3, P < .002), and drain studies (6.5 vs 13, P < .0001). Patients in the SPD arm had more complications. For patients with symptomatic WOPN, CMT provided a more effective and safer management technique, resulting in shorter hospitalizations and fewer radiologic procedures than SPD. Copyright © 2010 AGA Institute. Published by Elsevier Inc. All rights reserved.

  10. Numerical Modelling of the Sound Fields in Urban Streets with Diffusely Reflecting Boundaries

    NASA Astrophysics Data System (ADS)

    KANG, J.

    2002-12-01

    A radiosity-based theoretical/computer model has been developed to study the fundamental characteristics of the sound fields in urban streets resulting from diffusely reflecting boundaries, and to investigate the effectiveness of architectural changes and urban design options on noise reduction. Comparison between the theoretical prediction and the measurement in a scale model of an urban street shows very good agreement. Computations using the model in hypothetical rectangular streets demonstrate that though the boundaries are diffusely reflective, the sound attenuation along the length is significant, typically at 20-30 dB/100 m. The sound distribution in a cross-section is generally even unless the cross-section is very close to the source. In terms of the effectiveness of architectural changes and urban design options, it has been shown that over 2-4 dB extra attenuation can be obtained either by increasing boundary absorption evenly or by adding absorbent patches on the façades or the ground. Reducing building height has a similar effect. A gap between buildings can provide about 2-3 dB extra sound attenuation, especially in the vicinity of the gap. The effectiveness of air absorption on increasing sound attenuation along the length could be 3-9 dB at high frequencies. If a treatment is effective with a single source, it is also effective with multiple sources. In addition, it has been demonstrated that if the façades in a street are diffusely reflective, the sound field of the street does not change significantly whether the ground is diffusely or geometrically reflective.

  11. EFFECTS OF FLUID AND COMPUTED TOMOGRAPHIC TECHNICAL FACTORS ON CONSPICUITY OF CANINE AND FELINE NASAL TURBINATES

    PubMed Central

    Uosyte, Raimonda; Shaw, Darren J; Gunn-Moore, Danielle A; Fraga-Manteiga, Eduardo; Schwarz, Tobias

    2015-01-01

    Turbinate destruction is an important diagnostic criterion in canine and feline nasal computed tomography (CT). However decreased turbinate visibility may also be caused by technical CT settings and nasal fluid. The purpose of this experimental, crossover study was to determine whether fluid reduces conspicuity of canine and feline nasal turbinates in CT and if so, whether CT settings can maximize conspicuity. Three canine and three feline cadaver heads were used. Nasal slabs were CT-scanned before and after submerging them in a water bath; using sequential, helical, and ultrahigh resolution modes; with images in low, medium, and high frequency image reconstruction kernels; and with application of additional posterior fossa optimization and high contrast enhancing filters. Visible turbinate length was measured by a single observer using manual tracing. Nasal density heterogeneity was measured using the standard deviation (SD) of mean nasal density from a region of interest in each nasal cavity. Linear mixed-effect models using the R package ‘nlme’, multivariable models and standard post hoc Tukey pair-wise comparisons were performed to investigate the effect of several variables (nasal content, scanning mode, image reconstruction kernel, application of post reconstruction filters) on measured visible total turbinate length and SD of mean nasal density. All canine and feline water-filled nasal slabs showed significantly decreased visibility of nasal turbinates (P < 0.001). High frequency kernels provided the best turbinate visibility and highest SD of aerated nasal slabs, whereas medium frequency kernels were optimal for water-filled nasal slabs. Scanning mode and filter application had no effect on turbinate visibility. PMID:25867935

  12. Isolated Effect of Geometry on Mitral Valve Function for In-Silico Model Development

    PubMed Central

    Siefert, Andrew William; Rabbah, Jean-Pierre Michel; Saikrishnan, Neelakantan; Kunzelman, Karyn Susanne; Yoganathan, Ajit Prithivaraj

    2013-01-01

    Computational models for the heart’s mitral valve (MV) exhibit several uncertainties which may be reduced by further developing these models using ground-truth data sets. The present study generated a ground-truth data set by quantifying the effects of isolated mitral annular flattening, symmetric annular dilatation, symmetric papillary muscle displacement, and asymmetric papillary muscle displacement on leaflet coaptation, mitral regurgitation (MR), and anterior leaflet strain. MVs were mounted in an in vitro left heart simulator and tested under pulsatile hemodynamics. Mitral leaflet coaptation length, coaptation depth, tenting area, MR volume, MR jet direction, and anterior leaflet strain in the radial and circumferential directions were successfully quantified for increasing levels of geometric distortion. From these data, increasing levels of isolated papillary muscle displacement resulted in the greatest mean change in coaptation depth (70% increase), tenting area (150% increase), and radial leaflet strain (37% increase) while annular dilatation resulted in the largest mean change in coaptation length (50% decrease) and regurgitation volume (134% increase). Regurgitant jets were centrally located for symmetric annular dilatation and symmetric papillary muscle displacement. Asymmetric papillary muscle displacement resulted in asymmetrically directed jets. Peak changes in anterior leaflet strain in the circumferential direction were smaller and exhibited non-significant differences across the tested conditions. When used together, this ground-truth data may be used to parametrically evaluate and develop modeling assumptions for both the MV leaflets and subvalvular apparatus. This novel data may improve MV computational models and provide a platform for the development of future surgical planning tools. PMID:24059354

  13. In-silico prediction of concentration-dependent viscosity curves for monoclonal antibody solutions

    PubMed Central

    Tomar, Dheeraj S.; Li, Li; Broulidakis, Matthew P.; Luksha, Nicholas G.; Burns, Christopher T.; Singh, Satish K.; Kumar, Sandeep

    2017-01-01

    ABSTRACT Early stage developability assessments of monoclonal antibody (mAb) candidates can help reduce risks and costs associated with their product development. Forecasting viscosity of highly concentrated mAb solutions is an important aspect of such developability assessments. Reliable predictions of concentration-dependent viscosity behaviors for mAb solutions in platform formulations can help screen or optimize drug candidates for flexible manufacturing and drug delivery options. Here, we present a computational method to predict concentration-dependent viscosity curves for mAbs solely from their sequence—structural attributes. This method was developed using experimental data on 16 different mAbs whose concentration-dependent viscosity curves were experimentally obtained under standardized conditions. Each concentration-dependent viscosity curve was fitted with a straight line, via logarithmic manipulations, and the values for intercept and slope were obtained. Intercept, which relates to antibody diffusivity, was found to be nearly constant. In contrast, slope, the rate of increase in solution viscosity with solute concentration, varied significantly across different mAbs, demonstrating the importance of intermolecular interactions toward viscosity. Next, several molecular descriptors for electrostatic and hydrophobic properties of the 16 mAbs derived using their full-length homology models were examined for potential correlations with the slope. An equation consisting of hydrophobic surface area of full-length antibody and charges on VH, VL, and hinge regions was found to be capable of predicting the concentration-dependent viscosity curves of the antibody solutions. Availability of this computational tool may facilitate material-free high-throughput screening of antibody candidates during early stages of drug discovery and development. PMID:28125318
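The "straight line via logarithmic manipulations" step above amounts to fitting ln(viscosity) linearly against concentration, so that the intercept captures diffusivity and the slope captures the concentration dependence. A minimal sketch on synthetic data follows (the paper's exact linearisation and units may differ; the fit here is ordinary least squares):

```python
import math

def fit_log_viscosity(conc, visc):
    """Least-squares fit of ln(viscosity) = intercept + slope * concentration,
    mirroring the log-linearisation described in the abstract (the paper's
    exact functional form may differ). Returns (intercept, slope)."""
    y = [math.log(v) for v in visc]
    n = len(conc)
    mx = sum(conc) / n
    my = sum(y) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(conc, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Synthetic mAb-like data: viscosity rising exponentially with concentration
# (mg/mL), generated from known intercept 0.5 and slope 0.02.
conc = [20.0, 60.0, 100.0, 150.0]
visc = [math.exp(0.5 + 0.02 * c) for c in conc]
b0, b1 = fit_log_viscosity(conc, visc)
```

In the study's framework, b0 is roughly common across mAbs while b1 varies with intermolecular interactions, so predicting b1 from sequence-structural descriptors is enough to reconstruct the whole viscosity curve.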

  14. In-silico prediction of concentration-dependent viscosity curves for monoclonal antibody solutions.

    PubMed

    Tomar, Dheeraj S; Li, Li; Broulidakis, Matthew P; Luksha, Nicholas G; Burns, Christopher T; Singh, Satish K; Kumar, Sandeep

    2017-04-01

    Early stage developability assessments of monoclonal antibody (mAb) candidates can help reduce risks and costs associated with their product development. Forecasting viscosity of highly concentrated mAb solutions is an important aspect of such developability assessments. Reliable predictions of concentration-dependent viscosity behaviors for mAb solutions in platform formulations can help screen or optimize drug candidates for flexible manufacturing and drug delivery options. Here, we present a computational method to predict concentration-dependent viscosity curves for mAbs solely from their sequence-structural attributes. This method was developed using experimental data on 16 different mAbs whose concentration-dependent viscosity curves were experimentally obtained under standardized conditions. Each concentration-dependent viscosity curve was fitted with a straight line, via logarithmic manipulations, and the values for intercept and slope were obtained. Intercept, which relates to antibody diffusivity, was found to be nearly constant. In contrast, slope, the rate of increase in solution viscosity with solute concentration, varied significantly across different mAbs, demonstrating the importance of intermolecular interactions toward viscosity. Next, several molecular descriptors for electrostatic and hydrophobic properties of the 16 mAbs derived using their full-length homology models were examined for potential correlations with the slope. An equation consisting of hydrophobic surface area of full-length antibody and charges on VH, VL, and hinge regions was found to be capable of predicting the concentration-dependent viscosity curves of the antibody solutions. Availability of this computational tool may facilitate material-free high-throughput screening of antibody candidates during early stages of drug discovery and development.

  15. Technical note: Design flood under hydrological uncertainty

    NASA Astrophysics Data System (ADS)

    Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco

    2017-07-01

    Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis, and neglecting hydrologic uncertainty. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints that is numerically solved in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed by using a correction coefficient that modifies the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage the hydrologic uncertainty and to go beyond the use of traditional safety factors. With all the other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.
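The modified UNCODE procedure described above multiplies a standard, uncertainty-free design flood by a correction coefficient depending only on sample length and return period. A sketch of that two-step structure follows, using a Gumbel (EV1) flood frequency curve fitted by the method of moments; the correction value used here is illustrative, since the actual coefficient equations are given in the paper:

```python
import math

def gumbel_design_flood(mean, std, return_period):
    """Uncertainty-free Gumbel (EV1) design flood for a given return period,
    with location/scale from the method of moments."""
    alpha = math.sqrt(6.0) * std / math.pi       # scale parameter
    u = mean - 0.5772 * alpha                    # location (Euler-Mascheroni const.)
    p = 1.0 - 1.0 / return_period                # non-exceedance probability
    return u - alpha * math.log(-math.log(p))    # Gumbel quantile

def corrected_design_flood(mean, std, return_period, correction):
    """UNCODE-style corrected value: standard estimate times a correction
    coefficient that, per the abstract, depends only on sample length and
    return period. The coefficient here is an illustrative placeholder,
    not a value from the paper's tables."""
    return correction * gumbel_design_flood(mean, std, return_period)

# Hypothetical annual-maximum statistics (m^3/s) and a 100-year design value.
q100 = gumbel_design_flood(mean=500.0, std=200.0, return_period=100)
q100_unc = corrected_design_flood(500.0, 200.0, 100, correction=1.15)
```

Consistent with the closing sentence of the abstract, a longer sample would shrink the correction factor toward 1, pulling q100_unc back toward the uncertainty-free q100.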

  16. Effects of increased apical enlargement on the amount of unprepared areas and coronal dentine removal: a micro-computed tomography study.

    PubMed

    Pérez, A R; Alves, F R F; Marceliano-Alves, M F; Provenzano, J C; Gonçalves, L S; Neves, A A; Siqueira, J F

    2018-06-01

    To evaluate the effects of progressive apical enlargement on the amount of unprepared root canal surface area and remaining dentine thickness. The root canals of 30 extracted mandibular incisors with Vertucci's type I configuration were instrumented with rotary HyFlex CM instruments (Coltene-Whaledent, Altstätten, Switzerland) up to 4 instruments larger than the first one that bound at the working length (WL). Teeth were scanned in a micro-computed tomography (micro-CT) device before canal preparation and after instrumentation with the 2nd, 3rd and 4th larger instruments. The amount of unprepared surface area in the full canal or in the apical 4 mm as well as the remaining dentine thickness at 10 mm from the WL were calculated and compared. The general linear model for repeated measures adjusted by Bonferroni's post hoc test was used for statistical analysis. There was a significant reduction in the amount of unprepared areas after each increase in preparation size (P < 0.01). This was observed for both the full canal length and the 4-mm apical segment. The amount of remaining dentine was also significantly reduced after each file size (P < 0.01). However, dentine thickness always remained greater than 1 mm, even after using the largest instrument. Apical preparations up to 4 instruments larger than the first one to bind at the WL caused a significant progressive reduction in the unprepared canal area. © 2017 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  17. Computer tablet distraction reduces pain and anxiety in pediatric burn patients undergoing hydrotherapy: A randomized trial.

    PubMed

    Burns-Nader, Sherwood; Joe, Lindsay; Pinion, Kelly

    2017-09-01

    Distraction is often used in conjunction with analgesics to minimize pain in pediatric burn patients during treatment procedures. Computer tablets provide many options for distraction items in one tool and are often used during medical procedures. Few studies have examined the effectiveness of tablet distraction in improving the care of pediatric burn patients. This study examines the effectiveness of tablet distraction provided by a child life specialist to minimize pain and anxiety in pediatric burn patients undergoing hydrotherapy. Thirty pediatric patients (4-12) undergoing hydrotherapy for the treatment of burns participated in this randomized clinical trial. The tablet distraction group received tablet distraction provided by a child life specialist while those in the control group received standard care. Pain was assessed through self-reports and observation reports. Anxiety was assessed through behavioral observations. Length of procedure was also recorded. Nurses reported significantly less pain for the tablet distraction group compared to the control group. There was no significant difference between groups on self-reported pain. The tablet distraction group displayed significantly less anxiety during the procedure compared to the control group. Also, the tablet distraction group returned to baseline after the procedure while those in the control group displayed higher anxiety post-procedure. There was no difference in the length of the procedure between groups. These findings suggest tablet distraction provided by a child life specialist may be an effective method for improving pain and anxiety in children undergoing hydrotherapy treatment for burns. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.

  18. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem that always requires fewer or, at most, the same number of arithmetic operations as other feasible modalities. The DFTCOMM method thus outperforms the competing pruning techniques reported in the literature in the sense of attainable savings in the number of required arithmetic operations. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. 
We show that, in sensing scenarios with either sparse or non-sparse data Fourier spectra, the DFTCOMM technique is robust against such model uncertainties, in the sense of being insensitive to sparsity/non-sparsity restrictions and to variability of the operating parameters.
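The first DFTCOMM modality, direct computation of only the needed bins, is easy to illustrate. The sketch below is a generic pruned direct DFT, not the paper's implementation: when only a few output bins are required, evaluating them directly costs O(N) operations each, which can undercut a full O(N log N) FFT, and the commutation logic (not reproduced here) chooses among such modalities per problem size:

```python
import cmath

def pruned_dft(x, bins):
    """Direct evaluation of only the requested DFT bins of sequence x --
    a generic output-pruned DFT in the spirit of DFTCOMM's first modality.
    Cost is O(N * len(bins)) instead of O(N log N) for all N bins."""
    N = len(x)
    out = {}
    for k in bins:
        w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor for bin k
        acc = 0.0 + 0.0j
        for n in range(N):
            acc += x[n] * w ** n               # X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}
        out[k] = acc
    return out

# A single complex tone at bin 3 of a length-16 sequence: all its DFT
# energy should land in bin 3.
N = 16
x = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]
spectrum = pruned_dft(x, bins=[0, 3, 5])
```

For this 16-point example with 3 requested bins, the direct route does 48 complex multiply-accumulates versus 64 butterflies' worth of work for a full radix-2 FFT, a toy version of the operation-count trade-off the paper optimizes over.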

  19. Adaptive statistical iterative reconstruction use for radiation dose reduction in pediatric lower-extremity CT: impact on diagnostic image quality.

    PubMed

    Shah, Amisha; Rees, Mitchell; Kar, Erica; Bolton, Kimberly; Lee, Vincent; Panigrahy, Ashok

    2018-06-01

    For the past several years, increased levels of imaging radiation and cumulative radiation to children have been a significant concern. Although several measures have been taken to reduce radiation dose during computed tomography (CT) scans, the newer dose reduction software adaptive statistical iterative reconstruction (ASIR) has been an effective technique in reducing radiation dose. To our knowledge, no published studies assess the effect of ASIR on extremity CT scans in children. To compare radiation dose, image noise, and subjective image quality in pediatric lower extremity CT scans acquired with and without ASIR. The study group consisted of 53 patients imaged on a CT scanner equipped with ASIR software. The control group consisted of 37 patients whose CT images were acquired without ASIR. Image noise, Computed Tomography Dose Index (CTDI) and dose length product (DLP) were measured. Two pediatric radiologists rated the studies in subjective categories: image sharpness, noise, diagnostic acceptability, and artifacts. The CTDI (p value = 0.0184) and DLP (p value <0.0002) were significantly decreased with the use of ASIR compared with non-ASIR studies. However, the subjective ratings for sharpness (p < 0.0001) and diagnostic acceptability of the ASIR images (p < 0.0128) were decreased compared with standard, non-ASIR CT studies. Adaptive statistical iterative reconstruction reduces radiation dose for lower extremity CTs in children, but at the expense of diagnostic imaging quality. Further studies are warranted to determine the specific utility of ASIR for pediatric musculoskeletal CT imaging.

  20. Modified Linear Theory Aircraft Design Tools and Sonic Boom Minimization Strategy Applied to Signature Freezing via F-function Lobe Balancing

    NASA Astrophysics Data System (ADS)

    Jung, Timothy Paul

    Commercial supersonic travel has strong business potential; however, in order for the Federal Aviation Administration to lift its ban on supersonic flight overland, designers must reduce aircraft sonic boom strength to an acceptable level. An efficient methodology and associated tools for designing aircraft for minimized sonic booms are presented. The computer-based preliminary design tool, RapidF, based on modified linear theory, enables quick assessment of an aircraft's sonic boom with run times less than 30 seconds on a desktop computer. A unique feature of RapidF is that it tracks where on the aircraft each segment of the sonic boom came from, enabling precise modifications and speeding the design process. Sonic booms from RapidF are compared to flight test data, showing that it is capable of predicting sonic boom duration, overpressure, and interior shock locations. After the preliminary design is complete, scaled flight tests should be conducted to validate the low boom design. When conducting such tests, it is insufficient to scale just the length; thus, equations to scale the weight and propagation distance are derived. Using RapidF, a conceptual supersonic business jet design is presented that uses F-function lobe balancing to create a frozen sonic boom using lifting surfaces. The leading shock is reduced from 1.4 to 0.83 psf, and the trailing shock from 1.2 to 0.87 psf, 41% and 28% reductions respectively. By changing the incidence angle of the surfaces, different sonic boom shapes can be created, allowing the lobes to be re-balanced for new flight conditions. Computational fluid dynamics is conducted to validate the sonic boom predictions. Off-design analysis is presented that varies weight, altitude, Mach number, and propagation angle, demonstrating that lobe balancing is robust. 
Finally, the Perceived Level of Loudness metric is analyzed, resulting in a modified design that incorporates other boom minimization techniques to further reduce the sonic boom.

  1. A time and imaging cost analysis of low-risk ED observation patients: a conservative 64-section computed tomography coronary angiography "triple rule-out" compared to nuclear stress test strategy.

    PubMed

    Takakuwa, Kevin M; Halpern, Ethan J; Shofer, Frances S

    2011-02-01

    The study aimed to examine time and imaging costs of 2 different imaging strategies for low-risk emergency department (ED) observation patients with acute chest pain or symptoms suggestive of acute coronary syndrome. We compared a "triple rule-out" (TRO) 64-section multidetector computed tomography protocol with nuclear stress testing. This was a prospective observational cohort study of consecutive ED patients who were enrolled in our chest pain observation protocol during a 16-month period. Our standard observation protocol included a minimum of 2 sets of cardiac enzymes at least 6 hours apart followed by a nuclear stress test. Once a week, observation patients were offered a TRO (to evaluate for coronary artery disease, thoracic dissection, and pulmonary embolus) multidetector computed tomography with the option of further stress testing for those patients found to have evidence of coronary artery disease. We analyzed 832 consecutive observation patients including 214 patients who underwent the TRO protocol. Mean total length of stay was 16.1 hours for TRO patients, 16.3 hours for TRO plus other imaging test, 22.6 hours for nuclear stress testing, 23.3 hours for nuclear stress testing plus other imaging tests, and 23.7 hours for nuclear stress testing plus TRO (P < .0001 for TRO and TRO + other test compared to stress test ± other test). Mean imaging times were 3.6, 4.4, 5.9, 7.5, and 6.6 hours, respectively (P < .05 for TRO and TRO + other test compared to stress test ± other test). Mean imaging costs were $1307 for TRO patients vs $945 for nuclear stress testing. Triple rule-out reduced total length of stay and imaging time but incurred higher imaging costs. A per-hospital analysis would be needed to determine if patient time savings justify the higher imaging costs. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Use of Computational Fluid Dynamics for improving freeze-dryers design and process understanding. Part 2: Condenser duct and valve modelling.

    PubMed

    Marchisio, Daniele L; Galan, Miquel; Barresi, Antonello A

    2018-05-05

This manuscript shows how computational models, mainly based on Computational Fluid Dynamics (CFD), can be used to simulate different parts of industrial freeze-drying equipment and to properly design them; in particular, this part considers the duct connecting the chamber with the condenser, with its valves, while the chamber design and its effect on drying kinetics were investigated in Part 1. Such an approach allows a much deeper process understanding and assessment of the critical aspects of lyophilisation. This methodology is demonstrated on freeze-drying equipment of different sizes, investigating the influence of valve type (butterfly and mushroom) and shape on duct conductance and critical flow conditions. The role of the inlet and boundary conditions considered has been assessed, also by modelling the whole apparatus including chamber and condenser, and the influence of the duct diameter has been discussed; the results show little dependence of the relationship between critical mass flux and chamber pressure on the duct size. Results concerning the fluid dynamics of a simple disk valve, a profiled butterfly valve and a mushroom valve installed in a medium-size horizontal condenser are presented. In these cases as well, the maximum allowable flow when sonic flow conditions are reached can be described by a correlation similar to that found valid for empty ducts; for the mushroom valve the parameters depend on the valve opening length. The possibility of using the equivalent length concept, and of extending the validity of the results obtained for empty ducts, is also discussed. Finally, the presence of the inert gas modifies the conductance of the duct, reducing the maximum flow rate of water that can be removed through it before the flow is choked; this also requires a proper over-sizing of the duct (or duct-butterfly valve system). Copyright © 2018. Published by Elsevier B.V.

  3. Crack Turning and Arrest Mechanisms for Integral Structure

    NASA Technical Reports Server (NTRS)

    Pettit, Richard; Ingraffea, Anthony

    1999-01-01

In the course of several years of research efforts to predict crack turning and flapping in aircraft fuselage structures and other problems related to crack turning, the 2nd-order maximum tangential stress theory has been identified as the theory most capable of predicting the observed test results. This theory requires knowledge of a material-specific characteristic length, and also a computation of the stress intensity factors and the T-stress, or second-order term in the asymptotic stress field in the vicinity of the crack tip. A characteristic length, r(sub c), is proposed for ductile materials pertaining to the onset of plastic instability, as opposed to the void spacing theories espoused by previous investigators. For the plane stress case, an approximate estimate of r(sub c) is obtained from the asymptotic field for strain-hardening materials given by Hutchinson, Rice and Rosengren (HRR). A previous study using high-order finite element methods to calculate T-stresses by contour integrals resulted in extremely high-accuracy values for selected test specimen geometries, and a theoretical error estimation parameter was defined. In the present study, it is shown that a large portion of the error in finite element computations of both K and T is systematic, and can be corrected after the initial solution if the finite element implementation utilizes a similar crack tip discretization scheme for all problems. This scheme is applied for two-dimensional problems to a p-version finite element code, showing that sufficiently accurate values of both K(sub I) and T can be obtained with fairly low-order elements if correction is used. T-stress correction coefficients are also developed for the singular crack tip rosette utilized in the adaptive mesh finite element code FRANC2D, and shown to reduce the error in the computed T-stress significantly. 
Stress intensity factor correction was not attempted for FRANC2D because it employs a highly accurate quarter-point scheme to obtain stress intensity factors.

  4. Stepping strategies for regulating gait adaptability and stability.

    PubMed

    Hak, Laura; Houdijk, Han; Steenbrink, Frans; Mert, Agali; van der Wurff, Peter; Beek, Peter J; van Dieën, Jaap H

    2013-03-15

    Besides a stable gait pattern, gait in daily life requires the capability to adapt this pattern in response to environmental conditions. The purpose of this study was to elucidate the anticipatory strategies used by able-bodied people to attain an adaptive gait pattern, and how these strategies interact with strategies used to maintain gait stability. Ten healthy subjects walked in a Computer Assisted Rehabilitation ENvironment (CAREN). To provoke an adaptive gait pattern, subjects had to hit virtual targets, with markers guided by their knees, while walking on a self-paced treadmill. The effects of walking with and without this task on walking speed, step length, step frequency, step width and the margins of stability (MoS) were assessed. Furthermore, these trials were performed with and without additional continuous ML platform translations. When an adaptive gait pattern was required, subjects decreased step length (p<0.01), tended to increase step width (p=0.074), and decreased walking speed while maintaining similar step frequency compared to unconstrained walking. These adaptations resulted in the preservation of equal MoS between trials, despite the disturbing influence of the gait adaptability task. When the gait adaptability task was combined with the balance perturbation subjects further decreased step length, as evidenced by a significant interaction between both manipulations (p=0.012). In conclusion, able-bodied people reduce step length and increase step width during walking conditions requiring a high level of both stability and adaptability. Although an increase in step frequency has previously been found to enhance stability, a faster movement, which would coincide with a higher step frequency, hampers accuracy and may consequently limit gait adaptability. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Effects of Orthographic and Phonological Word Length on Memory for Lists Shown at RSVP and STM Rates

    ERIC Educational Resources Information Center

    Coltheart, Veronika; Mondy, Stephen; Dux, Paul E.; Stephenson, Lisa

    2004-01-01

    This article reports 3 experiments in which effects of orthographic and phonological word length on memory were examined for short lists shown at rapid serial visual presentation (RSVP) and short-term memory (STM) rates. Only visual-orthographic length reduced RSVP serial recall, whereas both orthographic and phonological length lowered recall for…

  6. Factors Influencing Trainee Participation in Computer Software Applications Training.

    ERIC Educational Resources Information Center

    Alexander, Melody Webler

    1993-01-01

    Participants (n=130) who had completed training in WordPerfect, Lotus 1-2-3, and dBase III+ completed a questionnaire related to demographic characteristics and factors that influence training participation. Trainees are participating in computer training for personal reasons, seeking convenient time, location, and length. Child care or…

  7. Abelson tyrosine-protein kinase 2 regulates myoblast proliferation and controls muscle fiber length

    PubMed Central

    Lee, Jennifer K; Hallock, Peter T

    2017-01-01

    Muscle fiber length is nearly uniform within a muscle but widely different among different muscles. We show that Abelson tyrosine-protein kinase 2 (Abl2) has a key role in regulating myofiber length, as a loss of Abl2 leads to excessively long myofibers in the diaphragm, intercostal and levator auris muscles but not limb muscles. Increased myofiber length is caused by enhanced myoblast proliferation, expanding the pool of myoblasts and leading to increased myoblast fusion. Abl2 acts in myoblasts, but as a consequence of expansion of the diaphragm muscle, the diaphragm central tendon is reduced in size, likely contributing to reduced stamina of Abl2 mutant mice. Ectopic muscle islands, each composed of myofibers of uniform length and orientation, form within the central tendon of Abl2+/− mice. Specialized tendon cells, resembling tendon cells at myotendinous junctions, form at the ends of these muscle islands, suggesting that myofibers induce differentiation of tendon cells, which reciprocally regulate myofiber length and orientation. PMID:29231808

  8. Abelson tyrosine-protein kinase 2 regulates myoblast proliferation and controls muscle fiber length.

    PubMed

    Lee, Jennifer K; Hallock, Peter T; Burden, Steven J

    2017-12-12

Muscle fiber length is nearly uniform within a muscle but widely different among different muscles. We show that Abelson tyrosine-protein kinase 2 (Abl2) has a key role in regulating myofiber length, as a loss of Abl2 leads to excessively long myofibers in the diaphragm, intercostal and levator auris muscles but not limb muscles. Increased myofiber length is caused by enhanced myoblast proliferation, expanding the pool of myoblasts and leading to increased myoblast fusion. Abl2 acts in myoblasts, but as a consequence of expansion of the diaphragm muscle, the diaphragm central tendon is reduced in size, likely contributing to reduced stamina of Abl2 mutant mice. Ectopic muscle islands, each composed of myofibers of uniform length and orientation, form within the central tendon of Abl2+/- mice. Specialized tendon cells, resembling tendon cells at myotendinous junctions, form at the ends of these muscle islands, suggesting that myofibers induce differentiation of tendon cells, which reciprocally regulate myofiber length and orientation.

  9. Slotting Fins of Heat Exchangers to Provide Thermal Breaks

    NASA Technical Reports Server (NTRS)

    Scull, Timothy D.

    2003-01-01

Heat exchangers that include slotted fins (in contradistinction to continuous fins) have been invented. The slotting of the fins provides thermal breaks that reduce thermal conduction along the flow paths (longitudinal thermal conduction), which would otherwise reduce heat-transfer efficiency. By increasing the ratio between transverse thermal conduction (the desired heat-transfer conduction) and longitudinal thermal conduction, slotting of the fins can be exploited to (1) increase heat-transfer efficiency (thereby reducing operating cost) for a given heat-exchanger length or to (2) reduce the length (thereby reducing the weight and/or cost) of the heat exchanger needed to obtain a given heat-transfer efficiency. By reducing the length of a heat exchanger, one can reduce the pressure drop associated with the flow through it. In a case in which slotting enables the use of fins with thermal conductivity greater than could otherwise be tolerated on the basis of longitudinal thermal conduction, one can exploit that conductivity to make the fins longer (in the transverse direction) than they otherwise could be, thereby making it possible to build a heat exchanger that contains fewer channels and, therefore, weighs less, contains fewer potential leak paths, and can be constructed from fewer parts at reduced cost.

  10. Noise prediction of a subsonic turbulent round jet using the lattice-Boltzmann method

    PubMed Central

    Lew, Phoi-Tack; Mongeau, Luc; Lyrintzis, Anastasios

    2010-01-01

The lattice-Boltzmann method (LBM) was used to study the far-field noise generated from an unheated turbulent axisymmetric jet at Mach number Mj = 0.4. A commercial code based on the LBM kernel was used to simulate the turbulent flow exhausting from a pipe 10 jet radii in length. Near-field flow results such as jet centerline velocity decay rates and turbulence intensities were in agreement with experimental results and results from comparable LES studies. The predicted far-field sound pressure levels were within 2 dB of published experimental results. Weak unphysical tones were present at high frequency in the computed radiated sound pressure spectra. These tones are believed to be due to spurious sound wave reflections at boundaries between regions of varying voxel resolution. These “VR tones” did not appear to bias the underlying broadband noise spectrum, and they did not affect the overall levels significantly. The LBM appears to be a viable approach, comparable in accuracy to large eddy simulations, for the problem considered. The main advantages of this approach over Navier–Stokes based finite difference schemes may be a reduced computational cost, ease of including the nozzle in the computational domain, and ease of investigating nozzles with complex shapes. PMID:20815448

  11. libFLASM: a software library for fixed-length approximate string matching.

    PubMed

    Ayad, Lorraine A K; Pissis, Solon P P; Retha, Ahmad

    2016-11-10

Approximate string matching is the problem of finding all factors of a given text that are at a distance at most k from a given pattern. Fixed-length approximate string matching is the problem of finding all factors of a text of length n that are at a distance at most k from any factor of length ℓ of a pattern of length m. There exist bit-vector techniques to solve the fixed-length approximate string matching problem in time [Formula: see text] and space [Formula: see text] under the edit and Hamming distance models, where w is the size of the computer word; as such, these techniques are independent of the distance threshold k or the alphabet size. Fixed-length approximate string matching is a generalisation of approximate string matching and, hence, has numerous direct applications in computational molecular biology and elsewhere. We present and make available libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching under both the edit and the Hamming distance models. Moreover, we describe how fixed-length approximate string matching is applied to solve real problems by incorporating libFLASM into established applications for multiple circular sequence alignment as well as single and structured motif extraction. Specifically, we describe how it can be used to improve the accuracy of multiple circular sequence alignment in terms of the inferred likelihood-based phylogenies; and we also describe how it is used to efficiently find motifs in molecular sequences representing regulatory or functional regions. The comparison of the performance of the library to other algorithms shows that it is competitive, especially with increasing distance thresholds. Fixed-length approximate string matching is a generalisation of the classic approximate string matching problem. We present libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching. 
The extensive experimental results presented here suggest that other applications could benefit from using libFLASM, and thus further maintenance and development of libFLASM is desirable.
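    The fixed-length matching problem defined above can be illustrated with a naive sketch. This is not libFLASM's bit-vector algorithm, just a quadratic-time illustration with hypothetical names: report every text position whose length-ℓ window lies within Hamming distance k of some length-ℓ factor of the pattern.

    ```python
    def fixed_length_hamming_matches(text, pattern, ell, k):
        """Naive fixed-length approximate string matching (Hamming model):
        report (i, d) for every position i such that text[i:i+ell] is within
        Hamming distance d <= k of SOME length-ell factor of pattern."""
        # All distinct length-ell factors of the pattern.
        factors = {pattern[j:j + ell] for j in range(len(pattern) - ell + 1)}
        hits = []
        for i in range(len(text) - ell + 1):
            window = text[i:i + ell]
            # Minimum Hamming distance from this window to any pattern factor.
            best = min(sum(a != b for a, b in zip(window, f)) for f in factors)
            if best <= k:
                hits.append((i, best))
        return hits
    ```

    Bit-vector methods achieve the same result in time independent of k and the alphabet size; the sketch above is only meant to pin down what is being computed.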

  12. Does overgrowth of costal cartilage cause pectus carinatum? A three-dimensional computed tomography evaluation of rib length and costal cartilage length in patients with asymmetric pectus carinatum

    PubMed Central

    Park, Chul Hwan; Kim, Tae Hoon; Haam, Seok Jin; Lee, Sungsoo

    2013-01-01

OBJECTIVES To evaluate whether the overgrowth of costal cartilage may cause pectus carinatum using three-dimensional (3D) computed tomography (CT). METHODS Twenty-two patients with asymmetric pectus carinatum were included. The fourth, fifth and sixth ribs and costal cartilages were semi-automatically traced, and their full lengths were measured on three-dimensional CT images using curved multi-planar reformatted (MPR) techniques. The rib length and costal cartilage length, the total combined length of the rib and costal cartilage and the ratio of the cartilage and rib lengths (C/R ratio) in each patient were compared between the protruding side and the opposite side at the levels of the fourth, fifth and sixth ribs. RESULTS The length of the costal cartilage was not different between the more protruded side and the contralateral side (55.8 ± 9.8 mm vs 55.9 ± 9.3 mm at the fourth, 70 ± 10.8 mm vs 71.6 ± 10.8 mm at the fifth and 97.8 ± 13.2 mm vs 99.8 ± 15.5 mm at the sixth; P > 0.05). There were also no significant differences between the lengths of the ribs (265.8 ± 34.9 mm vs 266.3 ± 32.9 mm at the fourth, 279.7 ± 32.7 mm vs 280.6 ± 32.4 mm at the fifth and 283.8 ± 33.9 mm vs 283.9 ± 32.3 mm at the sixth; P > 0.05). There was no statistically significant difference in either the total length of rib and costal cartilage or the C/R ratio according to side of the chest (P > 0.05). CONCLUSIONS In patients with asymmetric pectus carinatum, the lengths of the fourth, fifth and sixth costal cartilage on the more protruded side were not different from those on the contralateral side. These findings suggest that overgrowth of costal cartilage cannot explain the asymmetric protrusion of the anterior chest wall and may not be the main cause of pectus carinatum. PMID:23868604

  13. Does overgrowth of costal cartilage cause pectus carinatum? A three-dimensional computed tomography evaluation of rib length and costal cartilage length in patients with asymmetric pectus carinatum.

    PubMed

    Park, Chul Hwan; Kim, Tae Hoon; Haam, Seok Jin; Lee, Sungsoo

    2013-11-01

To evaluate whether the overgrowth of costal cartilage may cause pectus carinatum using three-dimensional (3D) computed tomography (CT). Twenty-two patients with asymmetric pectus carinatum were included. The fourth, fifth and sixth ribs and costal cartilages were semi-automatically traced, and their full lengths were measured on three-dimensional CT images using curved multi-planar reformatted (MPR) techniques. The rib length and costal cartilage length, the total combined length of the rib and costal cartilage and the ratio of the cartilage and rib lengths (C/R ratio) in each patient were compared between the protruding side and the opposite side at the levels of the fourth, fifth and sixth ribs. The length of the costal cartilage was not different between the more protruded side and the contralateral side (55.8 ± 9.8 mm vs 55.9 ± 9.3 mm at the fourth, 70 ± 10.8 mm vs 71.6 ± 10.8 mm at the fifth and 97.8 ± 13.2 mm vs 99.8 ± 15.5 mm at the sixth; P > 0.05). There were also no significant differences between the lengths of the ribs (265.8 ± 34.9 mm vs 266.3 ± 32.9 mm at the fourth, 279.7 ± 32.7 mm vs 280.6 ± 32.4 mm at the fifth and 283.8 ± 33.9 mm vs 283.9 ± 32.3 mm at the sixth; P > 0.05). There was no statistically significant difference in either the total length of rib and costal cartilage or the C/R ratio according to side of the chest (P > 0.05). In patients with asymmetric pectus carinatum, the lengths of the fourth, fifth and sixth costal cartilage on the more protruded side were not different from those on the contralateral side. These findings suggest that overgrowth of costal cartilage cannot explain the asymmetric protrusion of the anterior chest wall and may not be the main cause of pectus carinatum.

  14. A study of pilot modeling in multi-controller tasks

    NASA Technical Reports Server (NTRS)

    Whitbeck, R. F.; Knight, J. R.

    1972-01-01

    A modeling approach, which utilizes a matrix of transfer functions to describe the human pilot in multiple input, multiple output control situations, is studied. The approach used was to extend a well established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed to be present in the experimental data. These are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.

  15. Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid

    PubMed Central

    Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz

    2017-01-01

Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead. PMID:28736582
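    The key-chain commitment that inf-TESLA inherits from TESLA can be sketched as follows. This is a minimal illustration of a single one-way key chain, not the Dual Offset Key Chains mechanism itself; the function names are hypothetical and SHA-256 is assumed as the one-way function. The sender publishes K_0 as a commitment, uses keys K_1, K_2, … in order to MAC packets, and discloses each key later; receivers verify a disclosed key by hashing it back to the commitment.

    ```python
    import hashlib

    def make_key_chain(seed, n):
        """Generate a one-way key chain K_n -> ... -> K_0 by repeated hashing,
        as in TESLA: K_{i-1} = H(K_i). Returns [K_0, K_1, ..., K_n];
        K_0 is the public commitment."""
        chain = [seed]                                  # seed plays the role of K_n
        for _ in range(n):
            chain.append(hashlib.sha256(chain[-1]).digest())
        chain.reverse()                                 # chain[0] is now K_0
        return chain

    def verify_key(commitment, key, i):
        """Check a disclosed key K_i: hashing it i times must yield K_0."""
        for _ in range(i):
            key = hashlib.sha256(key).digest()
        return key == commitment
    ```

    Because H is one-way, an attacker who sees K_1 … K_i cannot forge the not-yet-disclosed K_{i+1}, which is what makes delayed disclosure authenticate the stream.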

  16. Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.

    PubMed

    Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin

    2015-01-01

Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation tested on three NVIDIA GPUs achieves speedups of up to 11.28× on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
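    The abstract does not detail the MRLE scheme, but the idea it modifies, run-length encoding of repeated symbols to cut memory consumption, can be sketched in a few lines (plain RLE with hypothetical function names, not the modified variant used on the GPU):

    ```python
    def run_length_encode(seq):
        """Collapse runs of identical symbols into (symbol, count) pairs,
        e.g. 'AAACC' -> [('A', 3), ('C', 2)]."""
        out = []
        for ch in seq:
            if out and out[-1][0] == ch:
                out[-1][1] += 1          # extend the current run
            else:
                out.append([ch, 1])      # start a new run
        return [(c, n) for c, n in out]

    def run_length_decode(pairs):
        """Invert the encoding by repeating each symbol by its count."""
        return "".join(c * n for c, n in pairs)
    ```

    Biological sequences and gap-padded alignment columns often contain long runs, which is why an RLE-style representation can shrink the working set enough to fit in limited GPU memory.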

  17. Changes in Flat Plate Wake Characteristics Obtained With Decreasing Plate Thickness

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2016-01-01

The near and very near wake of a flat plate with a circular trailing edge is investigated with data from direct numerical simulations. Computations were performed for four different Reynolds numbers based on plate thickness (D) and at constant plate length. The value of θ/D varies by a factor of approximately 20 in the computations (θ being the boundary layer momentum thickness at the trailing edge). The separating boundary layers are turbulent in all the cases. One objective of the study is to understand the changes in wake characteristics as the plate thickness is reduced (increasing θ/D). Vortex shedding is vigorous in the low θ/D cases, with a substantial decrease in shedding intensity in the largest θ/D case (for all practical purposes shedding becomes almost intermittent). Other characteristics that are significantly altered with increasing θ/D are the roll-up of the detached shear layers and the magnitude of fluctuations in shedding period. These effects are explored in depth. The effects of changing θ/D on the distributions of the time-averaged, near-wake velocity statistics are discussed.

  18. Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid.

    PubMed

    Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz

    2016-01-01

Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead.

  19. Effects of forebody geometry on subsonic boundary-layer stability

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1990-01-01

As part of an effort to develop computational techniques for the design of natural laminar flow fuselages, a computational study was made of the effect of forebody geometry on laminar boundary layer stability on axisymmetric body shapes. The effects of nose radius on the stability of the incompressible laminar boundary layer were computationally investigated using linear stability theory for body length Reynolds numbers representative of small and medium-sized airplanes. The steepness of the pressure gradient and the value of the minimum pressure (both functions of fineness ratio) govern the stability of laminar flow possible on an axisymmetric body at a given Reynolds number. It was found that to keep the laminar boundary layer stable for extended lengths, it is important to have a small nose radius. However, nose shapes with extremely small nose radii produce large pressure peaks at off-design angles of attack and can produce vortices which would adversely affect transition.

  20. Reduced Capillary Length Scale in the Application of Ostwald Ripening Theory to the Coarsening of Charged Colloidal Crystals in Electrolyte Solutions

    NASA Astrophysics Data System (ADS)

    Rowe, Jeffrey D.; Baird, James K.

    2007-06-01

    A colloidal crystal suspended in an electrolyte solution will ordinarily exchange ions with the surrounding solution and develop a net surface charge density and a corresponding double layer. The interfacial tension of the charged surface has contributions arising from: (a) background interfacial tension of the uncharged surface, (b) the entropy associated with the adsorption of ions on the surface, and (c) the polarizing effect of the electrostatic field within the double layer. The adsorption and polarization effects make negative contributions to the surface free energy and serve to reduce the interfacial tension below the value to be expected for the uncharged surface. The diminished interfacial tension leads to a reduced capillary length scale. According to the Ostwald ripening theory of particle coarsening, the reduced capillary length will cause the solute supersaturation to decay more rapidly and the colloidal particles to be smaller in size and greater in number than in the absence of the double layer. Although the length scale for coarsening should be little affected in the case of inorganic colloids, such as AgI, it should be greatly reduced in the case of suspensions of protein crystals, such as apoferritin, catalase, and thaumatin.
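    The capillary length invoked above enters through the Gibbs-Thomson relation; one standard form (a sketch using per-molecule conventions, which the abstract does not state) is

    ```latex
    % Gibbs–Thomson relation: the solubility c(r) of a particle of radius r
    % exceeds the flat-interface solubility c_\infty via the capillary length
    c(r) \;=\; c_\infty \exp\!\left(\frac{\ell_c}{r}\right)
         \;\approx\; c_\infty\!\left(1 + \frac{\ell_c}{r}\right),
    \qquad
    \ell_c \;=\; \frac{2\,\gamma\, v_m}{k_B T},
    ```

    where γ is the interfacial tension and v_m the molecular volume of the solute. A reduction in γ from ion adsorption and double-layer polarization therefore shrinks ℓ_c directly, which in the Ostwald ripening picture accelerates the decay of supersaturation and yields smaller, more numerous particles, consistent with the conclusions above.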

  1. Evaluation of a dedicated MDCT protocol using iterative image reconstruction after cervical spine trauma.

    PubMed

    Geyer, L L; Körner, M; Hempel, R; Deak, Z; Mueck, F G; Linsenmaier, U; Reiser, M F; Wirth, S

    2013-07-01

    To evaluate radiation exposure for 64-row computed tomography (CT) of the cervical spine comparing two optimized protocols using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR), respectively. Sixty-seven studies using FBP (scanner 1) were retrospectively compared with 80 studies using ASIR (scanner 2). The key scanning parameters were identical (120 kV dose modulation, 64 × 0.625 mm collimation, pitch 0.531:1). In protocol 2, the noise index (NI) was increased from 5 to 25, and ASIR and the high-definition (HD) mode were used. The scan length, CT dose index (CTDI), and dose-length product (DLP) were recorded. The image quality was analysed subjectively by using a three-point scale (0; 1; 2), and objectively by using a region of interest (ROI) analysis. Mann-Whitney U and Wilcoxon's test were used. In the FBP group, the mean CTDI was 21.43 mGy, mean scan length 186.3 mm, and mean DLP 441.15 mGy cm. In the ASIR group, the mean CTDI was 9.57 mGy, mean scan length 195.21 mm, and mean DLP 204.23 mGy cm. The differences were significant for CTDI and DLP (p < 0.001) and scan length (p = 0.01). There was no significant difference in the subjective image quality (p > 0.05). The estimated mean effective dose decreased from 2.38 mSv (FBP) to 1.10 mSv (ASIR). The radiation dose of 64-row MDCT can be reduced to a level comparable to plain radiography without loss of subjective image quality by implementation of ASIR in a dedicated cervical spine trauma protocol. These results might contribute to an improved relative risk-to-benefit ratio and support the justification of CT as a first-line imaging tool to evaluate cervical spine trauma. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
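    The dose quantities reported above are related by simple arithmetic, sketched below. The conversion coefficient k = 0.0054 mSv/(mGy·cm) is an assumption, not stated in the abstract, chosen because it reproduces the reported effective doses; note also that with tube-current modulation the reported DLP need not equal CTDI_vol times scan length exactly.

    ```python
    def dose_length_product(ctdi_vol_mgy, scan_length_mm):
        """DLP (mGy*cm) = CTDI_vol (mGy) x scan length (cm); a first-order
        relation that ignores tube-current modulation along the scan."""
        return ctdi_vol_mgy * scan_length_mm / 10.0

    def effective_dose(dlp_mgy_cm, k=0.0054):
        """Effective dose (mSv) = DLP x region-specific conversion
        coefficient k (mSv per mGy*cm). k = 0.0054 is an assumed
        cervical-spine value consistent with the abstract's figures."""
        return dlp_mgy_cm * k
    ```

    With the reported DLPs, effective_dose(441.15) ≈ 2.38 mSv (FBP) and effective_dose(204.23) ≈ 1.10 mSv (ASIR), matching the stated estimates.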

  2. Theory of a Traveling Wave Feed for a Planar Slot Array Antenna

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam

    2012-01-01

    Planar arrays of waveguide-fed slots have been employed in many radar and remote sensing applications. Such arrays are designed in the standing wave configuration because of high efficiency. Traveling wave arrays can produce greater bandwidth at the expense of efficiency due to power loss in the load or loads. Traveling wave planar slot arrays may be designed with a long feed waveguide consisting of centered-inclined coupling slots. The feed waveguide is terminated in a matched load, and the element spacing in the feed waveguide is chosen to produce a beam squinted from the broadside. The traveling wave planar slot array consists of a long feed waveguide containing resonant-centered inclined coupling slots in the broad wall, coupling power into an array of stacked radiating waveguides orthogonal to it. The radiating waveguides consist of longitudinal offset radiating slots in a standing wave configuration. For the traveling wave feed of a planar slot array, one has to design the tilt angle and length of each coupling slot such that the amplitude and phase of excitation of each radiating waveguide are close to the desired values. The coupling slot spacing is chosen for an appropriate beam squint. Scattering matrix parameters of resonant coupling slots are used in the design process to produce appropriate excitations of radiating waveguides with constraints placed only on amplitudes. Since the radiating slots in each radiating waveguide are designed to produce a certain total admittance, the scattering (S) matrix of each coupling slot is reduced to a 2x2 matrix. Elements of each 2x2 S-matrix and the amount of coupling into the corresponding radiating waveguide are expressed in terms of the element S11. S matrices are converted into transmission (T) matrices, and the T matrices are multiplied to cascade the coupling slots and waveguide sections, starting from the load end and proceeding towards the source. 
    While the use of non-resonant coupling slots may provide an additional degree of freedom in the design, resonant coupling slots simplify the design process. The amplitude of the wave going to the load is set at unity. The S11 parameter r of the coupling slot closest to the load is assigned an arbitrary value. A larger value of r will reduce the power dissipated in the load while increasing the reflection coefficient at the input port. It is now possible to obtain the excitation of the radiating waveguide closest to the load and the coefficients of the waves incident and reflected at the input port of this coupling slot. The parameter r of the next coupling slot is chosen to realize the excitation of its radiating waveguide. One continues this process, moving towards the source, until the parameter r, and hence the S11 parameter of the 4-port coupler, is known for each coupling slot. The goal is to produce the desired array aperture distribution in the feed direction. From an interpolation of the computed moment method data for the slot parameters, all the coupling slot tilt angles and lengths are obtained. From the excitations of the radiating waveguides computed from the coupling values, radiating slot parameters may be obtained so as to attain the desired total normalized slot admittances. This process yields the radiating slot parameters, offsets, and lengths. The design is repeated by choosing different values of r for the coupling slot closest to the load until the percentage of power dissipated in the load and the input reflection coefficient are satisfactory. Numerical results computed for the radiation pattern, the tilt angles and lengths of coupling slots, and the excitation phases of the radiating waveguides are presented for an array with uniform amplitude excitation. The design process has been validated using computer simulations. This design procedure is valid for non-uniform amplitude excitations as well.
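The cascading step described above, converting each coupling slot's S-matrix to a transmission (T) matrix and multiplying from the load end towards the source, can be sketched with ordinary 2×2 wave-matrix algebra. The conversion below uses one common convention and illustrative matched line sections; it is not the paper's moment-method data:

```python
import cmath

def s_to_t(s11, s12, s21, s22):
    """Convert one 2-port's S-parameters to a wave-cascading (T) matrix
    (one common convention; sign conventions differ between texts)."""
    return ((s12 - s11 * s22 / s21, s11 / s21),
            (-s22 / s21, 1.0 / s21))

def t_to_s(t):
    """Inverse conversion back to (S11, S12, S21, S22)."""
    (t11, t12), (t21, t22) = t
    return (t12 / t22, t11 - t12 * t21 / t22, 1.0 / t22, -t21 / t22)

def cascade(ta, tb):
    """2x2 matrix product: section A followed by section B."""
    (a11, a12), (a21, a22) = ta
    (b11, b12), (b21, b22) = tb
    return ((a11 * b11 + a12 * b21, a11 * b12 + a12 * b22),
            (a21 * b11 + a22 * b21, a21 * b12 + a22 * b22))

# Two matched line sections of electrical length theta cascade to one of 2*theta:
theta = 0.3
line = s_to_t(0.0, cmath.exp(-1j * theta), cmath.exp(-1j * theta), 0.0)
s11, s12, s21, s22 = t_to_s(cascade(line, line))
# s21 comes out as exp(-2j*theta) and s11 stays 0, as expected.
```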

  3. Condition monitoring and fault diagnosis of motor bearings using undersampled vibration signals from a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Lu, Siliang; Zhou, Peng; Wang, Xiaoxian; Liu, Yongbin; Liu, Fang; Zhao, Jiwen

    2018-02-01

    Wireless sensor networks (WSNs) which consist of miscellaneous sensors are used frequently in monitoring vital equipment. Benefiting from the development of data mining technologies, the massive data generated by sensors facilitate condition monitoring and fault diagnosis. However, too much data increase storage space, energy consumption, and computing resource, which can be considered fatal weaknesses for a WSN with limited resources. This study investigates a new method for motor bearings condition monitoring and fault diagnosis using the undersampled vibration signals acquired from a WSN. The proposed method, which is a fusion of the kurtogram, analog domain bandpass filtering, bandpass sampling, and demodulated resonance technique, can reduce the sampled data length while retaining the monitoring and diagnosis performance. A WSN prototype was designed, and simulations and experiments were conducted to evaluate the effectiveness and efficiency of the proposed method. Experimental results indicated that the sampled data length and transmission time of the proposed method result in a decrease of over 80% in comparison with that of the traditional method. Therefore, the proposed method indicates potential applications on condition monitoring and fault diagnosis of motor bearings installed in remote areas, such as wind farms and offshore platforms.
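The data-length saving comes from bandpass sampling: a resonance band of bandwidth B centred at a high frequency fc can be sampled at fs ≥ 2B rather than at twice the highest frequency, after which the band appears at a predictable alias frequency. A hedged sketch of that folding rule (the kurtogram and filtering stages of the paper are not reproduced, and the numbers are illustrative):

```python
def alias_frequency(fc, fs):
    """Apparent frequency of a tone at fc after sampling at rate fs,
    folded into the baseband [0, fs/2]."""
    f = abs(fc - fs * round(fc / fs))
    return min(f, fs - f)

# A resonance band centred at 12.8 kHz with ~400 Hz bandwidth can be sampled
# at 2 kHz (instead of above 25.6 kHz), landing at an 800 Hz alias:
print(alias_frequency(12800.0, 2000.0))  # -> 800.0
```

Sampling at 2 kHz instead of above 25.6 kHz is the kind of >80% reduction in sampled data length reported above.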

  4. Dynamic probability of reinforcement for cooperation: Random game termination in the centipede game.

    PubMed

    Krockow, Eva M; Colman, Andrew M; Pulford, Briony D

    2018-03-01

    Experimental games have previously been used to study principles of human interaction. Many such games are characterized by iterated or repeated designs that model dynamic relationships, including reciprocal cooperation. To enable the study of infinite game repetitions and to avoid endgame effects of lower cooperation toward the final game round, investigators have introduced random termination rules. This study extends previous research that has focused narrowly on repeated Prisoner's Dilemma games by conducting a controlled experiment of two-player, random termination Centipede games involving probabilistic reinforcement and characterized by the longest decision sequences reported in the empirical literature to date (24 decision nodes). Specifically, we assessed mean exit points and cooperation rates, and compared the effects of four different termination rules: no random game termination, random game termination with constant termination probability, random game termination with increasing termination probability, and random game termination with decreasing termination probability. We found that although mean exit points were lower for games with shorter expected game lengths, the subjects' cooperativeness was significantly reduced only in the most extreme condition with decreasing computer termination probability and an expected game length of two decision nodes. © 2018 Society for the Experimental Analysis of Behavior.
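For the constant-termination-probability rule above, the expected game length follows from a truncated geometric distribution: with per-node termination probability p and N decision nodes, E[length] = Σ_{k=0}^{N-1} (1−p)^k. A small illustrative calculation (not the authors' analysis):

```python
def expected_length(p, n_nodes):
    """Expected number of decision nodes reached when the game terminates at
    each node with constant probability p, truncated at n_nodes."""
    q = 1.0 - p
    return sum(q ** k for k in range(n_nodes))

# A 24-node game with p = 0.5 has an expected length of about 2 nodes:
print(expected_length(0.5, 24))
```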

  5. Some effects of swirl on turbulent mixing and combustion

    NASA Technical Reports Server (NTRS)

    Rubel, A.

    1972-01-01

    A general formulation of some effects of swirl on turbulent mixing is given. The basis for the analysis is that momentum transport is enhanced by turbulence resulting from rotational instability of the fluid field. An appropriate form for the turbulent eddy viscosity is obtained by mixing length type arguments. The result takes the form of a corrective factor that is a function of the swirl and acts to increase the eddy viscosity. The factor is based upon the initial mixing conditions implying that the rotational turbulence decays in a manner similar to that of free shear turbulence. Existing experimental data for free jet combustion are adequately matched by using the modifying factor to relate the effects of swirl on eddy viscosity. The model is extended and applied to the supersonic combustion of a ring jet of hydrogen injected into a constant area annular air stream. The computations demonstrate that swirling the flow could: (1) reduce the burning length by one half, (2) result in more uniform burning across the annulus width, and (3) open the possibility of optimization of the combustion characteristics by locating the fuel jet between the inner wall and center of the annulus width.

  6. Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments

    NASA Astrophysics Data System (ADS)

    Makri, Nancy

    2017-04-01

    The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.

  7. Quantification of complex modular architecture in plants.

    PubMed

    Reeb, Catherine; Kaandorp, Jaap; Jansson, Fredrik; Puillandre, Nicolas; Dubuisson, Jean-Yves; Cornette, Raphaël; Jabbour, Florian; Coudert, Yoan; Patiño, Jairo; Flot, Jean-François; Vanderpoorten, Alain

    2018-04-01

    Morphometrics, the assignment of quantities to biological shapes, is a powerful tool to address taxonomic, evolutionary, functional and developmental questions. We propose a novel method for shape quantification of complex modular architecture in thalloid plants, whose extremely reduced morphologies, combined with the lack of a formal framework for thallus description, have long rendered taxonomic and evolutionary studies extremely challenging. Using graph theory, thalli are described as hierarchical series of nodes and edges, allowing for accurate, homologous and repeatable measurements of widths, lengths and angles. The computer program MorphoSnake was developed to extract the skeleton and contours of a thallus and automatically acquire, at each level of organization, width, length, angle and sinuosity measurements. Through the quantification of leaf architecture in Hymenophyllum ferns (Polypodiopsida) and a fully worked example of integrative taxonomy in the taxonomically challenging thalloid liverwort genus Riccardia, we show that MorphoSnake is applicable to all ramified plants. This new possibility of acquiring large numbers of quantitative traits in plants with complex modular architectures opens new perspectives of applications, from the development of rapid species identification tools to evolutionary analyses of adaptive plasticity. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
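The node-and-edge description lends itself to elementary vector geometry for the width, length and angle measurements mentioned above. A minimal sketch with a hypothetical coordinate-based representation (MorphoSnake's actual data structures are not reproduced):

```python
import math

# Hypothetical representation: nodes are 2-D points, edges join a parent
# node to a child node.

def edge_length(p, q):
    """Euclidean length of the edge from node p to node q."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def branch_angle(apex, a, b):
    """Angle (radians) at `apex` between the edges towards nodes a and b."""
    v1 = (a[0] - apex[0], a[1] - apex[1])
    v2 = (b[0] - apex[0], b[1] - apex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

# A symmetric bifurcation: two branches at right angles.
print(edge_length((0, 0), (3, 4)))                          # -> 5.0
print(math.degrees(branch_angle((0, 0), (1, 1), (1, -1))))  # -> 90.0
```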

  8. A study of isotropic-nematic transition of quadrupolar Gay-Berne fluid using density-functional theory approach

    NASA Astrophysics Data System (ADS)

    Singh, Ram Chandra; Ram, Jokhan

    2011-11-01

    The effects of quadrupole moments on the isotropic-nematic (IN) phase transition are studied using density-functional theory (DFT) for a Gay-Berne (GB) fluid over a range of length-to-breadth parameters and reduced temperatures. The pair-correlation functions of the isotropic phase, which enter the DFT as input, are found by solving the Percus-Yevick integral equation theory. The method involves an expansion of the angle-dependent functions appearing in the integral equations in terms of spherical harmonics, and the harmonic coefficients are obtained by an iterative algorithm. All harmonic coefficients with l indices up to 6 are retained. The numerical accuracy of the results depends on the number of spherical harmonic coefficients considered for each orientation-dependent function. As the length-to-breadth ratio of the quadrupolar GB molecules is increased, the IN transition moves to lower density (and pressure) at a given temperature. The DFT is found to describe the IN transition in such fluids well. The theoretical results are compared with computer simulation results wherever available.

  9. Self-avoiding walks on scale-free networks

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.

    2005-01-01

    Several kinds of walks on complex networks are currently used to analyze search and navigation in different systems. Many analytical and computational results are known for random walks on such networks. Self-avoiding walks (SAWs) are expected to be more suitable than unrestricted random walks to explore various kinds of real-life networks. Here we study long-range properties of random SAWs on scale-free networks, characterized by a degree distribution P(k) ~ k^(-γ). In the limit of large networks (system size N → ∞), the average number s_n of SAWs starting from a generic site increases as μ^n, with μ = <k^2>/<k> - 1. For finite N, s_n is reduced due to the presence of loops in the network, which causes attrition of the paths. For kinetic growth walks, the average maximum length <L> increases as a power of the system size, <L> ~ N^α, with an exponent α that increases as the parameter γ is raised. We discuss the dependence of α on the minimum allowed degree in the network. A similar power-law dependence is found for the mean self-intersection length of nonreversal random walks. Simulation results support our approximate analytical calculations.
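The SAW growth rate μ = <k²>/<k> − 1 can be estimated directly from a degree sequence. An illustrative sketch:

```python
def saw_growth_rate(degrees):
    """mu = <k^2>/<k> - 1 for a degree sequence (illustrative only)."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(k * k for k in degrees) / n
    return k2 / k1 - 1.0

# On a 3-regular network, every step leaves 2 fresh neighbours to continue into:
print(saw_growth_rate([3, 3, 3, 3]))  # -> 2.0
```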

  10. Performance of Fourth-Grade Students in the 2012 NAEP Computer-Based Writing Pilot Assessment: Scores, Text Length, and Use of Editing Tools. Working Paper Series. NCES 2015-119

    ERIC Educational Resources Information Center

    White, Sheida; Kim, Young Yee; Chen, Jing; Liu, Fei

    2015-01-01

    This study examined whether or not fourth-graders could fully demonstrate their writing skills on the computer and factors associated with their performance on the National Assessment of Educational Progress (NAEP) computer-based writing assessment. The results suggest that high-performing fourth-graders (those who scored in the upper 20 percent…

  11. Mapping Computation with No Memory

    NASA Astrophysics Data System (ADS)

    Burckel, Serge; Gioan, Emeric; Thomé, Emmanuel

    We investigate the computation of mappings from a set S^n to itself with in situ programs, that is, using no variables other than the input and performing modifications of one component at a time. We consider several types of mappings and obtain effective computation and decomposition methods, together with upper bounds on the program length (number of assignments). Our technique is combinatorial and algebraic (graph coloration, partition ordering, modular arithmetic).
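The flavor of an in situ program, assigning one component at a time with no extra variables, is captured by the classic three-assignment swap (an illustration only, not one of the paper's constructions):

```python
def swap_in_situ(x, y):
    """Compute the mapping (x, y) -> (y, x) one component at a time,
    with no extra variables (an illustration, not the paper's method)."""
    x = x + y   # first component now holds x + y
    y = x - y   # second component becomes the original x
    x = x - y   # first component becomes the original y
    return x, y

print(swap_in_situ(3, 5))  # -> (5, 3)
```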

  12. Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org; University of Texas Houston Graduate School of Biomedical Science, Houston, Texas; Yang, Jinzhong

    Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 Grade 0, 45 Grade 2, and 16 Grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. Metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%), with area under the curve values of 0.93 and 0.91 for grade 2 and 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with 2.1 Jacobian value and 98.6 mm as the metric values for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced with anatomic correction methods.
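The voxel-wise expansion measure above rests on the Jacobian determinant of the deformation field: J > 1 marks local expansion, J < 1 contraction. A minimal sketch (the registration step and the anatomic correction are not modelled, and the example matrix is hypothetical):

```python
def jacobian_det(F):
    """Determinant of a 3x3 deformation-gradient matrix F (list of rows);
    the relative local volume change of the mapped tissue."""
    (a, b, c), (d, e, f), (g, h, i) = F
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Hypothetical voxel: 40% expansion in each axial in-plane direction and no
# change along the esophageal axis -> volume grows by a factor of 1.96.
F = [[1.4, 0.0, 0.0],
     [0.0, 1.4, 0.0],
     [0.0, 0.0, 1.0]]
print(jacobian_det(F))
```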

  13. Measurement of the Correlation and Coherence Lengths in Boundary Layer Flight Data

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    2011-01-01

    Wall pressure data acquired during flight tests at several flight conditions are analyzed and the correlation and coherence lengths of the data reported. It is shown how the frequency bandwidth of the analysis biases the correlation length and how the convection of the flow acts to reduce the coherence length. Coherence lengths measured in the streamwise direction appear much longer than would be expected based on classical results for flow over a flat plate.

  14. Length and area equivalents for interpreting wildland resource maps

    Treesearch

    Elliot L. Amidon; Marilyn S. Whitfield

    1969-01-01

    Map users must refer to an appropriate scale in interpreting wildland resource maps. Length and area equivalents for nine map scales commonly used have been computed. For each scale a 1-page table consists of map-to-ground equivalents, buffer strip or road widths, and cell dimensions required for a specified acreage. The conversion factors are stored in a Fortran...

  15. Simulating cut-to-length harvesting operations in Appalachian hardwoods

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Yaoxiang Li

    2005-01-01

    Cut-to-length (CTL) harvesting systems involving small and large harvesters and a forwarder were simulated using a modular computer simulation model. The two harvesters simulated were a modified John Deere 988 tracked excavator with a single grip sawhead and a Timbco T425 based excavator with a single grip sawhead. The forwarder used in the simulations was a Valmet 524...

  16. Design Rules for Tailoring Antireflection Properties of Hierarchical Optical Structures

    DOE PAGES

    Leon, Juan J. Diaz; Hiszpanski, Anna M.; Bond, Tiziana C.; ...

    2017-05-18

    Hierarchical structures consisting of small sub-wavelength features stacked atop larger structures have been demonstrated as an effective means of reducing the reflectance of surfaces. However, optical devices require different antireflective properties depending on the application, and general unifying guidelines on hierarchical structures' design to attain a desired antireflection spectral response are still lacking. The type of reflectivity (diffuse, specular, or total/hemispherical) and its angular- and spectral-dependence are all dictated by the structural parameters. Through computational and experimental studies, guidelines have been devised to modify these various aspects of reflectivity across the solar spectrum by proper selection of the features of hierarchical structures. In this wavelength regime, micrometer-scale substructures dictate the long-wavelength spectral response and effectively reduce specular reflectance, whereas nanometer-scale substructures dictate primarily the visible wavelength spectral response and reduce diffuse reflectance. Coupling structures having these two length scales into hierarchical arrays impressively reduces surfaces' hemispherical reflectance across a broad spectrum of wavelengths and angles. Furthermore, such hierarchical structures in silicon are demonstrated having an average total reflectance across the solar spectrum of 1.1% (average weighted reflectance of 1% in the 280–2500 nm range of the AM 1.5 G spectrum) and specular reflectance <1% even at angles of incidence as high as 67°.

  17. Can aircraft noise less than or equal to 115 dBA adversely affect reproductive outcome in USAF women?

    NASA Astrophysics Data System (ADS)

    Brubaker, P. A.

    1985-06-01

    It has been suggested, mainly through animal studies, that exposure to high noise levels may be associated with lower birth weight, reduced gestational length, and other adverse reproductive outcomes. Few studies have been done on humans to show this association. The Air Force employs pregnant women in areas where there is a high potential for exposure to high noise levels. This study proposes a method to determine whether there is an association between high-frequency noise levels less than or equal to 115 dBA and adverse reproductive outcomes through a review of records and self-administered questionnaires in a case-comparison design. Prevalence rates will be calculated and a multiple logistic regression analysis computed for the independent variables that can affect reproduction.

  18. Scaled Runge-Kutta algorithms for handling dense output

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1981-01-01

    Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
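Horn's scaled coefficients are not reproduced here, but the idea of dense output, evaluating the solution inside a step without shrinking the step length, can be illustrated with a cubic Hermite interpolant built from one step's endpoint values and derivatives (a generic stand-in for the scaled formulas):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def dense_output(theta, y0, f0, y1, f1, h):
    """Cubic Hermite interpolant on [t, t + h]; theta in [0, 1]."""
    h00 = (1 + 2 * theta) * (1 - theta) ** 2
    h10 = theta * (1 - theta) ** 2
    h01 = theta ** 2 * (3 - 2 * theta)
    h11 = theta ** 2 * (theta - 1)
    return h00 * y0 + h10 * h * f0 + h01 * y1 + h11 * h * f1

# y' = y, y(0) = 1: evaluate mid-step without shortening the step length.
f = lambda t, y: y
h = 0.1
y1 = rk4_step(f, 0.0, 1.0, h)
y_mid = dense_output(0.5, 1.0, f(0.0, 1.0), y1, f(h, y1), h)
err = abs(y_mid - math.exp(0.05))   # interpolation error is tiny
```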

  19. Particle-in-a-box model of one-dimensional excitons in conjugated polymers

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas G.; Johansen, Per M.; Pedersen, Henrik C.

    2000-04-01

    A simple two-particle model of excitons in conjugated polymers is proposed as an alternative to usual highly computationally demanding quantum chemical methods. In the two-particle model, the exciton is described as an electron-hole pair interacting via Coulomb forces and confined to the polymer backbone by rigid walls. Furthermore, by integrating out the transverse part, the two-particle equation is reduced to one-dimensional form. It is demonstrated how essentially exact solutions are obtained in the cases of short and long conjugation length, respectively. From a linear combination of these cases an approximate solution for the general case is obtained. As an application of the model the influence of a static electric field on the electron-hole overlap integral and exciton energy is considered.

  20. Long frame sync words for binary PSK telemetry

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1975-01-01

    Correlation criteria have previously been established for identifying whether a given binary sequence would be a good frame sync word for phase-shift keyed telemetry. In the past, the search for a good K-bit sync word has involved the application of these criteria to the entire set of 2^K binary K-tuples. It is shown that restricting this search to a much smaller subset consisting of K-bit prefixes of pseudonoise sequences results in sync words of comparable quality, with greatly reduced computer search times for larger values of K. As an example, this procedure is used to find good sync words of length 16-63; from a storage viewpoint, each of these sequences can be generated by a 5- or 6-bit linear feedback shift register.
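The storage argument can be made concrete: a 16-bit candidate sync word is simply the prefix of a maximal-length (PN) sequence from a 5-stage linear feedback shift register. A sketch assuming the primitive polynomial x^5 + x^2 + 1 (the paper's actual tap choices and seeds are not stated here):

```python
def pn_sequence(length):
    """Bits of the m-sequence a_k = a_{k-3} XOR a_{k-5}
    (primitive polynomial x^5 + x^2 + 1, period 31)."""
    a = [0, 0, 0, 0, 1]          # any nonzero initial fill of the register
    while len(a) < length:
        a.append(a[-3] ^ a[-5])
    return a[:length]

# 16-bit candidate sync word taken as a PN-sequence prefix:
sync_word = pn_sequence(16)
print(''.join(map(str, sync_word)))
```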

  1. Parameter Optimization of Pseudo-Rigid-Body Models of MRI-Actuated Catheters

    PubMed Central

    Greigarn, Tipakorn; Liu, Taoming; Çavuşoğlu, M. Cenk

    2016-01-01

    Simulation and control of a system containing compliant mechanisms, such as cardiac catheters, often incur high computational costs. One way to reduce the costs is to approximate the mechanisms with Pseudo-Rigid-Body Models (PRBMs). A PRBM generally consists of rigid links connected by spring-loaded revolute joints. The lengths of the rigid links and the stiffnesses of the springs are usually chosen to minimize the tip deflection differences between the PRBM and the compliant mechanism. In most applications, only the relationship between end load and tip deflection is considered. This is not applicable to MRI-actuated catheters, which are actuated by coils attached along the body. This paper generalizes PRBM parameter optimization to include loading and reference points along the body. PMID:28261009

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Ringler, Todd D.; Maltrud, Mathew E.

    Isopycnal diffusivity due to stirring by mesoscale eddies in an idealized, wind-forced, eddying, midlatitude ocean basin is computed using Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT). Simulation is performed via LIGHT within the Model for Prediction across Scales Ocean (MPAS-O). Simulations are performed at 4-, 8-, 16-, and 32-km resolution, where the first Rossby radius of deformation (RRD) is approximately 30 km. Scalar and tensor diffusivities are estimated at each resolution based on 30 ensemble members using particle cluster statistics. Each ensemble member is composed of 303 665 particles distributed across five potential density surfaces. Diffusivity dependence upon model resolution, velocity spatial scale, and buoyancy surface is quantified and compared with mixing length theory. The spatial structure of diffusivity ranges over approximately two orders of magnitude with values of O(10 5) m 2 s –1 in the region of western boundary current separation to O(10 3) m 2 s –1 in the eastern region of the basin. Dominant mixing occurs at scales twice the size of the first RRD. Model resolution at scales finer than the RRD is necessary to obtain sufficient model fidelity at scales between one and four RRD to accurately represent mixing. Mixing length scaling with eddy kinetic energy and the Lagrangian time scale yield mixing efficiencies that typically range between 0.4 and 0.8. In conclusion, a reduced mixing length in the eastern region of the domain relative to the west suggests there are different mixing regimes outside the baroclinic jet region.

  3. Detection of atmospheric pressure loading using very long baseline interferometry measurements

    NASA Technical Reports Server (NTRS)

    Vandam, T. M.; Herring, T. A.

    1994-01-01

    Loading of the Earth by the temporal redistribution of global atmospheric mass is likely to displace the positions of geodetic monuments by tens of millimeters both vertically and horizontally. Estimates of these displacements are determined by convolving National Meteorological Center (NMC) global values of atmospheric surface pressure with Farrell's elastic Green's functions. An analysis of the distances between radio telescopes determined by very long baseline interferometry (VLBI) between 1984 and 1992 reveals that in many of the cases studied there is a significant contribution to baseline length change due to atmospheric pressure loading. Our analysis covers intersite distances of between 1000 and 10,000 km and is restricted to those baselines measured more than 100 times. Accounting for the load effects (after first removing a best fit slope) reduces the weighted root-mean-square (WRMS) scatter of the baseline length residuals on 11 of the 22 baselines investigated. The slight degradation observed in the WRMS scatter on the remaining baselines is largely consistent with the expected statistical fluctuations when a small correction is applied to a data set having a much larger random noise. The results from all baselines are consistent with approximately 60% of the computed pressure contribution being present in the VLBI length determinations. Site dependent coefficients determined by fitting local pressure to the theoretical radial displacement are found to reproduce the deformation caused by the regional pressure to within 25% for most inland sites. The coefficients are less reliable at near coastal and island stations.
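The weighted root-mean-square (WRMS) scatter used above to assess the loading correction is a standard statistic; a generic sketch, assuming weights of the usual 1/σ² form:

```python
import math

def wrms(residuals, weights):
    """Weighted root-mean-square scatter of baseline-length residuals
    (weights are typically 1/sigma_i**2)."""
    num = sum(w * r * r for r, w in zip(residuals, weights))
    return math.sqrt(num / sum(weights))

# Equal weights reduce to the ordinary RMS:
print(wrms([3.0, -4.0], [1.0, 1.0]))
```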

  4. Colorectal anatomy in adults at computed tomography colonography: normal distribution and the effect of age, sex, and body mass index.

    PubMed

    Khashab, M A; Pickhardt, P J; Kim, D H; Rex, D K

    2009-08-01

    Computed tomography colonography (CTC) is an accurate tool for assessing the large intestinal anatomy. Our aims were to determine the normal distribution of in vivo colorectal anatomy and to investigate the effect of age, sex, and body mass index (BMI) on colorectal length. Asymptomatic adults who underwent primary CTC examination at a single institution over an 8-month period were evaluated. The interactive three-dimensional map was used to determine total and segmental lengths and number of acute-angle flexures. The two-dimensional multiplanar display was used to measure luminal diameters. The effects of age, sex, and BMI on colorectal lengths were examined. The study cohort consisted of 505 consecutive adults (266 women, mean age 56.6 years). Mean total colorectal length was 189.5 +/- 26.3 cm and mean number of acute-angle flexures was 10.9 +/- 2.4. Total length for older adults (> 60 years) did not significantly differ from those who were younger than 60 years (P = 0.22), although the transverse colon was significantly longer in older adults (P = 0.04). Women had significantly longer colons than men (193.3 cm vs. 185.4 cm, P = 0.002), whereas overweight adults (BMI > 25) had significantly shorter colons compared with those with BMI

  5. Holographic screening length in a hot plasma of two sphere

    NASA Astrophysics Data System (ADS)

    Atmaja, A. Nata; Kassim, H. Abu; Yusof, N.

    2015-11-01

    We study the screening length L_{max} of a moving quark-antiquark pair in a hot plasma, which lives in a two sphere, S^2, using the AdS/CFT correspondence in which the corresponding background metric is the four-dimensional Schwarzschild-AdS black hole. The geodesic of both ends of the string at the boundary, interpreted as the quark-antiquark pair, is given by a stationary motion in the equatorial plane by which the separation length L of both ends of the string is parallel to the angular velocity ω . The screening length and total energy H of the quark-antiquark pair are computed numerically and show that the plots are bounded from below by some functions related to the momentum transfer P_c of the drag force configuration. We compare the result by computing the screening length in the reference frame of the moving quark-antiquark pair, in which the background metrics are "Boost-AdS" and Kerr-AdS black holes. Comparing both black holes, we argue that the mass parameters M_{Sch} of the Schwarzschild-AdS black hole and M_{Kerr} of the Kerr-AdS black hole are related at high temperature by M_{Kerr}=M_{Sch}(1-a^2l^2)^{3/2}, where a is the angular momentum parameter and l is the AdS curvature.

  6. Central tendency effects in time interval reproduction in autism

    PubMed Central

    Karaminis, Themelis; Cicchini, Guido Marco; Neil, Louise; Cappagli, Giulia; Aagten-Murphy, David; Burr, David; Pellicano, Elizabeth

    2016-01-01

    Central tendency, the tendency of judgements of quantities (lengths, durations, etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge representations of a mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise and more integration is required to reduce noise. In this study we measured central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge representations than typical perception. If true, autistic children should show less central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children on two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency decreased with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much weaker in autistic children than predicted by theoretical modelling, given their poor temporal resolution. PMID:27349722
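
    The Bayesian integration described above reduces to a reliability-weighted average of the sensory estimate and the prior mean. A minimal sketch (function and parameter names are illustrative, not from the study):

```python
def reproduce(stimulus, sensory_sigma, prior_mean, prior_sigma):
    # Posterior mean of a Gaussian prior combined with a Gaussian likelihood:
    # the weight on the prior mean grows as the sensory estimate gets noisier,
    # which is exactly the central-tendency (regression-to-the-mean) effect.
    w_prior = sensory_sigma ** 2 / (sensory_sigma ** 2 + prior_sigma ** 2)
    return w_prior * prior_mean + (1.0 - w_prior) * stimulus

# A precise observer reproduces the stimulus almost veridically; a noisy one
# regresses toward the prior mean.
precise = reproduce(1.0, sensory_sigma=0.1, prior_mean=0.6, prior_sigma=0.2)
noisy = reproduce(1.0, sensory_sigma=0.4, prior_mean=0.6, prior_sigma=0.2)
```

    The study's prediction can be read off this sketch: given poor temporal resolution (large sensory noise), the model predicts strong regression toward the mean; the autistic children showed less regression than this predicts.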

  7. Computer-aided marginal artery detection on computed tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wei, Zhuoshi; Yao, Jianhua; Wang, Shijun; Liu, Jiamin; Summers, Ronald M.

    2012-03-01

    Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyp and cancer screening. The marginal artery of the colon, also known as the marginal artery of Drummond, is the blood vessel that connects the inferior mesenteric artery with the superior mesenteric artery. The marginal artery runs parallel to the colon for its entire length, providing the blood supply to the colon. Detecting the marginal artery may benefit computer-aided detection (CAD) of colonic polyps. It can be used to identify the teniae coli based on their anatomic spatial relationship. It can also serve as an alternative marker for colon localization in case of colon collapse and inability to directly compute the endoluminal centerline. This paper proposes an automatic method for marginal artery detection on CTC. To the best of our knowledge, this is the first work presented for this purpose. Our method includes two stages. The first stage extracts the blood vessels in the abdominal region; the eigenvalues of the Hessian matrix are used to detect line-like structures in the images. The second stage reduces the false positives from the first stage. We used two different masks to exclude false-positive vessel regions: a dilated colon mask obtained by colon segmentation, and an eroded visceral fat mask obtained by fat segmentation in the abdominal region. We tested our method on a CTC dataset with 6 cases. Using ratio-of-overlap with manual labeling of the marginal artery as the standard of reference, our method yielded true positive, false positive and false negative fractions of 89%, 33%, and 11%, respectively.
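
    Hessian-eigenvalue line detection of the kind used in stage one can be sketched in 2-D (the paper works in 3-D and its exact filter is not specified; this toy version only illustrates the principle that a bright line has one strongly negative Hessian eigenvalue and one near zero):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubular_response(image, sigma=2.0):
    """Crude 2-D line-likeness from Hessian eigenvalues (illustrative sketch,
    not the authors' implementation)."""
    # Second derivatives at scale sigma via Gaussian derivative filters.
    h00 = gaussian_filter(image, sigma, order=(2, 0))  # d^2/d(axis0)^2
    h11 = gaussian_filter(image, sigma, order=(0, 2))  # d^2/d(axis1)^2
    h01 = gaussian_filter(image, sigma, order=(1, 1))  # mixed derivative
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
    tmp = np.sqrt(((h00 - h11) / 2) ** 2 + h01 ** 2)
    l1 = (h00 + h11) / 2 + tmp
    l2 = (h00 + h11) / 2 - tmp
    # Bright line: the dominant eigenvalue (largest magnitude) is negative.
    lead = np.where(np.abs(l1) > np.abs(l2), l1, l2)
    return np.clip(-lead, 0.0, None)

# A synthetic bright ridge responds strongly at the ridge, weakly elsewhere.
img = np.zeros((32, 32))
img[16, :] = 1.0
resp = tubular_response(img, sigma=2.0)
```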

  8. Computed tomographic pelvimetry in English bulldogs.

    PubMed

    Dobak, Tetyda P; Voorhout, George; Vernooij, Johannes C M; Boroffka, Susanne A E B

    2018-05-31

    English bulldogs have been reported to have a high incidence of dystocia, and caesarean section is often performed electively in this breed. A narrow pelvic canal is the major maternal factor contributing to obstructive dystocia. The objective of this cross-sectional study was to assess the pelvic dimensions of 40 clinically healthy English bulldogs using computed tomography pelvimetry. A control group consisting of 30 non-brachycephalic dogs that underwent pelvic computed tomography was retrospectively collected from the patient archive system. Univariate analysis of variance was used to compare computed tomography pelvimetry of both groups and the effects of weight and gender on the measurements. In addition, ratios were obtained to address pelvic shape differences. A significantly (P = 0.00) smaller pelvic size was found in English bulldogs compared to the control group for all computed tomography measurements: width and length of the pelvis, pelvic inlet and caudal pelvic aperture. The pelvic conformation was significantly different between the groups: English bulldogs had an overall shorter pelvis and pelvic canal and a narrower pelvic outlet. Weight had a significant effect on all measurements, whereas gender had a significant effect on only some (4/11) pelvic dimensions. Our findings show that English bulldogs have a generally reduced pelvic size as well as a shorter pelvis and narrower pelvic outlet when compared to non-brachycephalic breeds. We suggest that some of our measurements may serve as a baseline for pelvic dimensions in English bulldogs and may be useful for future studies on dystocia in this breed. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. End-to-end distance and contour length distribution functions of DNA helices

    NASA Astrophysics Data System (ADS)

    Zoli, Marco

    2018-06-01

    I present a computational method to evaluate the end-to-end and contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length, which provide a measure of the global molecule size, are used to examine DNA flexibility at short length scales. It is found that, even in molecules with fewer than ~60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
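
    For context, the persistence-length behavior discussed above is usually benchmarked against the textbook worm-like-chain relation for the mean-square end-to-end distance, ⟨R²⟩ = 2·l_p²·(t − 1 + e^(−t)) with t = L/l_p. This is the standard reference model, not the paper's mesoscopic Hamiltonian:

```python
import math

def wlc_mean_square_r(contour_length, persistence_length):
    """Mean-square end-to-end distance <R^2> of a worm-like chain.

    Textbook relation used only as a reference point; the abstract above
    describes a different (mesoscopic) model.
    """
    t = contour_length / persistence_length
    return 2.0 * persistence_length ** 2 * (t - 1.0 + math.exp(-t))

# Stiff limit (L << l_p): R approaches the contour length L.
# Flexible limit (L >> l_p): <R^2> approaches 2 * l_p * L (random coil).
```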

  10. Turbulent Boundary Layers in Oscillating Flows. Part 1: an Experimental and Computational Study

    NASA Technical Reports Server (NTRS)

    Cook, W. J.

    1986-01-01

    An experimental-computational study of the behavior of turbulent boundary layers in oscillating air flows over a plane surface with a small favorable mean pressure gradient is described. Experimental studies were conducted for boundary layers generated on the test section wall of a facility that produces a flow with a mean free stream velocity and a superposed, nearly pure sinusoidal component over a wide range of frequencies. Flows at a nominal mean free stream velocity of 50 m/s were studied at atmospheric pressure and temperature at selected axial positions over a 2 m test length for frequencies ranging from 4 to 29 Hz. Quantitative experimental results are presented for unsteady velocity profiles and longitudinal turbulence levels obtained from hot wire anemometer measurements at three axial positions. Mean velocity profiles for oscillating flows were found to exhibit only small deviations from corresponding steady flow profiles, while amplitudes and phase relationships exhibited a strong dependence on axial position and frequency. Since sinusoidal flows could be generated over a wide range of frequencies, studies at fixed values of reduced frequency were carried out at different axial positions. Results show that there is some utility in using reduced frequency to correlate unsteady velocity results. The turbulence level u'_rms was observed to vary essentially sinusoidally around values close to those measured in steady flow. However, the amplitude of oscillation and phase relations for turbulence level were found to be strongly frequency dependent. Numerical predictions were obtained using an unsteady boundary layer computational code and the Cebeci-Smith and Glushko turbulence models. Predicted quantities related to unsteady velocity profiles exhibit fair agreement with experiment when the Cebeci-Smith turbulence model is used.

  11. Reducing pre-operative length of stay for enterocutaneous fistula repair with a multi-disciplinary approach.

    PubMed

    Chamberlain, Mark; Dwyer, Rebecca

    2015-01-01

    Pre-operative assessment of complex surgical patients can be a lengthy process, albeit one essential to minimising complication rates. In a tertiary referral unit specialising in the surgical repair of enterocutaneous fistulas, a baseline audit revealed an average in-patient length of stay of 30.1 days, mainly caused by poor co-ordination between specialities. After the introduction of a weekly multi-disciplinary team meeting and the formalisation of a patient pathway, this admission length was reduced to 5.7 days (p<0.01), resulting in significant savings to the department.

  12. Structure of an E. coli integral membrane sulfurtransferase and its structural transition upon SCN- binding defined by EPR-based hybrid method

    NASA Astrophysics Data System (ADS)

    Ling, Shenglong; Wang, Wei; Yu, Lu; Peng, Junhui; Cai, Xiaoying; Xiong, Ying; Hayati, Zahra; Zhang, Longhua; Zhang, Zhiyong; Song, Likai; Tian, Changlin

    2016-01-01

    Electron paramagnetic resonance (EPR)-based hybrid experimental and computational approaches were applied to determine the structure of a full-length E. coli integral membrane sulfurtransferase, dimeric YgaP, and its structural and dynamic changes upon ligand binding. The solution NMR structures of the YgaP transmembrane domain (TMD) and cytosolic catalytic rhodanese domain were reported recently, but the tertiary fold of full-length YgaP was not yet available. Here, systematic site-specific EPR analysis defined a helix-loop-helix secondary structure of the YgaP-TMD monomers using mobility, accessibility and membrane immersion measurements. The tertiary folds of dimeric YgaP-TMD and full-length YgaP in detergent micelles were determined through inter- and intra-monomer distance mapping and rigid-body computation. Further EPR analysis demonstrated the tight packing of the two YgaP second transmembrane helices upon binding of the catalytic product SCN-, which provides insight into the thiocyanate exportation mechanism of YgaP in the E. coli membrane.

  13. Computer analysis of the leaf movements of pinto beans.

    PubMed

    Hoshizaki, T; Hamner, K C

    1969-07-01

    Computer analysis was used for the detection of rhythmic components and the estimation of period length in leaf movement records. The results of this study indicated that spectral analysis can be profitably used to determine rhythmic components in leaf movements. In Pinto bean plants (Phaseolus vulgaris L.) grown for 28 days under continuous light of 750 ft-c and at a constant temperature of 28 degrees, there was only 1 highly significant rhythmic component in the leaf movements. The period of this rhythm was 27.3 hr. In plants grown at 20 degrees, there were 2 highly significant rhythmic components: 1 of 13.8 hr and a much stronger 1 of 27.3 hr. At 15 degrees, the highly significant rhythmic components were also 27.3 and 13.8 hr in length but were of equal intensity. Random movements less than 9 hr in length became very pronounced at this temperature. At 10 degrees, no significant rhythm was found in the leaf movements. At 5 degrees, the leaf movements ceased within 1 day.

  14. Investigation of Readout RF Pulse Impact on the Chemical Exchange Saturation Transfer Spectrum

    PubMed Central

    Huang, Sheng-Min; Jan, Meei-Ling; Liang, Hsin-Chin; Chang, Chia-Hao; Wu, Yi-Chun; Tsai, Shang-Yueh; Wang, Fu-Nien

    2015-01-01

    Chemical exchange saturation transfer magnetic resonance imaging (CEST-MRI) is capable of both microenvironment and molecular imaging. The optimization of scanning parameters is important since the CEST effect is sensitive to factors such as saturation power and field homogeneity. The aim of this study was to determine whether the CEST effect is altered by changing the length of the readout RF pulses. Both theoretical computer simulation and phantom experiments were performed to examine the influence of readout RF pulses. Our results showed that the length of the readout RF pulses has a negligible impact on the Z-spectrum and the CEST effect in both computer simulation and phantom experiments. Moreover, we demonstrated that the multiple refocusing RF pulses used in the rapid acquisition with relaxation enhancement (RARE) sequence induced no obvious saturation transfer contrast. Therefore, the readout RF pulse has a negligible effect on the CEST Z-spectrum, and optimization of the readout RF pulse length can be disregarded in CEST imaging protocols. PMID:26455576

  15. Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions

    NASA Astrophysics Data System (ADS)

    Leiderman, Karin; Olson, Sarah D.

    2016-02-01

    The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.
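
    The single-parameter, nondimensional Brinkman system referred to above can be written as follows (a standard form; the symbol α for the resistance parameter is this sketch's assumption, not necessarily the paper's notation):

```latex
\nabla p = \Delta \mathbf{u} - \alpha^{2}\mathbf{u} + \mathbf{f}(\mathbf{x}),
\qquad \nabla \cdot \mathbf{u} = 0,
```

    where u is the fluid velocity, p the pressure, α² the friction due to the obstacles, and f the (regularized) point force; setting α = 0 recovers the incompressible Stokes equations, while large α approaches Darcy-like screened flow, consistent with the screening length mentioned in the abstract.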

  16. Evaluation of Shape and Textural Features from CT as Prognostic Biomarkers in Non-small Cell Lung Cancer.

    PubMed

    Bianconi, Francesco; Fravolini, Mario Luca; Bello-Cerezo, Raquel; Minestrini, Matteo; Scialpi, Michele; Palumbo, Barbara

    2018-04-01

    We retrospectively investigated the prognostic potential (correlation with overall survival) of 9 shape and 21 textural features from non-contrast-enhanced computed tomography (CT) in patients with non-small-cell lung cancer (NSCLC). We considered a public dataset of 203 individuals with inoperable, histologically or cytologically confirmed NSCLC. Three-dimensional shape and textural features from CT were computed using proprietary code and their prognostic potential evaluated through four different statistical protocols. Volume and grey-level run length matrix (GLRLM) run-length non-uniformity were the only two features to pass all four protocols. Both features correlated negatively with overall survival. The results also showed a strong dependence on the evaluation protocol used. Tumour volume and GLRLM run-length non-uniformity from CT were thus the best predictors of survival in patients with non-small-cell lung cancer. We did not find enough evidence to claim a relationship with survival for the other features. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
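
    The run-length non-uniformity feature named above has a standard definition in the GLRLM texture literature: count runs of equal grey level along a direction, then sum the squared per-length totals, normalized by the total run count. A 2-D row-direction sketch (the study used its own proprietary 3-D code; this is only the common textbook definition):

```python
def glrlm_rln(image):
    """Run-length non-uniformity (RLN) with runs taken along rows (0 degrees).

    RLN is large when run lengths concentrate at a few values and small when
    they spread out; textbook GLRLM definition, illustrative only.
    """
    runs = {}  # (grey level, run length) -> count
    for row in image:
        start = 0
        for i in range(1, len(row) + 1):
            if i == len(row) or row[i] != row[start]:
                key = (row[start], i - start)
                runs[key] = runs.get(key, 0) + 1
                start = i
    total = sum(runs.values())
    # Collapse grey levels: RLN = sum_j (sum_i p(i, j))^2 / total.
    by_len = {}
    for (_, j), c in runs.items():
        by_len[j] = by_len.get(j, 0) + c
    return sum(c * c for c in by_len.values()) / total
```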

  17. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
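
    The two-dimensional cyclic convolution that the program computes can be cross-checked against a compact FFT-based reference implementation. This is only a verification route, not the polynomial-transform algorithm itself (whose point is to avoid complex arithmetic):

```python
import numpy as np

def cyclic_convolve_2d(a, b):
    """2-D cyclic (circular) convolution via the FFT convolution theorem.

    Reference implementation for checking a polynomial-transform result;
    both inputs must share the same shape.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Sanity check: cyclically convolving with a one-hot kernel shifts the input.
a = np.arange(9.0).reshape(3, 3)
k = np.zeros((3, 3))
k[1, 0] = 1.0  # delta at offset (1, 0)
shifted = cyclic_convolve_2d(a, k)
```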

  18. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs

    PubMed Central

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmaceutical experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of the compounds. In general, compound lengths are in the tens to hundreds, so the computation time is small. However, ever more compounds have been synthesized and extracted, now numbering more than tens of millions. Therefore, comparison against a large set of compounds (the multiple compound comparison problem, abbreviated MCC) is time-consuming. The intrinsic time complexity of the MCC problem is O(k²n²) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single and multiple GPUs. Four LINGO-based load-balancing strategies are used in CUDA-MCC to accelerate computation across thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In our experiments, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual NVIDIA Tesla K20m GPU card, respectively. PMID:26491652
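
    A single pairwise comparison in the LINGO framework is a Tanimoto coefficient over the multisets of overlapping q-character substrings ("LINGOs") of two SMILES strings. A minimal sketch of the textbook LINGO definition (CUDA-MCC's exact variant and preprocessing may differ):

```python
from collections import Counter

def lingo_tanimoto(smiles_a, smiles_b, q=4):
    """Tanimoto similarity over the multisets of overlapping q-grams of two
    SMILES strings (textbook LINGO; illustrative, not CUDA-MCC's kernel)."""
    la = Counter(smiles_a[i:i + q] for i in range(len(smiles_a) - q + 1))
    lb = Counter(smiles_b[i:i + q] for i in range(len(smiles_b) - q + 1))
    inter = sum((la & lb).values())  # multiset intersection size
    union = sum((la | lb).values())  # multiset union size
    return inter / union if union else 0.0
```

    The MCC problem is just this function evaluated over all k·(k-1)/2 pairs, which is what the LINGO-based load-balancing strategies distribute across GPU thread blocks.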

  19. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs.

    PubMed

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmaceutical experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of the compounds. In general, compound lengths are in the tens to hundreds, so the computation time is small. However, ever more compounds have been synthesized and extracted, now numbering more than tens of millions. Therefore, comparison against a large set of compounds (the multiple compound comparison problem, abbreviated MCC) is time-consuming. The intrinsic time complexity of the MCC problem is O(k²n²) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single and multiple GPUs. Four LINGO-based load-balancing strategies are used in CUDA-MCC to accelerate computation across thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In our experiments, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual NVIDIA Tesla K20m GPU card, respectively.

  20. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and the high accuracy in phylogenetic inference as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.
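
    The core of the NJ method evaluated above is the Q-matrix criterion: at each step the pair minimizing Q(i,j) = (n-2)·d(i,j) − r(i) − r(j), with r the row sums of the distance matrix, is joined. A sketch of that selection step (standard NJ, not the simulation code of the study):

```python
import numpy as np

def nj_q_matrix(d):
    """Q-matrix of the neighbor-joining criterion for distance matrix d."""
    n = d.shape[0]
    r = d.sum(axis=1)  # row sums
    q = (n - 2) * d - r[:, None] - r[None, :]
    np.fill_diagonal(q, np.inf)  # a taxon is never joined with itself
    return q

def first_join(d):
    """Indices of the first pair NJ would join (the minimal Q entry)."""
    q = nj_q_matrix(np.asarray(d, dtype=float))
    return np.unravel_index(np.argmin(q), q.shape)

# Classic 5-taxon example: taxa 0 and 1 are joined first.
d5 = [[0, 5, 9, 9, 8],
      [5, 0, 10, 10, 9],
      [9, 10, 0, 8, 7],
      [9, 10, 8, 0, 3],
      [8, 9, 7, 3, 0]]
```

    Repeating this step n-3 times, with distances updated to the new internal node, yields the full unrooted tree in O(n³) time overall, which is the computational-speed advantage the abstract refers to.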

  1. Lepidium meyenii (Maca) reduces spermatogenic damage induced by a single dose of malathion in mice.

    PubMed

    Bustos-Obregon, Eduardo; Yucra, Sandra; Gonzales, Gustavo F

    2005-03-01

    To observe the effect of the aqueous extract of hypocotyls of the plant Lepidium meyenii (Maca) on spermatogenic damage induced by the organophosphate insecticide malathion in mice. Mice were treated with 80 mg/kg of malathion in the presence or absence of an aqueous extract of Maca, which was orally administered 7, 14 or 21 days after injection of the malathion. Stages of the seminiferous epithelium were assessed by transillumination on days 0, 7, 14 and 21. The administration of Maca significantly increased the length of stage VIII on days 7, 14 and 21 of treatment compared with the controls. An increase in the length of stage IX occurred on day 14 of treatment. Malathion affected spermatogenesis by reducing the lengths of stage IX on day 7 and stages VII and IX-XI on day 14, with a recovery of stages IX-XII on day 21. The magnitude of the alteration in the length of stage IX produced by malathion was significantly reduced by Maca on days 7 and 14. The length of stage VIII was increased when Maca was administered to mice treated with malathion. Assessment of the relative lengths of the stages of the seminiferous epithelium showed that Maca treatment resulted in rapid recovery from the effect of malathion. Maca enhances spermatogenesis following spermatogenic damage caused by the organophosphorus pesticide.

  2. RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation

    Treesearch

    D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen

    1980-01-01

    Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.

  3. Dr. Sanger's Apprentice: A Computer-Aided Instruction to Protein Sequencing.

    ERIC Educational Resources Information Center

    Schmidt, Thomas G.; Place, Allen R.

    1985-01-01

    Modeled after the program "Mastermind," this program teaches students the art of protein sequencing. The program (written in Turbo Pascal for the IBM PC, requiring 128K, a graphics adapter, and an 8087 mathematics coprocessor) generates a polypeptide whose sequence and length can be user-defined (for practice) or computer-generated (for…

  4. Predicting Lexical Proficiency in Language Learner Texts Using Computational Indices

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Salsbury, Tom; McNamara, Danielle S.; Jarvis, Scott

    2011-01-01

    The authors present a model of lexical proficiency based on lexical indices related to vocabulary size, depth of lexical knowledge, and accessibility to core lexical items. The lexical indices used in this study come from the computational tool Coh-Metrix and include word length scores, lexical diversity values, word frequency counts, hypernymy…

  5. The Semantic Distance Task: Quantifying Semantic Distance with Semantic Network Path Length

    ERIC Educational Resources Information Center

    Kenett, Yoed N.; Levi, Effi; Anaki, David; Faust, Miriam

    2017-01-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to semantic distance is latent semantic analysis (LSA). However, objections have been raised against this approach, mainly concerning its failure at predicting semantic priming. We…

  6. Mesoscale Models of Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.

    During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.

  7. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count N). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
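
    The Gram–Schmidt/QR re-orthogonalization at the heart of these calculations can be illustrated on a toy 2-D map rather than a Lennard–Jones fluid: propagate an orthonormal frame with the Jacobian, re-orthogonalize by QR each step, and average the log of the diagonal of R. A sketch using the Hénon map (illustrative scale only; the paper's systems are vastly larger):

```python
import numpy as np

def henon_lyapunov(a=1.4, b=0.3, steps=20000):
    """Lyapunov spectrum of the Henon map x' = 1 - a*x^2 + y, y' = b*x,
    via repeated QR re-orthogonalization (Benettin-style Gram-Schmidt)."""
    x, y = 0.1, 0.1
    q = np.eye(2)                # orthonormal frame of tangent vectors
    sums = np.zeros(2)           # accumulated log stretch factors
    for _ in range(steps):
        jac = np.array([[-2.0 * a * x, 1.0],
                        [b, 0.0]])            # Jacobian at the current point
        x, y = 1.0 - a * x * x + y, b * x     # advance the orbit
        q, r = np.linalg.qr(jac @ q)          # re-orthogonalize the frame
        sums += np.log(np.abs(np.diag(r)))    # per-direction growth rates
    return sums / steps

lams = henon_lyapunov()
```

    The spectrum sums to log|det J| = log(b) per step, a useful built-in consistency check for any QR-based Lyapunov code.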

  8. Computational study of the effects of shroud geometric variation on turbine performance in a 1.5-stage high-loaded turbine

    NASA Astrophysics Data System (ADS)

    Jia, Wei; Liu, Huoxing

    2013-10-01

    Generally speaking, the main flow path of a gas turbine is assumed to be perfect for standard 3D computation. But in a real engine, the turbine annulus geometry is not completely smooth, owing to the presence of the shroud and associated cavity near the end wall. Moreover, shroud leakage flow is one of the dominant sources of secondary flow in turbomachinery, which not only wastes useful work but also imposes a penalty on turbine efficiency. It has been found that neglecting shroud leakage flow makes the computed velocity profiles and loss distribution significantly different from those measured. Even so, the influence of shroud leakage flow is seldom taken into consideration in routine turbine design due to insufficient understanding of its impact on end wall flows and turbine performance. In order to evaluate the impact of tip shroud geometry on turbine performance, a 3D computational investigation of a 1.5-stage turbine with shrouded blades was performed in this paper. The following geometry parameters were varied respectively: Inlet cavity length and exit cavity length

  9. The use of reinforced composite resin cement as compensation for reduced post length.

    PubMed

    Nissan, J; Dmitry, Y; Assif, D

    2001-09-01

    Cements that yield high retentive values are believed to allow the use of shorter posts. This study investigated the use of reinforced composite resin cement as compensation for reduced dowel length. The retention values of stainless steel posts (parallel-sided ParaPost and tapered Dentatus, in 5-, 8-, and 10-mm lengths) luted with Flexi-Flow titanium-reinforced composite resin and zinc phosphate cements were evaluated. Single-rooted extracted human teeth (n = 120), with crowns removed at the cementoenamel junction, were randomly divided into 4 groups of 30 samples each. Different post lengths were luted with either Flexi-Flow or zinc phosphate. Each sample was placed into a specialized jig on a tensile testing machine, and a crosshead speed of 2 mm/min was applied until failure. The effect of the different posts and cements on the force required to dislodge the dowels was evaluated with multiple analyses of variance (ANOVA). One-way ANOVA with Scheffé contrast was applied to determine the effect of the different post lengths on the retentive failure of posts luted with the 2 agents. Flexi-Flow reinforced composite resin cement significantly increased the retention of ParaPost and Dentatus dowels (P<.001) compared with zinc phosphate. One-way ANOVA revealed no statistically significant difference (P>.05) between the mean retention of dowels luted with Flexi-Flow for all post lengths used (5 mm = 8 mm = 10 mm). Mean retention values of the groups luted with zinc phosphate showed a statistically significant difference (P<.001) for the different post lengths (10 > 8 > 5 mm). Parallel-sided ParaPost dowels demonstrated higher mean retention than tapered Dentatus dowels (P<.001). In this study, Flexi-Flow reinforced composite resin cement compensated for the reduced length of shorter parallel-sided ParaPost and tapered Dentatus dowels.

  10. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  11. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  12. Time and length scales within a fire and implications for numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TIESZEN,SHELDON R.

    2000-02-02

    A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order of magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale; advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution for fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required as two to three decades of length scale are captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
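
The scaling argument in this abstract can be illustrated numerically. The sketch below uses assumed, illustrative values for gravitational acceleration, advection velocity, and diffusivity (none are taken from the paper) to show how the fastest transport process changes with length scale:

```python
import math

def time_scales(L, g=9.81, u=1.0, D=1e-5):
    """Order-of-magnitude transport time scales (s) at length scale L (m).

    g, u (advection velocity), and D (diffusivity) are assumed
    illustrative values, not quantities from the paper.
    """
    return {
        "buoyancy": math.sqrt(L / g),  # varies as sqrt(L)
        "advection": L / u,            # varies as L
        "diffusion": L ** 2 / D,       # varies as L^2
    }

# The fastest (smallest-time) process changes with length scale:
# diffusion at very small L, buoyancy at large L.
for L in (1e-6, 1e-2, 1.0):
    ts = time_scales(L)
    print(L, min(ts, key=ts.get))
```

Because the three exponents differ (1/2, 1, 2), each process dominates over its own range of scales, exactly as the abstract argues.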

  13. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
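
The trade-off the analysis describes, a high fixed initiation cost amortized by long messages, can be sketched with a linear communication-cost model; the startup and per-word costs below are assumed placeholders, not the study's implementation-derived machine parameters:

```python
def message_cost(words, startup=100.0, per_word=1.0):
    """Linear cost model for one interprocessor message (arbitrary time
    units).  startup and per_word are illustrative assumptions, not the
    study's calibrated parameters."""
    return startup + per_word * words

# With a high fixed initiation cost, one long message is far cheaper
# than the same payload split into single-word messages.
one_long = message_cost(1000)        # 1 message of 1000 words
many_short = 1000 * message_cost(1)  # 1000 messages of 1 word each
```

Under these assumed parameters the single long message is roughly two orders of magnitude cheaper, which is the mechanism behind the paper's conclusion that machines must support efficient long-message transmission.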

  14. Airbreathing Propulsion System Analysis Using Multithreaded Parallel Processing

    NASA Technical Reports Server (NTRS)

    Schunk, Richard Gregory; Chung, T. J.; Rodriguez, Pete (Technical Monitor)

    2000-01-01

    In this paper, parallel processing is used to analyze the mixing and combustion behavior of hypersonic flow. Preliminary work for a sonic transverse hydrogen jet injected from a slot into a Mach 4 airstream in a two-dimensional duct combustor has been completed [Moon and Chung, 1996]. Our aim is to extend this work to a three-dimensional domain using multithreaded domain decomposition parallel processing based on the flowfield-dependent variation theory. Numerical simulations of chemically reacting flows are difficult because of the strong interactions between the turbulent hydrodynamic and chemical processes. The algorithm must provide an accurate representation of the flowfield, since unphysical flowfield calculations will lead to the faulty loss or creation of species mass fraction, or even premature ignition, which in turn alters the flowfield information. Another difficulty arises from the disparity in time scales between the flowfield and chemical reactions, which may require the use of finite rate chemistry. The situations are more complex when there is a disparity in length scales involved in turbulence. In order to cope with these complicated physical phenomena, it is our plan to utilize the flowfield-dependent variation theory mentioned above, facilitated by large eddy simulation. Undoubtedly, the proposed computation requires the most sophisticated computational strategies. The multithreaded domain decomposition parallel processing will be necessary in order to reduce both computational time and storage. Without special treatments involved in computer engineering, our attempt to analyze the airbreathing combustion appears to be difficult, if not impossible.

  15. Characterization of Femoral Component Initial Stability and Cortical Strain in a Reduced Stem-Length Design.

    PubMed

    Small, Scott R; Hensley, Sarah E; Cook, Paige L; Stevens, Rebecca A; Rogge, Renee D; Meding, John B; Berend, Michael E

    2017-02-01

    Short-stemmed femoral components facilitate reduced exposure surgical techniques while preserving native bone. A clinically successful stem should ideally reduce risk for stress shielding while maintaining adequate primary stability for biological fixation. We asked (1) how stem-length changes cortical strain distribution in the proximal femur in a fit-and-fill geometry and (2) if short-stemmed components exhibit primary stability on par with clinically successful designs. Cortical strain was assessed via digital image correlation in composite femurs implanted with long, medium, and short metaphyseal fit-and-fill stem designs in a single-leg stance loading model. Strain was compared to a loaded, unimplanted femur. Bone-implant micromotion was then compared with reduced lateral shoulder short stem and short tapered-wedge designs in cyclic axial and torsional testing. Femurs implanted with short-stemmed components exhibited cortical strain response most closely matching that of the intact femur model, theoretically reducing the potential for proximal stress shielding. In micromotion testing, no difference in primary stability was observed as a function of reduced stem length within the same component design. Our findings demonstrate that within this fit-and-fill stem design, reduction in stem length improved proximal cortical strain distribution and maintained axial and torsional stability on par with other stem designs in a composite femur model. Short-stemmed implants may accommodate less invasive surgical techniques while facilitating more physiological femoral loading without sacrificing primary implant stability. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Long length cuttings from no. 2 common hardwood lumber

    Treesearch

    Edwin L. Lucas; Edwin L. Lucas

    1973-01-01

    Long length cuttings (up to 60 inches) are obtainable in abundance from No. 2 Common oak lumber. Cutting for the maximum area of clear one face (C1F) parts 18 to 60 inches in length, we found that 46 percent of all the cuttings were 36 inches long or longer. The recovery of the long length cuttings did not reduce the overall yield of parts produced from the lumber....

  17. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are where (1) the peak sidelobe in the autocorrelation function is small and (2) the sum of the squares of the sidelobes in the autocorrelation function is small.
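
The two code features of concern can be computed directly from the periodic autocorrelation function. A minimal sketch, using a length-7 m-sequence (not one of the paper's length-28-to-64 results) as the test input:

```python
def periodic_autocorrelation(code):
    """Periodic (cyclic) autocorrelation of a +/-1 sequence at all
    nonzero shifts."""
    n = len(code)
    return [sum(code[i] * code[(i + s) % n] for i in range(n))
            for s in range(1, n)]

def sidelobe_metrics(code):
    """Return (peak sidelobe magnitude, sum of squared sidelobes)."""
    side = periodic_autocorrelation(code)
    return max(abs(v) for v in side), sum(v * v for v in side)

# Length-7 m-sequence: every periodic sidelobe equals -1, so the
# peak sidelobe is 1 and the sidelobe energy is 6.
m7 = [1, 1, 1, -1, 1, -1, -1]
peak, energy = sidelobe_metrics(m7)
```

A search like the one described would evaluate these two metrics over candidate codes of each length and keep the waveforms minimizing them.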

  18. Data bank for short-length red oak lumber

    Treesearch

    Janice K. Wiedenbeck; Charles J. Gatchell; Elizabeth S. Walker

    1994-01-01

    This data bank for short-length lumber (less than 8 feet long) contains information on board outlines and defect size and quality for 426 4/4-inch-thick red oak boards. The Selects, 1 Common, 2A Common, and 3A Common grades are represented in the data bank. The data bank provides the kind of detailed lumber description that is required as input by computer programs...

  19. Critical transition in the constrained traveling salesman problem.

    PubMed

    Andrecut, M; Ali, M K

    2001-04-01

    We investigate the finite size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). Our computational experiments point to a critical transition (at rho(c) approximately 85%) in the dependence of the excess of the mean optimal tour length over the Held-Karp lower bound on the density of obstacles.

  20. Sensitivity of a computer adaptive assessment for measuring functional mobility changes in children enrolled in a community fitness programme.

    PubMed

    Haley, Stephen M; Fragala-Pinkham, Maria; Ni, Pengsheng

    2006-07-01

    To examine the relative sensitivity to detect functional mobility changes with a full-length parent questionnaire compared with a computerized adaptive testing version of the questionnaire after a 16-week group fitness programme. Prospective, pre- and posttest study with a 16-week group fitness intervention. Three community-based fitness centres. Convenience sample of children (n = 28) with physical or developmental disabilities. A 16-week group exercise programme held twice a week in a community setting. A full-length (161 items) paper version of a mobility parent questionnaire based on the Pediatric Evaluation of Disability Inventory, but expanded to include expected skills of children up to 15 years old was compared with a 15-item computer adaptive testing version. Both measures were administered at pre- and posttest intervals. Both the full-length Pediatric Evaluation of Disability Inventory and the 15-item computer adaptive testing version detected significant changes between pre- and posttest scores, had large effect sizes, and standardized response means, with a modest decrease in the computer adaptive test as compared with the 161-item paper version. Correlations between the computer adaptive and paper formats across pre- and posttest scores ranged from r = 0.76 to 0.86. Both functional mobility test versions were able to detect positive functional changes at the end of the intervention period. Greater variability in score estimates was generated by the computerized adaptive testing version, which led to a relative reduction in sensitivity as defined by the standardized response mean. Extreme scores were generally more difficult for the computer adaptive format to estimate with as much accuracy as scores in the mid-range of the scale. However, the reduction in accuracy and sensitivity, which did not influence the group effect results in this study, is counterbalanced by the large reduction in testing burden.
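
The standardized response mean used here as the sensitivity index is simply the mean change score divided by the standard deviation of the change scores. A small helper, with made-up scores rather than study data:

```python
import statistics

def standardized_response_mean(pre, post):
    """SRM = mean(change) / SD(change); a larger magnitude means the
    instrument is more sensitive to pre-post change."""
    change = [b - a for a, b in zip(pre, post)]
    return statistics.mean(change) / statistics.stdev(change)

# Hypothetical mobility scores for 5 children (illustrative, not the
# study's data).
pre = [40, 42, 45, 50, 55]
post = [44, 47, 49, 53, 60]
srm = standardized_response_mean(pre, post)
```

The study's observation that greater score variability in the adaptive test reduced sensitivity follows directly from the denominator: a noisier change score inflates SD(change) and shrinks the SRM even when the mean change is unchanged.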

  1. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results.
The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
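
The lower-bound idea, that failure probability is bounded by the chance that the largest initial crack exceeds a critical size, can be sketched with a toy Monte Carlo checked against the closed form 1 - F(a_c)^n; the lognormal crack-size distribution and its parameters are assumptions for illustration, not the paper's aluminum data:

```python
import math
import random

def p_largest_exceeds_mc(n_cracks, a_crit, mu=-3.0, sigma=0.5,
                         trials=20000, seed=1):
    """Monte Carlo estimate of P(largest initial crack > a_crit) for
    n_cracks i.i.d. lognormal crack sizes.  The lognormal model and all
    parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if max(rng.lognormvariate(mu, sigma) for _ in range(n_cracks)) > a_crit
    )
    return hits / trials

def p_largest_exceeds_exact(n_cracks, a_crit, mu=-3.0, sigma=0.5):
    """Closed form via extreme-value reasoning: 1 - F(a_crit)**n, with
    F the lognormal CDF (expressed through the error function)."""
    z = (math.log(a_crit) - mu) / sigma
    F = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - F ** n_cracks
```

The closed form is what makes the extreme-value bound cheap: it needs only the marginal crack-size distribution, not a crack-growth simulation per sample.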

  2. Estimating the hemodynamic influence of variable main body-to-iliac limb length ratios in aortic endografts.

    PubMed

    Georgakarakos, Efstratios; Xenakis, Antonios; Georgiadis, George S

    2018-02-01

    We conducted a computational study to assess the hemodynamic impact of variant main body-to-iliac limb length (L1/L2) ratios on certain hemodynamic parameters acting on the endograft (EG) in either the normal bifurcated (Bif) or the cross-limb (Cx) fashion. A customary bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5 in the Bif and Cx reconstructed EG models. The compliance of the graft was modeled using a Fluid Structure Interaction method. Important hemodynamic parameters such as pressure drop along the EG, wall shear stress (WSS) and helicity were calculated. The greatest pressure decrease across the EG was calculated in the peak systolic phase. With increasing L1/L2, the pressure drop increased for the Cx configuration while decreasing for the Bif. The greatest helicity (4.1 m/s2) was seen in peak systole of the Cx with a ratio of 1.5, whereas its greatest value (2 m/s2) in the Bif was met in peak systole with the shortest L1/L2 ratio (0.3). Similarly, the maximum WSS value was highest (2.74 Pa) in peak systole for the 1.5 L1/L2 of the Cx configuration, while the maximum WSS value equaled 2 Pa for all length ratios of the Bif modification (with the WSS found for L1/L2=0.3 being marginally higher). There was greater discrepancy in the WSS values for all L1/L2 ratios of the Cx bifurcation compared to Bif. Different L1/L2 ratios are shown to have an impact on the pressure distribution along the entire EG, while the length ratio predisposing to the highest helicity or WSS values is also determined by the iliac limb pattern of the EG.
Since current custom-made EG solutions can reproduce variability in main-body/iliac limbs length ratios, further computational as well as clinical research is warranted to delineate and predict the hemodynamic and clinical effect of variable length ratios.

  3. Performance of freestanding inpatient rehabilitation hospitals before and after the rehabilitation prospective payment system.

    PubMed

    Thompson, Jon M; McCue, Michael J

    2010-01-01

    Inpatient rehabilitation hospitals provide important services to patients to restore physical and cognitive functioning. Historically, these hospitals have been reimbursed by Medicare under a cost-based system; but in 2002, Medicare implemented a rehabilitation prospective payment system (PPS). Despite the implementation of a PPS for rehabilitation, there is limited published research that addresses the operating and financial performance of these hospitals. We examined operating and financial performance in the pre- and post-PPS periods for for-profit and nonprofit freestanding inpatient rehabilitation hospitals to test for pre- and post-PPS differences within the ownership groups. We identified freestanding inpatient rehabilitation hospitals from the Centers for Medicare and Medicaid Services Health Care Cost Report Information System database for the first two fiscal years under PPS. We excluded facilities that had fiscal years less than 270 days, facilities with missing data, and government facilities. We computed average values for performance variables for the facilities in the two consecutive fiscal years post-PPS. For the pre-PPS period, we collected data on these same facilities and, once facilities with missing data and fiscal years less than 270 days were excluded, computed average values for the two consecutive fiscal years pre-PPS. Our final sample of 140 inpatient rehabilitation facilities was composed of 44 nonprofit hospitals and 96 for-profit hospitals both pre- and post-PPS. We utilized a pairwise comparison test (t-test comparison) to measure the significance of differences on each performance variable between pre- and post-PPS periods within each ownership group. Findings show that both nonprofit and for-profit freestanding inpatient rehabilitation hospitals reduced length of stay, increased discharges, and increased profitability. 
Within the for-profit ownership group, the percentage of Medicare discharges increased and operating expense per adjusted discharge decreased. Findings suggest that managers of these hospitals have adapted their administrative practices to conform with the financial incentives of the rehabilitation PPS. Managers must continue to control costs, increase discharges, and reduce length of stay to remain financially viable under the rehabilitation PPS.

  4. Preoperative exercise halves the postoperative complication rate in patients with lung cancer: a systematic review of the effect of exercise on complications, length of stay and quality of life in patients with cancer.

    PubMed

    Steffens, Daniel; Beckenkamp, Paula R; Hancock, Mark; Solomon, Michael; Young, Jane

    2018-03-01

    To investigate the effectiveness of preoperative exercise interventions in patients undergoing oncological surgery, on postoperative complications, length of hospital stay and quality of life. Intervention systematic review with meta-analysis. MEDLINE, Embase and PEDro. Trials investigating the effectiveness of preoperative exercise for any oncological patient undergoing surgery were included. The outcomes of interest were postoperative complications, length of hospital stay and quality of life. Relative risks (RRs), mean differences (MDs) and 95% CI were calculated using random-effects models. Seventeen articles (reporting on 13 different trials) involving 806 individual participants and 6 tumour types were included. There was moderate-quality evidence that preoperative exercise significantly reduced postoperative complication rates (RR 0.52, 95% CI 0.36 to 0.74) and length of hospital stay (MD -2.86 days, 95% CI -5.40 to -0.33) in patients undergoing lung resection, compared with control. For patients with oesophageal cancer, preoperative exercise was not effective in reducing length of hospital stay (MD 2.00 days, 95% CI -2.35 to 6.35). Although only assessed in individual studies, preoperative exercise improved postoperative quality of life in patients with oral or prostate cancer. No effect was found in patients with colon and colorectal liver metastases. Preoperative exercise was effective in reducing postoperative complications and length of hospital stay in patients with lung cancer. Whether preoperative exercise reduces complications, length of hospital stay and improves quality of life in other groups of patients undergoing oncological surgery is uncertain as the quality of evidence is low. PROSPERO registration number. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
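
The relative risks pooled in this review follow the standard construction: a point estimate from the two event proportions and a 95% CI built on the log scale. A sketch with invented counts (not the review's trial data):

```python
import math

def relative_risk_ci(events_tx, n_tx, events_ctrl, n_ctrl):
    """Relative risk with a 95% CI computed on the log scale
    (the standard Katz log method)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented counts: 12/60 complications with exercise vs 24/60 without
# gives RR = 0.50, i.e. a halved complication rate.
rr, lo, hi = relative_risk_ci(12, 60, 24, 60)
```

An RR whose entire CI sits below 1 (as in the review's pooled lung-resection estimate of 0.52, CI 0.36 to 0.74) indicates a statistically significant reduction.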

  5. A Theoretical Study of Flow Structure and Radiation for Multiphase Turbulent Diffusion Flames

    DTIC Science & Technology

    1990-03-01

    density function. According to the axial void fraction profile in Fig. 24, the flame length (the total penetration length) extends to x/d=150. By referring...temperature because of subcooling effect. Decreasing liquid temperature will increase condensation which in turn reduces the flame length as defined by

  6. A structural equation model relating impaired sensorimotor function, fear of falling and gait patterns in older people.

    PubMed

    Menz, Hylton B; Lord, Stephen R; Fitzpatrick, Richard C

    2007-02-01

    Many falls in older people occur while walking, however the mechanisms responsible for gait instability are poorly understood. Therefore, the aim of this study was to develop a plausible model describing the relationships between impaired sensorimotor function, fear of falling and gait patterns in older people. Temporo-spatial gait parameters and acceleration patterns of the head and pelvis were obtained from 100 community-dwelling older people aged between 75 and 93 years while walking on an irregular walkway. A theoretical model was developed to explain the relationships between these variables, assuming that head stability is a primary output of the postural control system when walking. This model was then tested using structural equation modeling, a statistical technique which enables the testing of a set of regression equations simultaneously. The structural equation model indicated that: (i) reduced step length has a significant direct and indirect association with reduced head stability; (ii) impaired sensorimotor function is significantly associated with reduced head stability, but this effect is largely indirect, mediated by reduced step length, and; (iii) fear of falling is significantly associated with reduced step length, but has little direct influence on head stability. These findings provide useful insights into the possible mechanisms underlying gait characteristics and risk of falling in older people. Particularly important is the indication that fear-related step length shortening may be maladaptive.

  7. Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    PubMed Central

    Ito, Shinya; Hansen, Michael E.; Heiland, Randy; Lumsdaine, Andrew; Litke, Alan M.; Beggs, John M.

    2011-01-01

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons. PMID:22102894
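
Single-delay transfer entropy with one-bin histories, the D1TE baseline discussed above, reduces to a conditional mutual information I(y_t ; x_{t-d} | y_{t-1}). A minimal plug-in estimator for binary sequences (a sketch, not the authors' released toolbox) shows TE peaking at the true coupling delay:

```python
import math
from collections import Counter

def transfer_entropy(x, y, delay=1):
    """Plug-in estimate (bits) of TE from x to y with one-bin histories:
    I(y_t ; x_{t-delay} | y_{t-1}).  Minimal sketch for binary data."""
    triples = Counter()
    for t in range(max(delay, 1), len(y)):
        triples[(y[t], y[t - 1], x[t - delay])] += 1
    n = sum(triples.values())
    pair_hist_src = Counter()   # counts of (y_{t-1}, x_{t-delay})
    pair_next_hist = Counter()  # counts of (y_t, y_{t-1})
    hist = Counter()            # counts of y_{t-1}
    for (yt, yp, xp), c in triples.items():
        pair_hist_src[(yp, xp)] += c
        pair_next_hist[(yt, yp)] += c
        hist[yp] += c
    te = 0.0
    for (yt, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pair_hist_src[(yp, xp)]
        p_cond_hist = pair_next_hist[(yt, yp)] / hist[yp]
        te += p_joint * math.log2(p_cond_full / p_cond_hist)
    return te

# y copies x with a lag of 2 bins, so TE should peak at delay=2 --
# a single-delay probe at delay=1 underestimates the coupling.
x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1] * 20
y = [0, 0] + x[:-2]
te_by_delay = {d: transfer_entropy(x, y, delay=d) for d in (1, 2, 3)}
```

Scanning over delays (and, in the paper's extension, over message lengths) is exactly what recovers connections whose synaptic delay does not match the single probed bin.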

  8. Edge length dynamics on graphs with applications to p-adic AdS/CFT

    DOE PAGES

    Gubser, Steven S.; Heydeman, Matthew; Jepsen, Christian; ...

    2017-06-30

    We formulate a Euclidean theory of edge length dynamics based on a notion of Ricci curvature on graphs with variable edge lengths. In order to write an explicit form for the discrete analog of the Einstein-Hilbert action, we require that the graph should either be a tree or that all its cycles should be sufficiently long. The infinite regular tree with all edge lengths equal is an example of a graph with constant negative curvature, providing a connection with p-adic AdS/CFT, where such a tree takes the place of anti-de Sitter space. Here, we compute simple correlators of the operator holographically dual to edge length fluctuations. This operator has dimension equal to the dimension of the boundary, and it has some features in common with the stress tensor.

  9. Edge length dynamics on graphs with applications to p-adic AdS/CFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gubser, Steven S.; Heydeman, Matthew; Jepsen, Christian

    We formulate a Euclidean theory of edge length dynamics based on a notion of Ricci curvature on graphs with variable edge lengths. In order to write an explicit form for the discrete analog of the Einstein-Hilbert action, we require that the graph should either be a tree or that all its cycles should be sufficiently long. The infinite regular tree with all edge lengths equal is an example of a graph with constant negative curvature, providing a connection with p-adic AdS/CFT, where such a tree takes the place of anti-de Sitter space. Here, we compute simple correlators of the operator holographically dual to edge length fluctuations. This operator has dimension equal to the dimension of the boundary, and it has some features in common with the stress tensor.

  10. Optimization of NTP System Truss to Reduce Radiation Shield Mass

    NASA Technical Reports Server (NTRS)

    Scharber, Luke L.; Kharofa, Adam; Caffrey, Jarvis A.

    2016-01-01

    The benefits of nuclear thermal propulsion are numerous and relevant to current NASA mission goals, including but not limited to crewed missions to Mars and the Moon. They do, however, also present new and unique challenges to the design and logistics of launching and operating spacecraft. One of these challenges, relevant to this discussion, is the significant mass of the shielding required to ensure an acceptable radiation environment for the spacecraft and crew. Efforts to reduce shielding mass are difficult to accomplish through the material and geometric design of the shield itself; however, by increasing the distance between the nuclear engines and the main body of the spacecraft, the required shielding mass is lessened considerably. The mass can be reduced significantly per unit length, though any additional mass added by the structure to create this distance serves to offset those savings; thus the design of a lightweight structure is ideal. The challenges of designing the truss are bounded by several limiting factors including the loading conditions, the capabilities of the launch vehicle, and achieving the ideal truss length when factoring in the overall mass reduced. Determining the overall set of mass values for a truss of varying length is difficult since, to maintain an optimally designed truss, the geometry of the truss or its members must change. Thus the relation between truss mass and length for these loading scenarios is not linear; instead, it is determined by the truss design. In order to establish a mass versus length trend for various truss designs to compare with the mass saved from the shield versus length, optimization software was used to find optimal geometric properties that still met the design requirements at established lengths. By solving for optimal designs at various lengths, mass trends could be determined.
The initial design findings show a clear benefit to extending the engines as far from the main structure of the spacecraft as the launch vehicle's payload volume would allow when comparing mass savings versus the additional structure.

  11. NASA geodynamics program investigations summaries: A supplement to the NASA geodynamics program overview

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The development of a time series of global atmospheric motion and mass fields through April 1984 to compare with changes in length of day and polar motion was investigated. Earth rotation was studied and the following topics are discussed: (1) computation of atmospheric angular momentum through April 1984; (2) comparisons of psi sub values with variations in length of day obtained by several groups utilizing B.I.H., lunar laser ranging, VLBI, or Lageos measurements; (3) computation of atmospheric excitation of polar motion using daily fields of atmospheric winds and pressures for a short test period. Daily calculations may be extended over a longer period to examine the forcing of the annual and Chandler wobbles, in addition to higher frequency nutations.

  12. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP)-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their inherent parallelism to reduce the complexity of the computation.
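For contrast with the O(mn) molecular scheme, a classical exhaustive search over one common UAP formulation (every job assigned, every individual receiving at least one job) takes O(m^n) work even on tiny instances. The cost matrix below is an invented example, not data from the paper.

```python
from itertools import product

def solve_uap_bruteforce(costs):
    """Exhaustively solve a small unbalanced assignment problem.

    costs[j][i] is the cost of giving job j to individual i.
    Every job must be assigned; since m < n, individuals may take
    several jobs, but each must receive at least one.
    """
    n = len(costs)      # jobs
    m = len(costs[0])   # individuals
    best_cost, best_assignment = float("inf"), None
    for assignment in product(range(m), repeat=n):
        if len(set(assignment)) < m:   # some individual got no job
            continue
        total = sum(costs[j][assignment[j]] for j in range(n))
        if total < best_cost:
            best_cost, best_assignment = total, assignment
    return best_cost, best_assignment

costs = [[4, 2], [1, 5], [3, 3], [2, 6]]   # 4 jobs, 2 individuals
best_cost, best_assignment = solve_uap_bruteforce(costs)
```

The exponential loop over m^n candidate assignments is exactly the search space that the DNA encoding explores in parallel via strand hybridization.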

  13. Analysis of high injection pressure and ambient temperature on biodiesel spray characteristics using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Hashim, Akasha; Khalid, Amir; Jaat, Norrizam; Sapit, Azwan; Razali, Azahari; Nizam, Akmal

    2017-09-01

    The efficiency of combustion engines is highly affected by the formation of the air-fuel mixture prior to ignition and combustion. This research investigates the mixture formation and spray characteristics of biodiesel blends under varying high ambient and injection conditions using Computational Fluid Dynamics (CFD). Spray characteristics such as spray penetration length, spray angle, and fluid flow were observed under various operating conditions. Results show that increasing the injection pressure increases the spray penetration length for both biodiesel and diesel. Results also indicate a higher spray angle for biodiesel as the injection pressure increases. This study concludes that the spray characteristics of biodiesel blends are greatly affected by the injection and ambient conditions.

  14. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    DOE PAGES

    Vincenti, H.; Lobet, M.; Lehe, R.; ...

    2016-09-19

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers will tend to use many-core processors on each compute node, with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field-gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1–3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
Program summary Program Title: vec_deposition Program Files doi: http://dx.doi.org/10.17632/nh77fv9k8c.1 Licensing provisions: BSD 3-Clause Programming language: Fortran 90 External routines/libraries: OpenMP > 4.0 Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there has been no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the two following requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduced memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g., Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).
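The gather/scatter hazard that the paper's data structure is designed around can be seen even in a minimal deposition routine. The sketch below is an order-1 (cloud-in-cell) deposition on a 1D periodic grid in Python/NumPy, not the PICSAR Fortran implementation; `np.add.at` is used precisely because a plain fancy-indexed `+=` silently drops contributions when several particles scatter into the same cell.

```python
import numpy as np

def deposit_charge_cic(positions, weights, nx, dx):
    """1D cloud-in-cell (order-1) charge deposition on a periodic grid.

    Each particle's charge is shared linearly between its two nearest
    grid points. np.add.at performs an unbuffered scatter-add, so
    repeated indices (several particles in one cell) accumulate
    correctly -- the conflict that hinders naive SIMD vectorization.
    """
    rho = np.zeros(nx)
    x = positions / dx
    i = np.floor(x).astype(int)
    frac = x - i
    np.add.at(rho, i % nx, weights * (1.0 - frac))
    np.add.at(rho, (i + 1) % nx, weights * frac)
    return rho

rho = deposit_charge_cic(np.array([0.5, 0.5, 1.25]),
                         np.array([1.0, 1.0, 2.0]), nx=4, dx=1.0)
```

Note that total charge is conserved by construction, since each particle's two weights sum to its full charge.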

  15. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Lobet, M.; Lehe, R.

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers will tend to use many-core processors on each compute node, with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field-gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1–3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
Program summary Program Title: vec_deposition Program Files doi: http://dx.doi.org/10.17632/nh77fv9k8c.1 Licensing provisions: BSD 3-Clause Programming language: Fortran 90 External routines/libraries: OpenMP > 4.0 Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there has been no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the two following requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduced memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g., Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).

  16. Validation of CBCT for the computation of textural biomarkers

    NASA Astrophysics Data System (ADS)

    Paniagua, Beatriz; Ruellas, Antonio C.; Benavides, Erika; Marron, Steve; Wolford, Larry; Cevidanes, Lucia

    2015-03-01

    Osteoarthritis (OA) is associated with significant pain, and 42.6% of patients with TMJ disorders present with evidence of TMJ OA. However, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease. The subchondral bone in the TMJ is believed to play a major role in the progression of OA. We hypothesize that the textural imaging biomarkers computed in high-resolution cone-beam CT (hr-CBCT) and μCT scans are comparable. The purpose of this study is to test the feasibility of computing textural imaging biomarkers in vivo using hr-CBCT, compared to those computed in μCT scans as our gold standard. Specimens of condylar bones obtained from condylectomies were scanned using μCT and hr-CBCT. Nine different textural imaging biomarkers (four co-occurrence features and five run-length features) from each pair of μCT and hr-CBCT scans were computed and compared. Pearson correlation coefficients were computed to compare textural biomarker values of μCT and hr-CBCT. Four of the nine computed textural biomarkers showed a strong positive correlation between biomarkers computed in μCT and hr-CBCT. Higher correlations in Energy and Contrast, and in GLN (grey-level non-uniformity) and RLN (run-length non-uniformity), indicate quantitative texture features can be computed reliably in hr-CBCT, when compared with μCT. The textural imaging biomarkers computed in vivo in hr-CBCT captured the structure, patterns, contrast between neighboring regions, and uniformity of healthy and/or pathologic subchondral bone. The ability to quantify bone texture non-invasively now makes it possible to evaluate the progression of subchondral bone alterations in TMJ OA.
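The co-occurrence features named above (Energy, Contrast) are derived from a grey-level co-occurrence matrix (GLCM). As a minimal illustration of how such features are computed, the sketch below builds a GLCM for a single horizontal pixel offset on an invented 3x3 image; production texture pipelines average over several offsets and directions.

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Normalized grey-level co-occurrence matrix for the (0, 1)
    offset: each pixel is paired with its right-hand neighbor."""
    p = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        p[a, b] += 1
    return p / p.sum()

def energy(p):
    """Sum of squared GLCM entries (angular second moment)."""
    return float(np.sum(p ** 2))

def contrast(p):
    """Squared intensity-difference weighted sum over the GLCM."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
p = glcm_horizontal(img, levels=3)
```

High Energy indicates a few dominant grey-level transitions (uniform texture), while high Contrast indicates frequent transitions between dissimilar grey levels; these are the kinds of quantities compared between μCT and hr-CBCT above.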

  17. Validation of CBCT for the computation of textural biomarkers

    PubMed Central

    Paniagua, Beatriz; Ruellas, Antonio Carlos; Benavides, Erika; Marron, Steve; Woldford, Larry; Cevidanes, Lucia

    2015-01-01

    Osteoarthritis (OA) is associated with significant pain, and 42.6% of patients with TMJ disorders present with evidence of TMJ OA. However, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease. The subchondral bone in the TMJ is believed to play a major role in the progression of OA. We hypothesize that the textural imaging biomarkers computed in high-resolution cone-beam CT (hr-CBCT) and μCT scans are comparable. The purpose of this study is to test the feasibility of computing textural imaging biomarkers in vivo using hr-CBCT, compared to those computed in μCT scans as our gold standard. Specimens of condylar bones obtained from condylectomies were scanned using μCT and hr-CBCT. Nine different textural imaging biomarkers (four co-occurrence features and five run-length features) from each pair of μCT and hr-CBCT scans were computed and compared. Pearson correlation coefficients were computed to compare textural biomarker values of μCT and hr-CBCT. Four of the nine computed textural biomarkers showed a strong positive correlation between biomarkers computed in μCT and hr-CBCT. Higher correlations in Energy and Contrast, and in GLN (grey-level non-uniformity) and RLN (run-length non-uniformity), indicate quantitative texture features can be computed reliably in hr-CBCT, when compared with μCT. The textural imaging biomarkers computed in vivo in hr-CBCT captured the structure, patterns, contrast between neighboring regions, and uniformity of healthy and/or pathologic subchondral bone. The ability to quantify bone texture non-invasively now makes it possible to evaluate the progression of subchondral bone alterations in TMJ OA. PMID:26085710

  18. Validation of CBCT for the computation of textural biomarkers.

    PubMed

    Paniagua, Beatriz; Ruellas, Antonio Carlos; Benavides, Erika; Marron, Steve; Woldford, Larry; Cevidanes, Lucia

    2015-03-17

    Osteoarthritis (OA) is associated with significant pain, and 42.6% of patients with TMJ disorders present with evidence of TMJ OA. However, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease. The subchondral bone in the TMJ is believed to play a major role in the progression of OA. We hypothesize that the textural imaging biomarkers computed in high-resolution cone-beam CT (hr-CBCT) and μCT scans are comparable. The purpose of this study is to test the feasibility of computing textural imaging biomarkers in vivo using hr-CBCT, compared to those computed in μCT scans as our gold standard. Specimens of condylar bones obtained from condylectomies were scanned using μCT and hr-CBCT. Nine different textural imaging biomarkers (four co-occurrence features and five run-length features) from each pair of μCT and hr-CBCT scans were computed and compared. Pearson correlation coefficients were computed to compare textural biomarker values of μCT and hr-CBCT. Four of the nine computed textural biomarkers showed a strong positive correlation between biomarkers computed in μCT and hr-CBCT. Higher correlations in Energy and Contrast, and in GLN (grey-level non-uniformity) and RLN (run-length non-uniformity), indicate quantitative texture features can be computed reliably in hr-CBCT, when compared with μCT. The textural imaging biomarkers computed in vivo in hr-CBCT captured the structure, patterns, contrast between neighboring regions, and uniformity of healthy and/or pathologic subchondral bone. The ability to quantify bone texture non-invasively now makes it possible to evaluate the progression of subchondral bone alterations in TMJ OA.

  19. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with the lowest direct operating cost (DOC) for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio, and engine sea-level static horsepower; tail sizes, climb speed, and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi. typical mission. Generally, DOC increased with increasing speed and decreasing field length, but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  20. Thermally induced charge current through long molecules

    NASA Astrophysics Data System (ADS)

    Zimbovskaya, Natalya A.; Nitzan, Abraham

    2018-01-01

    In this work, we theoretically study steady state thermoelectric transport through a single-molecule junction with a long chain-like bridge. Electron transmission through the system is computed using a tight-binding model for the bridge. We analyze dependences of thermocurrent on the bridge length in unbiased and biased systems operating within and beyond the linear response regime. It is shown that the length-dependent thermocurrent is controlled by the lineshape of electron transmission in the interval corresponding to the HOMO/LUMO transport channel. Also, it is demonstrated that electron interactions with molecular vibrations may significantly affect the length-dependent thermocurrent.
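The length dependence described above can be illustrated with a minimal sketch: coherent transmission through a tight-binding chain coupled to wide-band leads, computed from the retarded Green's function. This is a generic Landauer/NEGF toy model, not the authors' calculation; the hopping, coupling, and energy values are illustrative assumptions.

```python
import numpy as np

def transmission(energy, n_sites, hopping=1.0, gamma=0.5):
    """Landauer transmission through an n-site tight-binding chain.

    Wide-band leads enter as constant imaginary self-energies
    -i*gamma/2 on the first and last sites; the transmission is
    T = Gamma_L * Gamma_R * |G_1N|^2 with G the retarded Green's
    function of the coupled chain.
    """
    h = np.zeros((n_sites, n_sites), dtype=complex)
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = -hopping
    h[0, 0] += -0.5j * gamma       # left lead self-energy
    h[-1, -1] += -0.5j * gamma     # right lead self-energy
    g = np.linalg.inv(energy * np.eye(n_sites) - h)
    return gamma * gamma * abs(g[0, -1]) ** 2

# Off-resonant transport (energy outside the band |E| < 2t)
# decays roughly exponentially with bridge length.
t_short, t_long = transmission(3.0, 4), transmission(3.0, 8)
```

Evaluating such a transmission function around the HOMO/LUMO channel is what sets the length-dependent thermocurrent in the linear-response (Landauer) picture.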

  1. Initial experience of using an iron-containing fiducial marker for radiotherapy of prostate cancer: Advantages in the visualization of markers in Computed Tomography and Magnetic Resonance Imaging

    NASA Astrophysics Data System (ADS)

    Tanaka, Osamu; Iida, Takayoshi; Komeda, Hisao; Tamaki, Masayoshi; Seike, Kensaku; Kato, Daiki; Yokoyama, Takamasa; Hirose, Shigeki; Kawaguchi, Daisuke

    2016-12-01

    Visualization of markers is critical for imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). However, the optimal size of the marker varies according to the imaging technique. While a large marker is more useful for visualization in MRI, it produces artifacts on CT and causes substantial pain on administration. In contrast, a small marker reduces the artifacts on CT but hampers MRI detection. Herein, we report a new iron-containing marker and compare its utility with that of non-iron-containing markers. Five patients underwent CT/MRI fusion-based intensity-modulated radiotherapy, and the markers were placed by urologists. A Gold Anchor™ (GA; diameter, 0.28 mm; length, 10 mm) was placed using a 22G needle on the right side of the prostate. A VISICOIL™ (VIS; diameter, 0.35 mm; length, 10 mm) was placed using a 19G needle on the left side. MRI was performed using T2*-weighted imaging. Three observers evaluated and scored the visual qualities of the acquired images. The mean visualization score was almost identical between the GA and VIS in radiography and cone-beam CT (Novalis Tx). The artifacts in planning CT were slightly larger with the GA than with the VIS. The visualization of the marker on MRI was superior with the GA than with the VIS. In conclusion, the visualization quality in radiography, cone-beam CT, and planning CT was roughly equal between the GA and VIS. However, the GA was more strongly visualized than the VIS on MRI due to its iron content.

  2. Advantageous new conic cannula for spine cement injection.

    PubMed

    González, Sergio Gómez; Vlad, María Daniela; López, José López; Aguado, Enrique Fernández

    2014-09-01

    Experimental study to characterize the influence of cannula geometry on both the pressure drop and the cement flow velocity established along the cannula. To investigate how the new experimental geometry of cannulas can affect the extravertebral injection pressure and the velocity profiles established along the cannula during the injection process. The vertebroplasty procedure is used to treat vertebral compression fractures. Vertebra infiltration is favored by the use of suitable: (1) syringes or injector devices; (2) polymer or ceramic bone cements; and (3) cannulas. However, the clinical use of ceramic bone cement has been limited due to press-filtering problems. Thus, new approaches concerning cannula geometry are needed to minimize the press-filtering of calcium phosphate-based bone cements and thereby broaden their possible applications. Straight, conic, and combined conic-straight new cannulas with different proximal and distal length and diameter ratios were drawn with computer-assisted design software. The new geometries were theoretically analyzed by: (1) the Hagen-Poiseuille law; and (2) computational fluid dynamics. Some experimental models were manufactured and tested for extrusion in order to confirm and further advance the theoretical results. The results confirm that the totally conic cannula model, having a proximal-to-distal diameter ratio equal to 2, requires the lowest injection pressure. Furthermore, its velocity profile showed no discontinuity at all along the cannula length, compared with other known combined proximal and distal straight cannulas, where a discontinuity was produced at the proximal-distal transition zone. The conclusion is that the conic cannulas: (a) further reduce the extravertebral pressure during the injection process; (b) show optimum fluid flow velocity profiles to minimize filter-pressing problems, especially when ceramic cements are used; and (c) can be easily manufactured.
In this sense, the new conic cannulas should favor the use of calcium phosphate bone cements in the spine.
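The Hagen-Poiseuille analysis mentioned above can be sketched for an idealized case. Assuming fully developed laminar flow of a Newtonian fluid (real bone cements are non-Newtonian, so this is only a first-order comparison), the pressure drop of a slowly tapering conic cannula follows from integrating dP/dx = 8 μ Q / (π r(x)^4) over the length; the viscosity, flow rate, and dimensions below are illustrative, not the study's values.

```python
import math

def dp_straight(mu, q, length, r):
    """Hagen-Poiseuille pressure drop of a straight cannula."""
    return 8.0 * mu * q * length / (math.pi * r**4)

def dp_conic(mu, q, length, r_in, r_out):
    """Pressure drop of a linearly tapering (conic) cannula, from
    integrating dP/dx = 8*mu*Q / (pi * r(x)^4) with linear r(x)."""
    return (8.0 * mu * q * length / (3.0 * math.pi)
            * (r_in**2 + r_in * r_out + r_out**2)
            / (r_in**3 * r_out**3))

# Conic cannula with proximal/distal diameter ratio 2 vs. a straight
# cannula of the distal radius (illustrative SI-unit values).
mu, q, L = 100.0, 1e-7, 0.10            # Pa*s, m^3/s, m
p_conic = dp_conic(mu, q, L, r_in=1e-3, r_out=0.5e-3)
p_straight = dp_straight(mu, q, L, r=0.5e-3)
```

With r_in = r_out the conic formula reduces to the straight one, and widening the proximal radius cuts the total pressure drop substantially, consistent with the lower injection pressure reported for the conic design.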

  3. Computer-Assisted Total Knee Arthroplasty: Is There a Difference Between Image-Based and Imageless Techniques?

    PubMed

    Tabatabaee, Reza M; Rasouli, Mohammad R; Maltenfort, Mitchell G; Fuino, Robert; Restrepo, Camilo; Oliashirazi, Ali

    2018-04-01

    Image-based and imageless computer-assisted total knee arthroplasty (CATKA) have become increasingly popular. This study aims to compare outcomes, including perioperative complications and transfusion rate, between CATKA and conventional total knee arthroplasty (TKA), as well as between image-based and imageless CATKA. Using International Classification of Diseases, Ninth Revision (ICD-9) codes, we queried the Nationwide Inpatient Sample database from 2005 to 2011 to identify unilateral conventional TKAs, image-based, and imageless CATKAs, as well as in-hospital complications and transfusion rates. A total of 787,809 conventional TKAs and 13,246 CATKAs (1055 image-based and 12,191 imageless) were identified. The rate of CATKA increased 23.13% per year from 2005 to 2011. Transfusion rates in conventional TKA and CATKA cases were 11.73% and 8.20%, respectively (P < .001), and 6.92% in image-based vs 8.27% in imageless (P = .023). Perioperative complications occurred in 4.50%, 3.47%, and 3.41% of cases after conventional TKAs, imageless, and image-based CATKAs, respectively. Using multivariate analysis, perioperative complications were significantly higher in conventional TKA compared to CATKA (odds ratio = 1.17, 95% confidence interval 1.03-1.33, P = .01). There was no significant difference between imageless and image-based CATKA (P = .34). Length of hospital stay and hospital charges were not significantly different between groups (P > .05). CATKA has low complication rates and may improve patient outcomes after TKA. CATKA, especially the image-based technique, may reduce in-hospital complications and transfusion without significantly increasing hospital charges or length of hospital stay. Large prospective studies with long follow-up are required to verify the potential benefits of CATKA. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. CORY: A Computer Program for Determining Dimension Stock Yields

    Treesearch

    Charles C Brunner; Marshall S. White; Fred M. Lamb; James G. Schroeder

    1989-01-01

    CORY is a computer program that calculates random-width, fixed-length cutting yields and best sawing sequences for either rip- or crosscut-first operations. It differs from other yield calculating programs by evaluating competing cuttings through conflict resolution models. Comparisons with Program YIELD resulted in a 9 percent greater cutting volume and a 98 percent...

  5. Heavy quark propagation in an AdS/CFT plasma

    DOE PAGES

    Casalderrey-Solana, J.

    2008-12-01

    We compute the momentum broadening of a heavy probe in N = 4 supersymmetric Yang-Mills theory in the limit of a large number of colors and strong coupling. The mean momentum transferred squared per unit length, κ, is expressed in terms of derivatives of a Wilson line. This definition is used to compute κ via the AdS/CFT correspondence.

  6. High channel density wavelength division multiplexer with defined diffracting means positioning

    DOEpatents

    Jannson, Tomasz P.; Jannson, Joanna L.; Yeung, Peter C.

    1990-01-01

    A wavelength division multiplexer/demultiplexer having optical path lengths between a fiber array and a Fourier transform lens, and between a dispersion grating and the lens equal to the focal length of the lens. The optical path lengths reduce losses due to angular acceptance mismatch in the multiplexer. Close orientation of the fiber array about the optical axis and the use of a holographic dispersion grating reduces other losses in the system. Multi-exposure holographic dispersion gratings enable the multiplexer/demultiplexer for extremely broad-band simultaneous transmission and reflection operation. Individual Bragg plane sets recorded in the grating are dedicated to and operate efficiently on discrete wavelength ranges.

  7. Reducing Length of Stay in Total Joint Arthroplasty Care.

    PubMed

    Walters, Megan; Chambers, Monique C; Sayeed, Zain; Anoushiravani, Afshin A; El-Othmani, Mouhanad M; Saleh, Khaled J

    2016-10-01

    As health care reforms continue to improve quality of care, significant emphasis will be placed on the evaluation of orthopedic patient outcomes. Total joint arthroplasty (TJA) has a proven track record of enhancing patient quality of life, and the procedures are easily replicable. The outcomes of these procedures serve as a measure of health care initiative success. Specifically, length of stay will be targeted as a marker of the quality of surgical care delivered to TJA patients. Within this review, we discuss preoperative and postoperative methods by which orthopedic surgeons may enhance TJA outcomes and effectively reduce length of stay. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Tapered laser rods as a means of minimizing the path length of trapped barrel mode rays

    DOEpatents

    Beach, Raymond J.; Honea, Eric C.; Payne, Stephen A.; Mercer, Ian; Perry, Michael D.

    2005-08-30

    By tapering the diameter of a flanged barrel laser rod over its length, the maximum trapped path length of a barrel mode can be dramatically reduced, thereby reducing the ability of the trapped spontaneous emission to negatively impact laser performance through amplified spontaneous emission (ASE). Laser rods with polished barrels and flanged end caps have found increasing application in diode-array end-pumped laser systems. The polished barrel of the rod serves to confine diode array pump light within the rod. In systems utilizing an end-pumping geometry and such polished-barrel laser rods, the pump light that is introduced into one or both ends of the laser rod is ducted down the length of the rod via the total internal reflections (TIRs) that occur when the light strikes the rod's barrel. A disadvantage of polished-barrel laser rods is that they are very susceptible to barrel mode paths that can trap spontaneous emission over long path lengths. This trapped spontaneous emission can then be amplified through stimulated emission, effectively depleting the stored energy available to the desired lasing mode and thus degrading the laser's performance; introducing a taper onto the laser rod effectively reduces this effect.

  9. The Coast Artillery Journal. Volume 66, Number 6, June 1927

    DTIC Science & Technology

    1927-06-01

    on the fuse setter it serves to determine the length of the fuse powder train. With the firing tables as a basis, data computers are constructed to...continuously by means of a wind computer and applied on the guns. In trial fire, therefore, wind effects are carefully eliminated before the corrections...are computed. (2) Change in the muzzle velocity. This result may be due to change in the quality, weight, or

  10. Cost Analysis of the Addition of Hyperacute Magnetic Resonance Imaging for Selection of Patients for Endovascular Stroke Therapy.

    PubMed

    John, Seby; Thompson, Nicolas R; Lesko, Terry; Papesh, Nancy; Obuchowski, Nancy; Tomic, Dan; Wisco, Dolora; Khawaja, Zeshaun; Uchino, Ken; Man, Shumei; Cheng-Ching, Esteban; Toth, Gabor; Masaryk, Thomas; Ruggieri, Paul; Modic, Michael; Hussain, Muhammad Shazam

    2017-10-01

    Patient selection is important to determine the best candidates for endovascular stroke therapy. In application of a hyperacute magnetic resonance imaging (MRI) protocol for patient selection, we have shown decreased utilization with improved outcomes. A cost analysis comparing the pre- and post-MRI protocol time periods was performed to determine whether the previous findings translated into cost opportunities. We retrospectively identified individuals considered for endovascular stroke therapy from January 2008 to August 2012 who were ≤8 h from stroke symptom onset. Patients prior to April 30, 2010 were selected based on the results of computed tomography/computed tomography angiography alone (pre-hyperacute), whereas patients after April 30, 2010 were selected based on the results of MRI (post-hyperacute MRI). Demographic, outcome, and financial information was collected. Log-transformed average daily direct costs were regressed on time period. The regression model included demographic and clinical covariates as potential confounders. Multiple imputation was used to account for missing data. We identified 267 patients in our database (88 patients in the pre-hyperacute MRI period, 179 in the hyperacute MRI protocol period). Patient length of stay was not significantly different in the hyperacute MRI protocol period compared to the pre-hyperacute MRI period (10.6 vs. 9.9 days, p = 0.42). The median of average daily direct costs was reduced by 24.5% (95% confidence interval 14.1-33.7%, p < 0.001). Use of the hyperacute MRI protocol translated into reduced costs, in addition to reduced utilization and better outcomes. MRI selection of patients is an effective strategy, both for patients and hospital systems.

  11. Decorrelation of the static and dynamic length scales in hard-sphere glass formers.

    PubMed

    Charbonneau, Patrick; Tarjus, Gilles

    2013-04-01

    We show that, in the equilibrium phase of glass-forming hard-sphere fluids in three dimensions, the static length scales tentatively associated with the dynamical slowdown and the dynamical length characterizing spatial heterogeneities in the dynamics unambiguously decorrelate. The former grow at a much slower rate than the latter when density increases. This observation is valid for the dynamical range that is accessible to computer simulations, which roughly corresponds to that accessible in colloidal experiments. We also find that, in this same range, no one-to-one correspondence between relaxation time and point-to-set correlation length exists. These results point to the coexistence of several relaxation mechanisms in the dynamically accessible regime of three-dimensional hard-sphere glass formers.

  12. Large eddy simulations of a transcritical round jet submitted to transverse acoustic modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Flesca, M.; CNES DLA, 52 Rue Jacques Hillairet, 75612 Paris Cedex; Schmitt, T.

    This article reports numerical computations of a turbulent round jet of transcritical fluid (low temperature nitrogen injected under high pressure conditions) surrounded by the same fluid at rest under supercritical conditions (high temperature and high pressure) and submitted to transverse acoustic modulations. The numerical framework relies on large eddy simulation in combination with a real-gas description of thermodynamics and transport properties. A stationary acoustic field is obtained by modulating the normal acoustic velocity at the lateral boundaries of the computational domain. This study specifically focuses on the interaction of the jet with the acoustic field to investigate how the round transcritical jet changes its shape and mixes with the surrounding fluid. Different modulation amplitudes and frequencies are used to sweep a range of conditions. When the acoustic field is established in the domain, the jet length is notably reduced and the jet is flattened in the spanwise direction. Two regimes of oscillation are identified: for low Strouhal numbers a large amplitude motion is observed, while for higher Strouhal numbers the jet oscillates with a small amplitude around the injector axis. The minimum length is obtained for a Strouhal number of 0.3 and the jet length increases with increasing Strouhal numbers after reaching this minimum value. The mechanism of spanwise deformation is shown to be linked with dynamical effects resulting from reduction of the pressure in the transverse direction in relation with increased velocities on the two sides of the jet. A propagative wave is then introduced in the domain leading to similar effects on the jet, except that a bending is also observed in the acoustic propagation direction. A kinematic model, combining hydrodynamic and acoustic contributions, is derived in a second stage to represent the motion of the jet centerline. This model captures details of the numerical simulations quite well. These various results can serve to interpret observations made on more complex flow configurations such as coaxial jets or jet flames formed by coaxial injectors.
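    The forcing conditions above are parameterized by the Strouhal number, St = f·D/U (modulation frequency times injector diameter over injection velocity). A minimal sketch, with hypothetical values of D and U chosen only for illustration:

```python
# Illustrative only: Strouhal number St = f * D / U for a modulated jet.
# The injector diameter D and bulk velocity U below are hypothetical,
# not values from the study.
def strouhal(f_hz, d_m, u_ms):
    """Return the Strouhal number for modulation frequency f_hz."""
    return f_hz * d_m / u_ms

D, U = 2e-3, 10.0  # hypothetical: 2 mm injector, 10 m/s injection velocity
for f in (500.0, 1500.0, 3000.0):
    print(f"f = {f:6.0f} Hz -> St = {strouhal(f, D, U):.2f}")
```

    With these assumed values, a 1500 Hz modulation corresponds to St = 0.3, the regime where the abstract reports the minimum jet length.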

  13. Quantitative evaluation of the relationship between dorsal wall length, sole thickness, and rotation of the distal phalanx in the bovine claw using computed tomography.

    PubMed

    Tsuka, T; Murahata, Y; Azuma, K; Osaki, T; Ito, N; Okamoto, Y; Imagawa, T

    2014-10-01

    Computed tomography (CT) was performed on 800 untrimmed claws (400 inner claws and 400 outer claws) of 200 pairs of bovine hindlimbs to investigate the relationships between dorsal wall length and sole thickness, and between dorsal wall length and the relative rotation angle of the distal phalanx to the sole surface (S-D angle). Sole thickness was 3.8 and 4.0 mm at the apex of the inner claws and outer claws, respectively, with dorsal wall lengths <70 mm. These sole thickness values were less than the critical limit of 5 mm, which is associated with a softer surface following thinning of the soles. A sole thickness of 5 mm at the apex was estimated to correlate with dorsal wall lengths of 72.1 and 72.7 mm for the inner and outer claws, respectively. Sole thickness was 6.1 and 6.4 mm at the apex of the inner and outer claws, respectively, with dorsal wall lengths of 75 mm. These sole thickness values were less than the recommended sole thickness of 7 mm based on the protective function of the soles. A sole thickness >7 mm at the apex was estimated to correlate with dorsal wall lengths of 79.8 and 78.4 mm for the inner and outer claws, respectively. The S-D angles were recorded as anteversions of 2.9° and 4.7° for the inner and outer claws, respectively, with a dorsal wall length of 75 mm. These values indicate that the distal phalanx is likely to have rotated naturally forward toward the sole surface. The distal phalanx rotated backward to the sole surface at 3.2° and 7.6° for inner claws with dorsal wall lengths of 90-99 and ≥100 mm, respectively; and at 3.5° for outer claws with a dorsal wall length ≥100 mm. Dorsal wall lengths of 85.7 and 97.2 mm were estimated to correlate with a parallel positional relationship of the distal phalanx to the sole surface in the inner and outer claws, respectively. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
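    The paired values quoted above suggest a roughly linear relation between dorsal wall length and sole thickness. A hedged sketch fitting a least-squares line through the three inner-claw pairs from the abstract (the study's own regression model is not reproduced here; the 77 mm query point is arbitrary):

```python
# Illustrative only: ordinary least-squares line through the inner-claw
# (dorsal wall length, sole thickness) pairs quoted in the abstract.
def fit_line(points):
    """Least-squares fit y = a*x + b through (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

inner = [(72.1, 5.0), (75.0, 6.1), (79.8, 7.0)]  # (wall mm, sole mm)
a, b = fit_line(inner)
print(f"estimated sole thickness at a 77 mm wall: {a * 77 + b:.1f} mm")
```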

  14. Development of Procedures for Computing Site Seismicity

    DTIC Science & Technology

    1993-02-01

    surface wave magnitude when in the range of 5 to 7.5. REFERENCES Ambraseys, N.N. (1970). "Some characteristic features of the Anatolian fault zone"...geology seismicity and environmental impact, Association of Engineering Geologists, Special Publication. Los Angeles, CA, University Publishers, 1973... [garbled table: recurrence intervals (yr) at a point on the fault and over the fault length, slip rate (cm/yr), fault length (km)]

  15. Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar (registered trademark)-Fiber-Reinforced Polymer-Matrix Composites

    DTIC Science & Technology

    2013-03-01

    of coarser-scale materials and structures containing Kevlar fibers (e.g., yarns, fabrics, plies, lamina, and laminates). Journal of Materials...Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar-Fiber-Reinforced Polymer-Matrix Composites M. Grujicic, B. Pandurangan, J.S...extensive set of molecular-level computational analyses regarding the role of various microstructural/morphological defects on the Kevlar fiber

  16. Wave envelope technique for multimode wave guide problems

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Sudharsanan, S. I.

    1986-01-01

    A fast method for solving wave guide problems is proposed. In particular, the guide is considered to be inhomogeneous, allowing propagation of waves of higher order modes. Such problems have been handled successfully for acoustic wave propagation with a single mode and finite length. This paper extends the concept to electromagnetic wave guides with several modes and infinite length. The method is described and results of computations are presented.

  17. Horizontal Directional Drilling-Length Detection Technology While Drilling Based on Bi-Electro-Magnetic Sensing.

    PubMed

    Wang, Yudan; Wen, Guojun; Chen, Han

    2017-04-27

    The drilling length is an important parameter in the process of horizontal directional drilling (HDD) exploration and recovery, but there has been a lack of accurate, automatically obtained statistics regarding this parameter. Herein, a technique for real-time HDD length detection and a management system based on the electromagnetic detection method with a microprocessor and two magnetoresistive sensors employing the software LabVIEW are proposed. The basic principle is to detect the change in the magnetic-field strength near a current coil while the drill stem and drill-stem joint successively pass through the current coil forward or backward. The detection system consists of a hardware subsystem and a software subsystem. The hardware subsystem employs a single-chip microprocessor as the main controller. A current coil is installed in front of the clamping unit, and two magnetoresistive sensors are installed on the sides of the coil symmetrically and perpendicular to the direction of movement of the drill pipe. Their responses are used to judge whether the drill-stem joint is passing through the clamping unit; then, the order of their responses is used to judge the movement direction. The software subsystem is composed of visual software running on the host computer and control software running on the slave microprocessor. The host-computer software processes, displays, and saves the drilling-length data, whereas the slave microprocessor software operates the hardware system. A combined test demonstrated the feasibility of the entire drilling-length detection system.
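    The direction-from-response-order logic can be illustrated with a minimal sketch (not the authors' firmware): each joint pass is assumed to yield timestamped responses from sensors A and B, with A on the entry side, and the net drilled length is the signed count of passes times a hypothetical stem length.

```python
# Minimal sketch of the detection logic: infer drill-pipe direction from
# the order in which two magnetoresistive sensors, "A" and "B", respond
# as a drill-stem joint passes.  Sensor labels and the 3 m stem length
# are hypothetical.
def joint_direction(events):
    """events: (timestamp_s, sensor_id) tuples recorded for one joint pass.
    Returns 'forward' if A fires before B, 'backward' if B fires first,
    or None if only one sensor responded."""
    times = {}
    for t, sid in events:
        times.setdefault(sid, t)  # keep each sensor's first response
    if "A" not in times or "B" not in times:
        return None
    return "forward" if times["A"] < times["B"] else "backward"

def drilled_length(joint_passes, stem_length_m):
    """Net drilled length: each forward pass adds one stem, backward subtracts."""
    net = sum(1 if joint_direction(ev) == "forward" else -1
              for ev in joint_passes if joint_direction(ev) is not None)
    return net * stem_length_m

passes = [
    [(0.00, "A"), (0.12, "B")],   # joint moving forward
    [(5.30, "A"), (5.41, "B")],   # forward again
    [(9.80, "B"), (9.95, "A")],   # pulled back once
]
print(drilled_length(passes, stem_length_m=3.0))  # net: +2 - 1 stems -> 3.0
```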

  18. Horizontal Directional Drilling-Length Detection Technology While Drilling Based on Bi-Electro-Magnetic Sensing

    PubMed Central

    Wang, Yudan; Wen, Guojun; Chen, Han

    2017-01-01

    The drilling length is an important parameter in the process of horizontal directional drilling (HDD) exploration and recovery, but there has been a lack of accurate, automatically obtained statistics regarding this parameter. Herein, a technique for real-time HDD length detection and a management system based on the electromagnetic detection method with a microprocessor and two magnetoresistive sensors employing the software LabVIEW are proposed. The basic principle is to detect the change in the magnetic-field strength near a current coil while the drill stem and drill-stem joint successively pass through the current coil forward or backward. The detection system consists of a hardware subsystem and a software subsystem. The hardware subsystem employs a single-chip microprocessor as the main controller. A current coil is installed in front of the clamping unit, and two magnetoresistive sensors are installed on the sides of the coil symmetrically and perpendicular to the direction of movement of the drill pipe. Their responses are used to judge whether the drill-stem joint is passing through the clamping unit; then, the order of their responses is used to judge the movement direction. The software subsystem is composed of visual software running on the host computer and control software running on the slave microprocessor. The host-computer software processes, displays, and saves the drilling-length data, whereas the slave microprocessor software operates the hardware system. A combined test demonstrated the feasibility of the entire drilling-length detection system. PMID:28448445

  19. Do Branch Lengths Help to Locate a Tree in a Phylogenetic Network?

    PubMed

    Gambette, Philippe; van Iersel, Leo; Kelk, Steven; Pardi, Fabio; Scornavacca, Celine

    2016-09-01

    Phylogenetic networks are increasingly used in evolutionary biology to represent the history of species that have undergone reticulate events such as horizontal gene transfer, hybrid speciation and recombination. One of the most fundamental questions that arise in this context is whether the evolution of a gene with one copy in all species can be explained by a given network. In mathematical terms, this is often translated into the following question: is a given phylogenetic tree contained in a given phylogenetic network? Recently, this tree containment problem has been widely investigated from a computational perspective, but most studies have focused only on the topology of the phylogenies, ignoring a piece of information that, in the case of phylogenetic trees, is routinely inferred by evolutionary analyses: branch lengths. These measure the amount of change (e.g., nucleotide substitutions) that has occurred along each branch of the phylogeny. Here, we study a number of versions of the tree containment problem that explicitly account for branch lengths. We show that, although length information has the potential to locate a tree within a network more precisely, the problem is computationally hard in its most general form. On a positive note, for a number of special cases of biological relevance, we provide algorithms that solve this problem efficiently. This includes the case of networks of limited complexity, for which it is possible to recover, among the trees contained in the network that share the input tree's topology, the one closest to it in terms of branch lengths.
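    The core question can be made concrete with a brute-force sketch on a tiny network (this is not the paper's algorithm, which handles the problem far more efficiently for its special cases). Each reticulation node chooses one of its two parent edges; each choice yields a displayed tree, and equal pairwise leaf-distance matrices imply equal edge-weighted trees when branch lengths are positive. The network, edge lengths, and node names below are invented for illustration.

```python
from itertools import product

# network: child -> list of (parent, edge_length); the root "R" has no entry.
# Reticulation nodes are exactly those with two parent edges.
net = {
    "X": [("R", 1.0)], "Y": [("R", 1.0)],
    "H": [("X", 1.0), ("Y", 2.0)],          # reticulation: two parents
    "a": [("X", 2.0)], "b": [("Y", 2.0)], "c": [("H", 1.0)],
}
leaves = ["a", "b", "c"]

def leaf_distances(parent_of, lengths, leaves):
    """Pairwise leaf-to-leaf path lengths in an edge-weighted rooted tree."""
    def path_to_root(v):
        acc, d = {v: 0.0}, 0.0
        while v in parent_of:
            d += lengths[v]
            v = parent_of[v]
            acc[v] = d
        return acc
    up = {leaf: path_to_root(leaf) for leaf in leaves}
    # distance through the lowest common ancestor minimizes the summed depths
    return {(u, v): min(up[u][w] + up[v][w] for w in up[u] if w in up[v])
            for i, u in enumerate(leaves) for v in leaves[i + 1:]}

def displays_with_lengths(network, tree_dist, leaves):
    """Brute force over reticulation switchings: does some displayed tree
    reproduce the input tree's pairwise leaf distances?"""
    retics = [v for v, ps in network.items() if len(ps) > 1]
    for choice in product(*(network[r] for r in retics)):
        parent_of, lengths = {}, {}
        for v, ps in network.items():
            p, ln = choice[retics.index(v)] if v in retics else ps[0]
            parent_of[v], lengths[v] = p, ln
        if leaf_distances(parent_of, lengths, leaves) == tree_dist:
            return True
    return False

# Input tree: the tree displayed when H keeps its X parent.
tree_dist = {("a", "b"): 6.0, ("a", "c"): 4.0, ("b", "c"): 6.0}
print(displays_with_lengths(net, tree_dist, leaves))  # True
```

    Enumerating all 2^k switchings of k reticulations is exactly the exponential blow-up that motivates the paper's polynomial algorithms for networks of limited complexity.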

  20. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated by visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and computing the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion technique used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
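    The idea of fitting a single resolution-length parameter to pairs of random synthetic models and their inverse solutions can be sketched as follows. Everything here is a stand-in: a toy moving-average forward operator, a damped least-squares solver, and a Gaussian whose width plays the role of the resolution length; none of it reproduces the cited paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80

# Toy forward operator standing in for the real physics: each datum is a
# 9-point moving average of the 1D model.  The method is operator-agnostic.
G = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 4), min(n, i + 5)
    G[i, lo:hi] = 1.0 / (hi - lo)

def invert(d, damp=0.05):
    """Damped least-squares inversion, playing the role of the real solver."""
    return np.linalg.solve(G.T @ G + damp * np.eye(n), G.T @ d)

def gauss_smooth(m, width):
    """Convolve a model with a normalized Gaussian of the given width."""
    x = np.arange(-30, 31)
    k = np.exp(-0.5 * (x / width) ** 2)
    return np.convolve(m, k / k.sum(), mode="same")

# Limited pairs of random synthetic models and their inverse solutions.
pairs = [(m, invert(G @ m)) for m in rng.standard_normal((5, n))]

# One-parameter fit: the resolution length is the Gaussian width that best
# maps each true model onto its recovered solution.
widths = np.linspace(0.5, 10.0, 40)
misfit = [sum(float(np.sum((sol - gauss_smooth(m, w)) ** 2)) for m, sol in pairs)
          for w in widths]
w_est = widths[int(np.argmin(misfit))]
print(f"estimated resolution length ~ {w_est:.1f} cells")
```

    Only the forward operator and the inversion routine of the real problem enter the procedure, which mirrors the abstract's point that the approach is independent of the inversion technique used.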
